The bad news: Online user reviews don’t really match up with performance reviews, says behavioral scientist Bart de Langhe. But that means there’s also good news: We can stop obsessing over them.
This post is part of TED’s “How to Be a Better Human” series, each of which contains a piece of helpful advice from people in the TED community; browse through all the posts here.
Online shopping has brought tremendous convenience — but it’s also brought us a staggering number of options. Burdened as we can be with too many choices, it’s easy to feel like online reviews and ratings from other consumers can provide us with a crowdsourced pool of good information about the product we’re considering.
But is it the most reliable information?
Bart de Langhe, a behavioral scientist and marketing professor at ESADE in Barcelona, Spain, was led to ask this question after he went shopping in a store for a car seat for his newborn son. He faced a dilemma: Should he pay $300 for a car seat from a well-known brand that was highly recommended by the store’s salesperson, or $50 for a car seat from an unknown brand? Like many of us, he found a quiet corner in the store to take out his phone and read through online reviews. Since they were largely positive for the $300 car seat, he bought it.
Later, he wondered: Do user reviews on the car seats line up with the kind of objective tests that independent product testing organizations like Consumer Reports do? To his surprise and dismay, the answer was no. According to Consumer Reports, the $300 car seat received a significantly lower score in crash protection and ease of use than the $50 car seat did.
De Langhe collaborated with colleagues at the University of Colorado Boulder to run a large-scale analysis comparing online reviews with performance reviews. They did this for 1,272 products that could be objectively assessed, spanning 120 categories — including car seats, bike helmets, blood pressure monitors, headphones, sunscreen and smoke alarms. In a study published in the Journal of Consumer Research, they found that while a correlation existed between products that were positively reviewed online and those that performed well, it was an extremely weak one.
As he explained in Science Daily, “The likelihood that an item with a higher user rating performs objectively better than an item with a lower user rating is only 57 percent. A correspondence of 50 percent would be random, so user ratings provide very little insight about objective product performance.”
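That 57 percent figure is a concordance probability: pick any two products, and ask how often the one with the higher user rating is also the one that performs better objectively. A minimal sketch of that calculation, using made-up ratings and scores purely for illustration (not data from the study):

```python
import itertools

# Hypothetical data for illustration only: six products, each with a
# user rating (stars) and an objective performance score (0-100).
user_ratings = [4.8, 4.5, 4.2, 3.9, 3.6, 3.1]
performance = [62, 71, 55, 80, 45, 66]

# Consider every pair of products with different user ratings.
pairs = [
    (i, j)
    for i, j in itertools.combinations(range(len(user_ratings)), 2)
    if user_ratings[i] != user_ratings[j]
]

# A pair is "concordant" when the higher-rated product also has the
# higher objective score; 50% concordance would be pure chance.
concordant = sum(
    (user_ratings[i] > user_ratings[j]) == (performance[i] > performance[j])
    for i, j in pairs
)

print(f"Concordance: {concordant}/{len(pairs)} = {concordant / len(pairs):.0%}")
# → Concordance: 8/15 = 53%
```

With these invented numbers the higher-rated product wins only 8 of 15 pairwise comparisons — barely better than a coin flip, which is the kind of weak link the study reports.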
Instead, “there are many products that get high ratings but perform poorly, and there are many products that get low ratings but perform very well,” he says in a TEDxESADE Talk. Why does this happen? The existence of fake reviews is one reason. What’s more, people’s reviews are swayed by factors such as brand reputation, packaging and price (even though they may not realize it), and only a small subset of consumers — the ones holding the most extreme positive and negative opinions — tend to leave reviews. The latter causes the proliferation of 1- and 5-star reviews that we often see on products, while a truly random sampling of consumer reviews would likely generate more 3-star responses.
De Langhe’s conclusion: “I recommend you rely less on the recommendations of other consumers. You should realize that the ratings out there come from a small and biased subset of imperfect people who evaluate products in imperfect conditions.”
Does this mean we should stop reading reviews altogether? No. But we can release ourselves from agonizing over whether to buy the product that got 3 ½ stars or the one that got 4 stars, and from feeling like we need to read through every review before we make a significant purchase.
Watch his TEDxESADE Talk now: