Expert Reviewers’ Restraint from Extremes and Its Impact on Service Providers 

31 Dec 2019
Research

Marketing

Peter Nguyen, Xin (Shane) Wang, Xi Li, June Cotte

Journal of Consumer Research, forthcoming

For nearly 9 in 10 consumers, an online review is as important as a personal recommendation. People rely on Amazon reviews to choose the right products, on Yelp reviews to find the best restaurants, and on IMDb reviews to pick their next movie. Most online reviews are written by consumers like us. But how do reviewers differ from one another, and how are we influenced by different kinds of reviewers?

In a recent study published in the Journal of Consumer Research, Xi Li, Assistant Professor in the Department of Marketing, and his coauthors Peter Nguyen, Shane Wang, and June Cotte investigate this question. Using more than one million reviews from top reviewing websites such as Qunar, TripAdvisor, and Yelp, the authors study the differences between novices (those who have written few reviews) and experts (those who have written many). They find that, contrary to conventional wisdom, reviewing experts as a whole have less impact than novices on the aggregate valence metric (e.g., a product's average star rating), which is known to affect page rank and consumer consideration. In other words, we are more likely to be influenced by novices. The reason is that novices give more extreme ratings (1-star or 5-star) while experts' ratings cluster nearer the middle, and an extreme rating moves an average far more than a moderate one does.

But why do novices give more extreme ratings? The authors find that expert reviewers typically consider more attributes (e.g., price, environment, location, and service) than novices when evaluating a product or service. Because an overall rating formed by averaging evaluations of many attributes is pulled toward the middle, a regression-toward-the-mean effect, experts are less likely to arrive at an extreme rating. Consequently, for service providers that deliver generally mediocre experiences, reviewing experts assign significantly higher ratings than novices, while for providers that deliver excellent experiences, experts assign significantly lower ratings.
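
This averaging effect is easy to see in a toy simulation. The sketch below (in Python; the uniform 1-to-5 attribute scores, the rounding rule, and the one-versus-four-attribute split are illustrative assumptions, not the authors' model) compares how often a rounded average of attribute scores lands on 1 or 5 stars:

    import random

    def share_extreme(num_attributes, trials=100_000, seed=42):
        """Fraction of ratings landing on 1 or 5 stars when the overall
        rating is the rounded average of i.i.d. attribute scores (1-5)."""
        rng = random.Random(seed)
        extreme = 0
        for _ in range(trials):
            scores = [rng.randint(1, 5) for _ in range(num_attributes)]
            if round(sum(scores) / num_attributes) in (1, 5):
                extreme += 1
        return extreme / trials

    print(f"1 attribute (novice-like):  {share_extreme(1):.1%} extreme")
    print(f"4 attributes (expert-like): {share_extreme(4):.1%} extreme")

Under these assumptions, roughly 40% of single-attribute ratings are extreme, versus under 2% of four-attribute averages.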

“This study has important business implications,” said Xi Li. “We recommend that review platforms adopt different rating scales for their expert and novice users (using a more granular scale for experts) and present separate aggregate valence metrics for ratings by these two groups. One can see this approach on platforms such as Rotten Tomatoes, where critics evaluate on a 10-point scale, the audience evaluates on a 5-point scale, and the aggregate scores for critics and the audience are reported separately.”
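
As a minimal sketch of that recommendation (the ratings below are hypothetical), a platform could aggregate the two groups separately rather than pooling them:

    from statistics import mean

    # Hypothetical ratings: experts on a more granular 10-point scale,
    # novices on a standard 5-point scale, aggregated separately.
    expert_ratings = [7, 8, 6, 9, 7]
    novice_ratings = [5, 1, 5, 4, 5]

    print(f"Expert aggregate: {mean(expert_ratings):.1f}/10")
    print(f"Novice aggregate: {mean(novice_ratings):.1f}/5")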