This spring, while researching hotels in New York City, I got lured into the reviews on travel sites. I read hundreds — one after another, as if the next one would hold my answer, would make my decision risk free.
Of course, even as I read them, I knew that a large chunk of what I was reading was most likely fake — officially known as “deceptive opinion spam,” fictitious opinions that have been deliberately written to sound authentic.
With sites like Fiverr and Craigslist easily connecting “reviewers” and companies, the race to get the most positive online reviews continues to explode.
An Algorithm to the Rescue?
Fittingly, a team of researchers from Cornell created a computer algorithm for detecting fake reviews.
According to a recent New York Times article, the team was “instantly approached by a dozen companies, including Amazon, Hilton, TripAdvisor and several specialist travel sites, all of which have a strong interest in limiting the spread of bogus reviews.”
The algorithm, which worked about 90 percent of the time, is designed to distinguish fake reviews from real ones by noting linguistic differences: fake reviews tend to be narrative rather than descriptive, and they use first-person words like “I” and “me” more often.
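To make that concrete, here is a toy sketch of one of those signals — first-person pronoun frequency. This is not the Cornell team’s actual classifier (which was trained on many features); it is just a hypothetical illustration of how one such linguistic cue could be measured.

```python
import re

# Words the article flags as overused in fake reviews ("I", "me"),
# padded out with related first-person forms for this sketch.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(review: str) -> float:
    """Return first-person pronouns per 100 words of the review."""
    words = re.findall(r"[a-z']+", review.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON)
    return 100.0 * hits / len(words)

# A narrative, first-person review vs. a descriptive one.
suspect = "I loved my stay! My husband and I will return. Trust me."
plain = "The room was clean, the lobby quiet, and the staff efficient."

print(first_person_rate(suspect) > first_person_rate(plain))  # True
```

A real detector would combine many such features and learn their weights from labeled examples; a single cue like this would misfire constantly on its own.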
Whether or not algorithms can eventually save us from fake reviews and opinion spam, in the meantime the situation worsens. So what is a user supposed to do?
Do we discredit all the glowing five-star ratings?
We can’t believe the bashing reviews either — competitors and spammers pump those into the mix as well.
Do we just try to sift through the obvious spam reviews and hope some of the middle ground reviews are genuine?
Or should we just ignore the reviews on sites like Amazon, TripAdvisor, Citysearch, and Yelp completely?
How much weight do you give user-generated online reviews? Do bad reviews keep you away? Do five-star reviews win you over? Or do you assume that a significant portion of what you are reading is bogus?
photo credit: Heather Ainsworth for The New York Times