How TripAdvisor is fighting the war against fake reviews, sock-puppeting and review-bombing
The senior director of trust & safety at TripAdvisor talks about how the platform handles reviews
- Published 21.10.19, 12:42 AM
- Updated 21.10.19, 12:42 AM
- 3 mins read
For a few hundred million people, the ultimate guestbook is TripAdvisor, where people continuously record their holiday highs and lows for the benefit of other holidaygoers and hotel proprietors. From time to time, the issue of fake reviews on the website has cropped up, but TripAdvisor says the company has never failed to tackle such situations head-on. This is something Greek geographer Pausanias — some credit him with authoring the first travel guide, Description of Greece, sometime in the second century AD — wouldn’t have faced.
t2 chatted with Becky Foley, senior director of trust & safety at TripAdvisor, about how the platform handles reviews.
How serious is the issue of fake reviews on travel sites?
We can only speak for our platform and as per the data analysed in this report (the recent TripAdvisor Transparency Report), 2.1 per cent of all review submissions in 2018 were identified as fake — but most never made it onto the platform.
However, while we are winning the fight against fake reviews on TripAdvisor, we can only protect our corner of the Internet. As long as other review platforms aren’t taking aggressive action, then fraudsters will continue to exploit and extort small businesses for cash. It is time other platforms, like Google and Facebook, stepped up to the plate to join us in tackling this problem head on.
What technology has TripAdvisor deployed to tackle fake reviews? How much of it is a mix of human intervention and AI?
The techniques and technology we use to catch fraud are constantly evolving to stay one step ahead of fraudsters. We have adapted many of the techniques that credit card and banking industries use to detect fraud, including network forensics and machine learning, and applied those same principles to catch fake reviews. We have also benefited from an ever-increasing volume of review data which we can use to train our review analysis system on the patterns of user behaviour it needs to watch out for.
Sixty-six million reviews were submitted to TripAdvisor in 2018 alone. Assessing reviews on such a scale is a huge undertaking. That is why, when a review is submitted, it first goes through our review analysis system. That system can reject a review outright, allow it to be posted onto the site, or it can flag the review for inspection by our human moderation team. By using advanced technology to filter reviews this way, we can ensure that our human moderation team is focused only on potentially problematic reviews: 2.7 million reviews were assessed by our human moderation team in 2018. This approach avoids a backlog of unpublished reviews while still achieving very high levels of detection and accuracy.
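Foley does not disclose how TripAdvisor's system actually works, but the three-way triage she describes — auto-reject, auto-publish, or escalate to a human moderator — can be sketched roughly as follows. Everything here is illustrative: the signals, weights and thresholds are invented for the example, not TripAdvisor's real model.

```python
# Illustrative sketch only: a three-way review triage loosely modelled on the
# flow described in the interview. All feature names, weights and thresholds
# are hypothetical; a production system would use a trained model over
# hundreds of signals.

def fraud_score(review: dict) -> float:
    """Toy fraud score in [0, 1] built from a few hand-picked signals."""
    score = 0.0
    if review.get("account_age_days", 0) < 1:
        score += 0.3                                 # brand-new account
    if review.get("reviews_from_same_ip_24h", 0) > 5:
        score += 0.4                                 # burst from one IP
    if not review.get("reviewer_location_plausible", True):
        score += 0.2                                 # geolocation mismatch
    return min(score, 1.0)

def triage(review: dict, reject_at: float = 0.8, flag_at: float = 0.4) -> str:
    """Return 'reject', 'flag' (send to human moderation) or 'publish'."""
    score = fraud_score(review)
    if score >= reject_at:
        return "reject"
    if score >= flag_at:
        return "flag"
    return "publish"
```

The point of the two thresholds is the one Foley makes: the bulk of reviews are cleared or rejected automatically, and only the uncertain middle band reaches the human team — which is how 66 million submissions shrink to 2.7 million human assessments.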
There have been allegations (from the consumer group Which?) against TripAdvisor that it had artificially boosted hotel ratings. How did you tackle the criticism?
The findings presented by Which? were extremely misleading and ignored too many of the facts. Their claims about fake reviews on TripAdvisor were based on a flawed understanding of fake review patterns that was reliant on too many assumptions, and too little data.
The fraud detection techniques we use are far more sophisticated, and make TripAdvisor the leader in our industry when it comes to fake review detection. We analyse hundreds of data-points about each review — most of which only we have access to — and we combine that data with a wealth of knowledge and understanding of review patterns that our team of experts has gained from tracking hundreds of millions of reviews over a near 20-year period. This includes an ability to track and analyse first-time reviews in far more detail and with far more rigour than Which?’s team was able to do.
That is why we can confidently say that Which?’s methods are not reliable at catching fake reviews. It is simply far too simplistic to assume all first-time reviewers are suspicious. Every genuine reviewer in the world is at some point a first-time reviewer. Accurate fraud detection requires analysis of a wide range of data-points, such as IP information, location data or details about the device an account was using when submitting a review. This crucial data is missing from Which?’s analysis.
Can you share some instances where TripAdvisor has been able to spot fake review companies and take them down?
Since 2015, TripAdvisor has stopped the activity of more than 100 websites that were caught trying to sell reviews, including one individual who was sentenced to nine months in prison by a criminal court of Lecce in Italy last year. While that example was the first of its kind that we are aware of, we hope it will spur on law enforcement authorities in other markets to take action against paid review companies based within their jurisdictions.
What are some of the points users should be careful about while going through the reviews? How do they know if a review is fake?
Most of the signals associated with fake reviews that would be visible to the naked eye are already tracked by our review analysis system, so the chances are that anything suspicious would already have been blocked by our system before it ever made it onto the site. Nevertheless, if a user does see something suspicious about a review, it would be best to report it to TripAdvisor immediately using our free reporting tools. More than two-fifths (43 per cent) of the reviews that were reported to us in 2018 were removed for violating our guidelines, after being processed by our review moderation team.