Customer reviews have been a core part of why customers love shopping in Amazon’s stores ever since the company opened in 1995. Amazon makes sure that it’s easy for customers to leave honest reviews to help inform the purchase decisions of millions of other customers around the world. At the same time, the company makes it hard for bad actors to take advantage of Amazon’s trusted shopping experience.
So, what happens when a customer submits a review? Before a review is published online, Amazon uses Artificial Intelligence (AI) to analyze it for known indicators that the review is fake. The vast majority of reviews pass Amazon’s high bar for authenticity and get posted right away. If potential review abuse is detected, however, the company can take several paths. If Amazon is confident the review is fake, it moves quickly to block or remove the review and takes further action, such as revoking a customer’s review privileges, blocking bad-actor accounts, and even pursuing litigation against the bad actors. If a review is suspicious but additional evidence is needed, Amazon’s expert investigators, who are specially trained to identify abusive behavior, examine other signals before taking action. In fact, in 2022, Amazon observed and proactively blocked more than 200 million suspected fake reviews in its stores worldwide.
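In code, that submission flow can be pictured as a small triage step. The sketch below is purely illustrative: the threshold values, the risk score, and the outcome labels are assumptions made for readability, not details of Amazon’s actual system.

```python
# Hypothetical thresholds; Amazon's real scoring and cutoffs are not public.
BLOCK_THRESHOLD = 0.95        # confident the review is fake
INVESTIGATE_THRESHOLD = 0.60  # suspicious, but more evidence is needed


def triage_review(risk_score: float) -> str:
    """Decide what happens to a newly submitted review.

    `risk_score` stands in for the output of an abuse-detection model,
    where 0.0 means clearly authentic and 1.0 means clearly fake.
    """
    if risk_score >= BLOCK_THRESHOLD:
        # High confidence of abuse: block or remove the review and take
        # further action against the account (revoked review privileges,
        # blocked bad-actor accounts, possible litigation).
        return "block_and_enforce"
    if risk_score >= INVESTIGATE_THRESHOLD:
        # Suspicious but not conclusive: hold for expert investigators,
        # who weigh additional signals before acting.
        return "hold_for_investigation"
    # The vast majority of reviews clear the bar and are posted right away.
    return "publish"


print(triage_review(0.10))  # -> "publish"
print(triage_review(0.75))  # -> "hold_for_investigation"
```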
“Fake reviews intentionally mislead customers by providing information that is not impartial, authentic, or intended for that product or service,” says Josh Meek, Senior Data Science Manager on Amazon’s Fraud Abuse and Prevention team. “Not only do millions of customers count on the authenticity of reviews in Amazon’s store when deciding what to buy, but millions of brands and businesses whose products are sold in our stores also count on us to stop them from ever reaching customers. We work hard to responsibly monitor and enforce our policies to ensure reviews reflect the views of real customers, and protect honest sellers who rely on us to get it right.”
Among other measures, Amazon uses the latest advancements in AI to stop hundreds of millions of suspected fake reviews, manipulated ratings, fake customer accounts, and other abuse before customers ever see them. Machine Learning (ML) models analyze a multitude of proprietary data, including whether the seller has invested in ads (which may be driving additional reviews), customer-submitted reports of abuse, risky behavioral patterns, review history, and more. Large Language Models (LLMs) are used alongside Natural Language Processing techniques to analyze anomalies in this data that might indicate a review is fake or incentivized, say with a gift card, free product, or some other form of reimbursement. Amazon also uses Deep Graph Neural Networks (GNNs) to analyze and understand complex relationships and risk patterns, helping it detect and remove groups of bad actors.
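To give a rough sense of how such signals might feed a model, the sketch below assembles a handful of made-up features (the proprietary signals Amazon actually uses are not public) and trains a generic gradient-boosted classifier on synthetic data. It is a toy stand-in under those assumptions, not Amazon’s implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative feature names only; the real proprietary signals are not public.
# Values are assumed to be normalized to the range [0, 1].
FEATURES = [
    "seller_ad_spend",            # advertising can explain a burst of genuine reviews
    "abuse_reports",              # customer-submitted reports of abuse
    "reviewer_account_age",       # part of the reviewer's history
    "review_velocity",            # unusually high review volume in a short window
    "incentive_language_score",   # NLP/LLM-derived score for incentivized wording
]

# Tiny synthetic training set so the sketch runs end to end; real models are
# trained on large volumes of labeled, proprietary data.
rng = np.random.default_rng(0)
X_train = rng.random((200, len(FEATURES)))
y_train = (X_train[:, 1] + X_train[:, 4] > 1.2).astype(int)  # toy labels
model = GradientBoostingClassifier().fit(X_train, y_train)


def risk_score(signals: dict) -> float:
    """Estimated probability that a review is fake, given its signals."""
    x = np.array([[signals[name] for name in FEATURES]])
    return float(model.predict_proba(x)[0, 1])


print(risk_score({
    "seller_ad_spend": 0.1,
    "abuse_reports": 0.9,
    "reviewer_account_age": 0.05,
    "review_velocity": 0.8,
    "incentive_language_score": 0.95,
}))
```

A deeper, graph-based step along the lines of the GNNs mentioned above would additionally score how reviewers, accounts, and products connect to one another, which a flat feature vector like this does not capture.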
“The difference between an authentic and fake review is not always obvious on the surface,” Meek said. “For example, a product might accumulate reviews quickly because a seller invested in advertising or is offering a great product at the right price. Or a customer may think a review is fake because it includes poor grammar. Those are not always the best indicators.”
This is where some of Amazon’s critics get fake review detection wrong: without access to the data signals that indicate patterns of abuse, they have to make big assumptions. The combination of advanced technology and data helps Amazon identify fake reviews more accurately, going beyond surface-level indicators of abuse to identify deeper signals of bad-actor activity.
“Maintaining a trustworthy shopping experience is our top priority,” said Rebecca Mond, Head of External Relations, Trustworthy Reviews at Amazon. “We continue to invent new ways to improve and stop fake reviews from entering our store and protect our customers so they can shop with confidence.”
Learn more about Amazon’s efforts to combat fake reviews here.