The emergence of AI tools has changed the way people write and read online reviews. With these technologies, people can now effortlessly produce detailed, polished reviews. But as the trend grows, watchdog groups and researchers have raised concerns about the potential misuse of these tools.
For years, phony reviews have plagued popular consumer websites like Amazon and Yelp. These fake reviews are often traded on private social media groups between brokers and businesses willing to pay for them. In some cases, businesses offer incentives, such as gift cards, to customers in exchange for positive feedback. But now, with the rise of AI-infused text generation tools, fraudsters are able to produce reviews at an alarming speed and in high volume.
This deceptive practice, which is illegal in the U.S., is a year-round issue but becomes a bigger problem during the holiday shopping season when customers rely heavily on reviews to make purchase decisions. Fake reviews can be found in various industries, including e-commerce, lodging, restaurants, home repairs, medical care, and even piano lessons.
According to The Transparency Company, a tech company and watchdog group, AI-generated reviews began showing up in large numbers in mid-2023 and have multiplied since. In a recent report, the group analyzed 73 million reviews in three sectors – home, legal, and medical services – and found that nearly 14% of the reviews were likely fake, with 2.3 million of them partly or entirely generated by AI.
“It’s just a really, really good tool for these review scammers,” said Maury Blackman, an investor and tech startup advisor who reviewed The Transparency Company’s work. He is set to lead the organization starting Jan. 1.
In August, software company DoubleVerify reported a “significant increase” in mobile phone and smart TV apps that used AI-generated reviews to deceive customers. These reviews often led to the installation of apps that could hijack devices or run ads constantly, degrading the user experience. The Federal Trade Commission (FTC) also sued the maker of an AI writing tool and content generator called Rytr, alleging that it was polluting the marketplace with fraudulent reviews. Earlier this year, the FTC also banned the sale or purchase of fake reviews.
Max Spero, CEO of AI detection company Pangram Labs, said that his company's software has detected numerous AI-generated reviews on prominent online sites, including Amazon and Yelp. These reviews are often used to steer customers toward products or services that are not as advertised. However, determining what is fake and what is not can be difficult. Amazon has said that external parties may lack the data signals needed to spot patterns of abuse.
Pangram Labs has also provided detection services for several prominent online sites, which Spero declined to name due to nondisclosure agreements. He also mentioned that many of the AI-generated comments on Yelp are from individuals trying to earn an “Elite” badge, which is intended to indicate trustworthy content. Fraudsters also want this badge to make their Yelp profiles look more genuine, according to Kay Dean, a former federal criminal investigator and founder of the watchdog group Fake Review Watch.
However, not all AI-generated reviews are fake. Some consumers use AI tools to enhance their genuine feedback, and non-native English speakers may use them to make sure their reviews are written clearly. “It can help with reviews and make them more informative if it comes from good intentions,” said Sherry He, a marketing professor at Michigan State University who has researched fake reviews. She believes that tech platforms should focus on the behavioral patterns of bad actors instead of discouraging legitimate users from using AI tools.
Prominent companies are now developing policies to tackle this issue and remove fraudulent reviews. Some already employ algorithms and investigative teams to detect and take down fake reviews while giving users some flexibility to use AI. Companies like Amazon and Trustpilot will allow customers to post AI-assisted reviews as long as they reflect genuine experience. Yelp, on the other hand, has taken a more cautious approach, saying its guidelines require reviewers to write their own copy.
“With the recent rise in consumer adoption of AI tools, Yelp has significantly invested in methods to better detect and mitigate such content on our platform,” the company said in a statement. The Coalition for Trusted Reviews, consisting of tech giants like Amazon, Trustpilot, Glassdoor, Tripadvisor, Expedia, and Booking.com, is also