Online product reviews are becoming a battleground for modern AI

In the battleground of online reviews, it’s AI vs. AI.

Generative artificial intelligence that can spit out human-like reviews is met by AI trained to spot fake reviews. It’s the kind of clash that has implications for consumers as well as the future of online content.

Saoud Khalifah, founder and CEO of Fakespot, a startup that uses AI to detect fraudulent reviews, said his company has seen an influx of AI-generated fake reviews. Fakespot is working on a way to detect content written by AI platforms like ChatGPT.

“What’s very different today is that the models are savvy to the point where they can write about anything,” he said.

Fake online reviews have been around for about as long as genuine online reviews, but the issue has gained new traction due to broader concerns about advanced AI technology now widely available on the internet.

After years of policing the issue through case-by-case enforcement, the Federal Trade Commission last month proposed a new rule to crack down on fraudulent reviews. If passed, the rule would ban writing fake reviews, paying for reviews, hiding honest reviews and other deceptive practices — and impose hefty fines on those who violate it.

But exactly what is or isn’t a fake review is now less clear, and the technology to detect fraudulent reviews is still a work in progress.

“We don’t know — really have no way of knowing — to what extent bad actors are actually using some of these tools, and how much may be bot-generated versus human-made,” said Michael Atleson, an attorney in the FTC’s Division of Advertising Practices. “It’s really more of a serious concern, and it’s just a microcosm of the concerns that these chatbots are going to be used to create all kinds of fake content online.”

There are some indications that AI-generated reviews are already commonplace. CNBC reported in April that some reviews on Amazon had clear indications of AI involvement, with many starting with the phrase “As an AI language model…”

Amazon is among the many online sellers that have been fighting fake reviews for years. A spokesperson said the company receives millions of reviews every week and that it proactively blocked 200 million suspected fake reviews in 2022. The company uses a combination of human investigators and AI to detect fake reviews, using machine learning models that analyze factors such as a user’s review history, login activity and relationships with other accounts.
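Amazon has not published how its models work; the account-level signals mentioned above can, however, be illustrated with a minimal scoring sketch. All feature names, thresholds and weights below are hypothetical, chosen only to show how several weak signals might be combined into one suspicion score.

```python
# Hypothetical sketch of feature-based fake-review scoring.
# Feature names, thresholds and weights are illustrative, not Amazon's.

def suspicion_score(account):
    """Combine simple account signals into a suspicion score between 0 and 1."""
    score = 0.0
    # A burst of reviews in a short window is a classic red flag.
    if account["reviews_last_day"] > 5:
        score += 0.4
    # Very new accounts posting many reviews look suspicious.
    if account["account_age_days"] < 30 and account["total_reviews"] > 10:
        score += 0.3
    # Login activity linked to other already-flagged accounts.
    if account["linked_flagged_accounts"] > 0:
        score += 0.3
    return min(score, 1.0)

burst_account = {
    "reviews_last_day": 12,
    "account_age_days": 7,
    "total_reviews": 40,
    "linked_flagged_accounts": 2,
}
normal_account = {
    "reviews_last_day": 1,
    "account_age_days": 800,
    "total_reviews": 25,
    "linked_flagged_accounts": 0,
}

print(suspicion_score(burst_account))   # 1.0
print(suspicion_score(normal_account))  # 0.0
```

In practice such scores would come from trained models over far richer features, but the design idea is the same: no single signal is conclusive, so many weak ones are aggregated before a review is blocked or handed to a human investigator.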

Further complicating the issue is the fact that AI-generated reviews aren’t entirely against Amazon’s rules. An Amazon spokesperson said the company allows customers to post AI-generated reviews as long as they are authentic and do not violate its policies.

The online shopping giant has also indicated that it may need some help. In June, Dharmesh Mehta, Amazon’s vice president of worldwide sales partner services, called in a company blog post for more collaboration between the “private sector, consumer groups and governments” to address the growing problem of fake reviews.

The crucial question is whether AI detection will be able to outfox the AI that creates fake reviews. The first AI-generated fake reviews detected by Fakespot came from India a few months ago, Khalifah said, produced by what he calls “fake review farms” — businesses that sell fraudulent reviews en masse. Generative AI has the potential to make their work much easier.

“It’s definitely a tough test to pass for these detection tools,” said Bhuwan Dhingra, assistant professor of computer science at Duke University. “Because if the models exactly match the way humans write something, then you really can’t tell the difference between the two. I wouldn’t expect to see any detectors pass the test with flying colors anytime soon.”

Several studies have found that humans are not particularly good at detecting reviews written by AI. Many technologists and companies are working on systems to detect AI-generated content, with some, like ChatGPT maker OpenAI, even using AI to detect text produced by its own models.

Ben Zhao, a professor of computer science at the University of Chicago, said it is “almost impossible” for AI to meet the challenge of rooting out AI-generated reviews, because reviews created by bots are often indistinguishable from those written by humans.

“It’s an ongoing cat-and-mouse hunt, but there’s nothing fundamental at the end of the day that differentiates an AI-created piece of content,” he said. “You will find systems that claim to be able to distinguish between human written text and ChatGPT text. But the techniques underlying them are all pretty simple compared to what they’re trying to catch up to.”

With 90% of consumers saying they read reviews while shopping online, it’s a prospect that has some consumer advocates worried.

“It’s terrifying for consumers,” said Teresa Murray, who heads the consumer watchdog office for the US Public Interest Research Group. “Already, AI is helping dishonest businesses spit out genuine-sounding reviews in a conversational tone by the thousands within seconds.”
