Regulators are dusting off rule books to tackle generative artificial intelligence like ChatGPT
- Watchdogs are racing to keep pace with the mass deployment of AI
- In anticipation of new laws, regulators adapt existing ones
- Generative tools face privacy, copyright and other challenges
LONDON/STOCKHOLM, May 22 (Reuters) – As the race to develop more powerful artificial intelligence services like ChatGPT accelerates, some regulators are relying on old laws to control a technology that could change the way societies and businesses operate.
The European Union is at the forefront of drafting new AI rules that could set the global standard for addressing privacy and security concerns raised by the rapid advances in the generative AI technology behind OpenAI’s ChatGPT.
But it will take several years before the law is enforced.
“In the absence of regulation, the only thing governments can do is use existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.
“If it’s about protecting personal data, they use data protection laws, if it’s a threat to the safety of people, there are rules that aren’t specifically defined for AI, but they still apply.”
In April, Europe’s national data protection watchdogs set up a task force to address issues with ChatGPT after Italian regulator Garante took the service offline, accusing OpenAI of violating the EU’s GDPR, a sweeping privacy regime that took effect in 2018.
ChatGPT was reinstated after the US company agreed to install age verification features and to let European users block their information from being used to train the AI model.
The agency will begin investigating other generative AI tools more broadly, a source close to Garante told Reuters. Data protection authorities in France and Spain also launched investigations into OpenAI’s compliance with privacy legislation in April.
BRING IN THE EXPERTS
Generative AI models have been known to make mistakes, or “hallucinations,” spewing out misinformation with uncanny certainty.
Such mistakes can have serious consequences. If a bank or government department used AI to speed up decision-making, individuals could be unfairly turned down for loans or benefit payments. Big technology companies including Alphabet’s Google ( GOOGL.O ) and Microsoft Corp ( MSFT.O ) have stopped using AI products considered ethically fraught, such as financial products.
Regulators aim to apply existing rules covering everything from copyright and data protection to two key issues: the data fed into the models and the content they produce, according to six regulators and experts in the US and Europe.
Agencies in the two regions are being urged to “interpret and reinterpret their mandates,” said Suresh Venkatasubramanian, a former White House technology adviser. He cited the US Federal Trade Commission’s (FTC) investigation into algorithms for discriminatory practices under existing regulatory authorities.
In the EU, proposals for the bloc’s AI law would force companies such as OpenAI to disclose any copyrighted material – such as books or photographs – used to train their models, leaving them vulnerable to legal challenges.
However, proving copyright infringement will not be easy, according to Sergey Lagodinsky, one of several politicians involved in the drafting of the EU proposals.
“It’s like reading hundreds of novels before you write your own,” he said. “If you actually copy something and publish it, that’s one thing. But if you’re not outright plagiarizing someone else’s material, it doesn’t matter what you trained on.”
French data regulator CNIL has started to “think creatively” about how existing laws might apply to AI, according to Bertrand Pailhes, its chief technology officer.
For example, discrimination claims in France are usually dealt with by the Défenseur des Droits (Defender of Rights). However, the lack of expertise in AI bias has prompted the CNIL to take the lead on the matter, he said.
“We are looking at the full range of effects, although our focus remains on data protection and privacy,” he told Reuters.
The organization is considering using a provision in the GDPR that protects individuals against automated decisions.
“At this stage, I can’t say if it’s enough, legally,” Pailhes said. “It will take some time to build an opinion, and there is a risk that different regulators will have different views.”
In the UK, the Financial Conduct Authority is one of several government regulators tasked with drafting new guidelines covering AI. It is consulting with the Alan Turing Institute in London, along with other legal and academic institutions, to improve its understanding of the technology, a spokesperson told Reuters.
As regulators adapt to the pace of technological advances, some industry insiders have called for greater engagement with business leaders.
Harry Borovick, general counsel at Luminance, a startup that uses AI to process legal documents, told Reuters that dialogue between regulators and companies has been “limited” so far.
“This does not bode well for the future,” he said. “Regulators seem either slow or unwilling to implement the approaches that would enable the right balance between consumer protection and business growth.”
(This story has been refiled to correct the spelling of Massimiliano in section 4)
Reporting by Martin Coulter in London, Supantha Mukherjee in Stockholm, Kantaro Komiya in Tokyo and Elvira Pollina in Milan; editing by Kenneth Li, Matt Scuffham and Emelia Sithole-Matarise
Our standards: Thomson Reuters Trust Principles.