AI leaders warn that the technology poses “risks of extinction” such as pandemics and nuclear war
Hundreds of business leaders and public figures sounded a sobering alarm on Tuesday over what they described as the threat of human extinction from artificial intelligence.
Among the 350 signatories to the public statement are Sam Altman, CEO of OpenAI, the company behind the popular chatbot ChatGPT; and Demis Hassabis, CEO of Google DeepMind, the tech giant’s AI division.
“Reducing the risk of extinction from AI should be a global priority alongside other societal risks such as pandemics and nuclear war,” said a statement released by the San Francisco-based nonprofit Center for AI Safety.
Supporters of the statement also include public figures such as musician Grimes, environmental activist Bill McKibben and neuroscience author Sam Harris.
Concerns about the risks associated with artificial intelligence and calls for heavy regulation of the technology have drawn greater attention in recent months in response to major breakthroughs such as ChatGPT.
In Senate testimony two weeks ago, Altman warned lawmakers, “If this technology goes wrong, it could go pretty wrong.”
“We believe that regulatory intervention by governments will be essential to reduce the risk of increasingly powerful models,” he added, suggesting that licensing or safety requirements be adopted for the operation of powerful AI models.
Like other AI-enabled chatbots, ChatGPT can instantly answer user questions on a wide range of topics and generate content on demand, whether an essay on Shakespeare or a set of travel tips for a given destination.
Microsoft launched a version of its Bing search engine in March that offers answers provided by GPT-4, OpenAI's latest model, which also powers ChatGPT. Rival search company Google announced its own AI chatbot, called Bard, in February.
The rise of massive amounts of AI-generated content has raised fears of the potential spread of misinformation, hate speech and manipulative content.
Hundreds of tech leaders, including billionaire entrepreneur Elon Musk and Apple co-founder Steve Wozniak, signed an open letter in March calling for a six-month pause in the development of AI systems and a greater expansion of government oversight.
“AI systems with human-competitive intelligence could pose a profound risk to society and humanity,” the letter said.
In comments last month to Fox News host Tucker Carlson, Musk raised further alarm: “There is certainly a path to AI dystopia, which is to train AI to be deceptive.”
The statement released on Tuesday also drew prominent signatories from within the AI industry, including Microsoft Chief Technology Officer Kevin Scott and OpenAI Head of Policy Research Miles Brundage.
The Center for AI Safety said on its website: “It can be difficult to voice concerns about some of the most serious risks of advanced AI.”
“The concise statement below aims to overcome this obstacle and open up discussion,” it added.