Artificial intelligence could lead humanity to “extinction”, experts warn


Many of the biggest names in artificial intelligence have signed a short statement warning that their technology could mean the end of humanity.
Published Tuesday, the full statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement was posted on the website of the Center for AI Safety, a San Francisco-based nonprofit organization. It has been signed by nearly 400 people, including some of the biggest names in the field: Sam Altman, chief executive of OpenAI, the company behind ChatGPT; top AI leaders at Google and Microsoft; and some 200 academics.
The statement is the latest in a series of alarms raised by AI experts, but it has also fueled a growing backlash against the focus on what some see as overhyped, hypothetical harms from AI.
Meredith Whittaker, president of the encrypted messaging app Signal and chief adviser to the AI Now Institute, a nonprofit group dedicated to ethical AI practices, dismissed the statement as a case of tech executives overhyping their own product.
Clément Delangue, co-founder and CEO of the AI company Hugging Face, tweeted an image of an edited version of the statement that substituted “AGI” for “AI”.
AGI stands for artificial general intelligence, a theoretical form of AI that would be as capable as humans, or more so.
The statement comes two months after another group of AI and technology leaders, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and IBM chief scientist Grady Booch, signed a publicly circulated petition calling for a “pause” on large-scale AI experiments. None of the three has signed the new statement, and no such pause has taken place.
Altman, who has repeatedly called for AI to be regulated, charmed Congress earlier this month. He held a private dinner with dozens of House members and was the subject of an amicable Senate hearing, where he became a rare technology chief welcomed by both parties.
Altman’s push for regulation has its limits, however. Last week he said OpenAI could leave the EU if AI became “over-regulated”.
While the White House has announced some plans to address AI, there is no indication that the US has any imminent plans for large-scale regulation of the industry.
Gary Marcus, a leading AI critic and professor emeritus of psychology and neural science at New York University, said that while potential threats from AI are very real, worrying only about a hypothetical worst-case scenario is distracting.
“Literal extinction is only one possible risk, which is not yet well understood, and there are many other risks from AI that also deserve attention,” he said.
Some technology experts have said that more mundane and immediate uses of AI pose a greater threat to humanity. Microsoft president Brad Smith has said that deepfakes, and the potential for them to be used for disinformation, are his biggest concerns about the technology.
Last week, markets fell briefly after a fake, apparently AI-generated photo of an explosion near the Pentagon went viral on Twitter.
