Elon Musk and top AI researchers call for pause in ‘gigantic AI experiments’
A number of well-known AI researchers – and Elon Musk – have signed an open letter calling on AI labs around the world to halt development of large-scale AI systems, citing fears of the “profound risks to society and humanity” they claim this software poses.
The letter, published by the nonprofit Future of Life Institute, notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”
“Therefore, we urge all AI laboratories to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter said. “This break must be public and verifiable, and include all key actors. If such a break cannot be implemented quickly, governments should step in and impose a moratorium.”
Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. The full list of signatories can be seen here, although new names should be treated with caution, as there are reports of names being added to the list as a joke (e.g. OpenAI CEO Sam Altman, a person partially responsible for the current race dynamics in AI).
The letter is unlikely to have any impact on the current climate of AI research, which has seen tech companies such as Google and Microsoft rush to deploy new products, often sidelining previously stated concerns about safety and ethics. But it is a sign of growing resistance to this “ship it now and fix it later” approach — an opposition that could eventually make its way into the political domain for consideration by actual legislators.
As noted in the letter, even OpenAI itself has acknowledged the potential need for “independent review” of future AI systems to ensure they meet safety standards. The signatories say that time has now come.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and monitored by independent external experts,” they write. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
You can read the entire letter here.