The tone of congressional hearings with tech industry leaders in recent years can best be described as antagonistic. Mark Zuckerberg, Jeff Bezos and other tech luminaries have all been dressed down on Capitol Hill by lawmakers upset with their companies.
But on Tuesday, Sam Altman, CEO of San Francisco startup OpenAI, testified before members of a Senate subcommittee and largely agreed with them about the need to regulate the increasingly powerful AI technology being created at his company and others like Google and Microsoft.
In his first congressional testimony, Altman called on lawmakers to regulate artificial intelligence as members of the committee showed a burgeoning understanding of the technology. The hearing underscored the deep unease that technologists and authorities feel about AI's potential harms. But that unease rarely extended to Mr. Altman himself, who found a friendly audience in the members of the subcommittee.
The appearance of Mr. Altman, a 38-year-old Stanford University dropout and technology entrepreneur, was his baptism as a leading figure in AI. The boyish Mr. Altman traded in his usual sweater and jeans for a blue suit and tie for the three-hour hearing.
Mr. Altman also spoke about the company’s technology at a dinner with dozens of members of the House on Monday night, and met privately with a number of senators before the hearing. He offered a loose framework to govern what happens next with the rapidly evolving systems that some believe could fundamentally change the economy.
“I think if this technology goes wrong, it can go pretty bad. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.”
Mr. Altman made his public debut on Capitol Hill as interest in AI has exploded. Tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid growing concerns about AI’s role in spreading misinformation, killing jobs and one day matching human intelligence.
It has put the technology in the spotlight in Washington. President Biden said this month at a meeting of a group of CEOs of AI companies that “what you’re doing has enormous potential and enormous danger.” Top leaders in Congress have also promised AI rules.
That members of the Senate Subcommittee on Privacy, Technology and the Law were not planning a rough grilling of Mr. Altman was clear when they thanked him for his private meetings with them and for agreeing to appear at the hearing. Cory Booker, Democrat of New Jersey, repeatedly referred to Mr. Altman by his first name.
Mr. Altman was joined at the hearing by Christina Montgomery, IBM's chief privacy and trust officer, and Gary Marcus, a well-known professor and frequent critic of AI technology.
Mr. Altman said the company's technology could destroy some jobs but also create new ones, and that it will be important for "governments to figure out how we want to reduce this." He proposed the creation of an agency that would issue licenses to create large AI models, along with safety regulations and tests that AI models must pass before being released to the public.
"We believe the benefits of the tools we've used so far outweigh the risks, but ensuring their safety is critical to our work," Mr. Altman said.
But it was unclear how lawmakers would respond to the call to regulate AI. Congress's track record on tech regulation is dismal. Dozens of privacy, speech and security bills have failed over the past decade due to partisan bickering and fierce opposition from tech giants.
The United States has lagged behind the rest of the world on regulations for privacy, speech and child protection. It is also behind on AI regulation. Lawmakers in the EU are set to introduce rules for the technology later this year, and China has created AI laws consistent with its censorship laws.
Sen. Richard Blumenthal, Democrat of Connecticut and chairman of the Senate panel, said the hearing was the first in a series to learn more about the potential benefits and harms of AI to ultimately “write the rules” for it.
He also acknowledged Congress's past failures to keep pace with the introduction of new technology. "Our goal is to demystify and hold the new technologies accountable to avoid some of the mistakes of the past," Mr. Blumenthal said. "Congress failed to meet the social media moment."
Members of the subcommittee proposed an independent agency to oversee AI; rules forcing companies to disclose how their models work and the data sets they use; and antitrust rules to prevent companies like Microsoft and Google from monopolizing the emerging market.
“The devil will be in the details,” said Sarah Myers West, executive director of the AI Now Institute, a policy research center. She said Mr. Altman’s proposed regulations do not go far enough and should include restrictions on how AI is used in policing and the use of biometric data. She noted that Mr. Altman showed no indication of slowing development of OpenAI’s ChatGPT tool.
"It's so ironic to see a posture of concern about harms from people who are rapidly releasing into commercial use the very system responsible for those harms," Ms. West said.
Some lawmakers in the hearing still pointed to the persistent gap in technological know-how between Washington and Silicon Valley. Lindsey Graham, Republican of South Carolina, repeatedly asked the witnesses whether the speech liability shield for online platforms like Facebook and Google also applies to AI.
Mr. Altman, calm and unruffled, tried several times to distinguish between AI and social media. “We need to work together to find a whole new approach,” he said.
Some members of the subcommittee also showed a reluctance to crack down too hard on an industry that holds great economic promise for the United States and that competes directly with adversaries such as China.
The Chinese are creating artificial intelligence that “reinforces the core values of the Chinese Communist Party and the Chinese system,” said Chris Coons, Democrat of Delaware. “And I’m concerned about how we advance AI that reinforces and strengthens open markets, open societies and democracy.”
Some of the toughest questions and comments directed at Mr. Altman came from Dr. Marcus, who noted that OpenAI has not been open about the data it uses to develop its systems. He expressed doubt about Mr. Altman's prediction that new jobs will replace those killed by AI.
“We have unprecedented opportunities here, but we also face a perfect storm of corporate irresponsibility, widespread distribution, lack of adequate regulation and inherent unreliability,” Dr. Marcus said.
Tech companies have argued that Congress should be wary of any broad rules that lump different types of AI together. At Tuesday's hearing, IBM's Ms. Montgomery called for an AI law similar to Europe's proposed regulations, which outline different levels of risk. She called for rules that focus on specific uses of the technology, not the technology itself.
"At its core, AI is just a tool, and tools can serve different purposes," she said, adding that Congress should take a "precision regulatory approach to AI."