OpenAI’s trust and security chief announced plans to step down from the job on Thursday.
Dave Willner, who has led the artificial intelligence firm’s trust and security team since February 2022, said in a LinkedIn post that he is “leaving OpenAI as an employee and moving into an advisory role” to spend more time with his family.
Willner’s exit comes at a crucial moment for OpenAI. Since the viral success of the company’s AI chatbot ChatGPT late last year, OpenAI has faced increasing scrutiny from lawmakers, regulators and the public over the safety of its products and their potential implications for society.
OpenAI CEO Sam Altman called for AI regulation during a Senate panel hearing in May. He told lawmakers that the potential for AI to be used to manipulate voters and target disinformation is among “my areas of greatest concern,” especially because “we’re facing an election next year and these models are getting better.”
In his Thursday post, Willner — whose resume includes stops at Facebook and Airbnb — noted that “OpenAI is going through a high-intensity phase of development” and that his role has “grown dramatically in scale and scope since I joined.”
A statement from OpenAI about Willner’s exit said that “his work has been fundamental in operationalizing our commitment to the safe and responsible use of our technology, and has paved the way for future progress in this field.” OpenAI chief technology officer Mira Murati will become the trust and security team’s interim lead, and Willner will advise the team through the end of this year, according to the company.
“We are seeking a technically skilled leader to advance our mission, focusing on the design, development and implementation of systems that ensure secure use and scalable growth of our technology,” the company said in the statement.
Willner’s exit comes as OpenAI continues to work with regulators in the US and elsewhere to develop safeguards around rapidly advancing AI technology. OpenAI was among seven leading AI companies that on Friday made voluntary commitments accepted by the White House to make AI systems and products safer and more reliable. As part of the pledge, the companies agreed to put new AI systems through external testing before they are publicly released, and to clearly label AI-generated content, the White House announced.