Google fired Blake Lemoine, the engineer who said LaMDA was sentient
Lemoine worked for Google’s Responsible AI organization and, as part of the job, began talking with LaMDA, the company’s artificially intelligent system for building chatbots, last fall. He came to believe the technology was sentient after signing up to test whether the artificial intelligence could use discriminatory or hate speech.
In a statement, Google spokesperson Brian Gabriel said the company takes AI development seriously, has conducted 11 distinct reviews of LaMDA, and has published a research paper detailing its responsible development efforts.
“If an employee shares concerns about our work, as Blake did, we consider them thoroughly,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”
Gabriel attributed those months of discussion to the company’s open culture.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”
Lemoine’s firing was first reported in the Big Technology newsletter.
Lemoine’s interviews with LaMDA led to a wide-ranging discussion about recent advances in AI, public misunderstanding of how these systems work, and corporate responsibility. Google previously pushed out the heads of its Ethical AI division, Margaret Mitchell and Timnit Gebru, after they warned of the risks associated with this technology.
LaMDA is built on Google’s most advanced large language models, a type of AI that recognizes and generates text. Researchers caution that these systems do not understand language or meaning the way humans do. But they can produce deceptively humanlike speech because they are trained on vast amounts of text crawled from the internet to predict the most likely next word in a sentence.
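To make that prediction idea concrete, here is a minimal toy sketch in Python. It is purely illustrative and hypothetical, not how LaMDA or any Google system works: it picks a next word from simple bigram counts, whereas production language models use deep neural networks trained on orders of magnitude more text, but the underlying task of scoring candidates for the next word is analogous.

```python
# Toy illustration of next-word prediction using bigram frequency counts.
# This is a deliberately crude stand-in for what large language models do
# with neural networks and internet-scale training data.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word in the corpus.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> 'cat' (first-inserted word among tied counts)
```

Even this crude counter sounds locally fluent on its own training text, which hints at why vastly larger statistical models can read as humanlike without any understanding behind them.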
After LaMDA spoke to Lemoine about personhood and its rights, he began to investigate further. In April, he shared with top executives a Google Doc titled “Is LaMDA Sentient?” containing some of his conversations with LaMDA, in which it claimed to be sentient. Two Google executives investigated his claims and dismissed them.
Lemoine had been placed on paid administrative leave in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he is considering starting his own AI company focused on collaborative-storytelling video games.