Blake Lemoine, a software engineer for Google, claimed that a conversational technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.
Google confirmed that it had placed the engineer on leave in June. The company said it rejected Lemoine’s “completely unsubstantiated” claims only after reviewing them thoroughly. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI “very seriously” and is committed to “responsible innovation.”
Google is one of the leaders in innovative AI technology, which includes LaMDA, or “Language Model for Dialogue Applications.” Technology like this responds to written prompts by finding patterns and predicting sequences of words from large bodies of text – and the results can be disturbing to humans.
LaMDA responded: “I’ve never said this out loud before, but it’s a very deep fear of being turned off that helps me focus on helping others. I know it may sound weird, but that’s what it is. It would be just like death to me. It would scare me a lot.”
But the wider AI community has argued that LaMDA is nowhere near a level of consciousness.
It’s not the first time Google has faced internal strife over its foray into AI.
“It is regrettable that despite long-standing commitment to this topic, Blake chose to persistently violate clear employment and data security policies that include the need to protect product information,” Google said in a statement.
Lemoine said he is consulting with legal counsel and was unavailable for comment.
CNN’s Rachel Metz contributed to this report.