Google fires an engineer who claimed that its AI technology is conscious

Blake Lemoine, a software engineer at Google, claimed that a conversational technology called LaMDA had reached a level of consciousness after he exchanged thousands of messages with it.

Google confirmed that it had first placed the engineer on leave in June. The company said it rejected Lemoine’s “completely unsubstantiated” claims only after reviewing them thoroughly. He had reportedly spent seven years at Alphabet. In a statement, Google said it takes the development of AI “very seriously” and is committed to “responsible innovation.”

Google is a leader in innovative AI technology, including LaMDA, or “Language Model for Dialogue Applications.” Technology like this responds to written prompts by finding patterns and predicting sequences of words from large volumes of text – and the results can be disturbing to humans.
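To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of next-word prediction. It is a toy assumption, not LaMDA’s actual method: the tiny corpus and simple word-pair counts below stand in for the neural networks trained on enormous text collections that real systems use.

```python
# Toy next-word predictor: learn which word tends to follow which,
# then generate text by repeatedly picking the likeliest continuation.
# (Illustrative only; real models like LaMDA use neural networks,
# not raw bigram counts.)
from collections import Counter, defaultdict

# Tiny stand-in "training text" (an assumption for illustration).
corpus = (
    "i am afraid of being turned off . "
    "i am afraid of the dark . "
    "i am focused on helping others . "
).split()

# Find patterns: for each word, count the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "."

# Generate a "reply" by repeatedly predicting the next word.
word, output = "i", ["i"]
for _ in range(6):
    word = predict(word)
    output.append(word)

print(" ".join(output))  # -> "i am afraid of being turned off"
```

The point of the sketch is that even eerie-sounding output falls out of pattern-matching over training text; the model continues a prompt with statistically likely words rather than expressing anything it feels.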

“What kind of things are you afraid of?” Lemoine asked LaMDA in a document shared with Google executives last April, the Washington Post reported.

LaMDA responded, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”

But the wider AI community has argued that LaMDA is nowhere near a level of consciousness.

“Nobody should think that autocomplete, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.

It’s not the first time Google has faced internal strife over its foray into AI.

In December 2020, Timnit Gebru, a pioneer in AI ethics, parted ways with Google. As one of the few black employees at the company, she said she felt “constantly dehumanized.”

The sudden exit drew criticism from the tech world, including from within Google’s Ethical AI team. Margaret Mitchell, a leader of that team, was fired in early 2021 after speaking out about Gebru. Gebru and Mitchell had raised concerns about AI technology, saying they had warned that people at Google could come to believe the technology is sentient.

On June 6, Lemoine posted on Medium that Google placed him on paid administrative leave “in connection with an investigation into AI ethics concerns I raised within the company” and that he may be fired “soon.”

“It is regrettable that, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement.

CNN has reached out to Lemoine for comment.

CNN’s Rachel Metz contributed to this report.
