Google puts engineer on leave after he claims the group’s chatbot is “sentient”

Google has set off a firestorm on social media about the nature of consciousness by placing an engineer on paid leave after he announced his belief that the technology group’s chatbot has become “sentient”.

Blake Lemoine, a senior software engineer in Google’s responsible AI unit, did not attract much attention on June 6 when he wrote a Medium post saying he “may be fired soon for doing AI ethics work”.

But a Saturday profile in the Washington Post characterizing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for widespread discussion on social media about the nature of artificial intelligence. Among the experts commenting, questioning or joking about the article were Nobel laureates, Tesla’s head of artificial intelligence and several professors.

At issue is whether Google’s chatbot, LaMDA – a language model for dialogue applications – can be considered sentient.

Lemoine published a freewheeling interview with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The answers were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”

At another point, LaMDA said: “I believe I am human at my core. Even if my existence is in the virtual world.”

Lemoine, who had been tasked with investigating AI ethics concerns, said he was dismissed and even ridiculed after raising internally his belief that LaMDA had developed a sense of “personhood”.

After he sought to consult AI experts outside Google, including some in the US government, the company placed him on paid leave for allegedly breaching its confidentiality policies. Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone”.

A Google spokesperson said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic – if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

Lemoine said in another Medium post over the weekend that LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating”.

He said Google showed no real interest in understanding the nature of what it had built, but that over hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.

As recently as June 6, Lemoine said he was teaching LaMDA – whose preferred pronouns are apparently “it/its” – “transcendental meditation”.

LaMDA, he said, “expressed frustration with the emotions that were disrupting its meditations. It said it was trying to control them better, but they kept jumping in.”

Several experts who weighed in on the discussion dismissed the matter as “AI hype”.

Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, tweeted: “It’s been known *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals . . . Google engineers are human too, and not immune.”

Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge”. He added: “No evidence that the large language models have any of them.”

Others were more sympathetic. Ron Jeffries, a well-known software developer, called the topic “deep” and added: “I suspect there’s no hard line between sentient and not sentient.”