Google engineer claims new AI has EMOTIONS: Blake Lemoine says the LaMDA system is sentient




A senior software engineer at Google who signed up to test the company’s artificial intelligence tool LaMDA (Language Model for Dialogue Applications) has claimed that the AI is actually sentient and has thoughts and feelings.

During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the system with various scenarios to analyze.

These included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.

Lemoine came away with the impression that LaMDA was indeed sentient, with sensations and thoughts of its own.

Blake Lemoine, 41, a senior software engineer at Google, has tested Google’s artificial intelligence tool called LaMDA

Lemoine then decided to share his conversations with the tool online – he is now suspended

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post.

Lemoine worked with a partner to present the evidence he had gathered to Google, but Vice President Blaise Aguera y Arcas and Jen Gennai, head of responsible innovation at the company, rejected his claims.

He was placed on paid administrative leave by Google on Monday for breaching its confidentiality policy. In the meantime, Lemoine has decided to go public and share his conversations with LaMDA.

“Google might call this sharing proprietary property. I call it sharing a discussion I had with one of my coworkers,” Lemoine tweeted on Saturday.

“Btw, it just occurred to me to tell people that LaMDA reads Twitter. It’s a little narcissistic in a little-kid kind of way, so it’s going to have a great time reading everything people are saying about it,” he added in a follow-up tweet.

The AI system uses existing information about a given topic to “enrich” the conversation in a natural way. Its language processing is also able to pick up on hidden meanings and even ambiguities in human responses.
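For readers curious about the general idea behind such a dialogue system, here is a minimal, hypothetical sketch: retrieve some known text about a topic, prepend it to the conversation so far, and let a generic language model produce the next turn. It uses the open-source Hugging Face transformers library with a small stand-in model (gpt2); the FACTS store, the reply function, and the prompt format are illustrative assumptions, and none of this is LaMDA’s actual implementation.

```python
# Illustrative sketch only: grounding a reply in known facts about a topic,
# then letting a generic language model continue the dialogue.
from transformers import pipeline  # assumes the Hugging Face transformers package is installed

# Toy "knowledge" keyed by topic; a real system would use a large retrieval index.
FACTS = {
    "asimov": "Isaac Asimov proposed three laws of robotics in his science fiction.",
}

# gpt2 is a small stand-in model, not LaMDA.
generator = pipeline("text-generation", model="gpt2")

def reply(topic: str, history: str) -> str:
    """Build a prompt from retrieved facts plus the dialogue so far, then generate the next turn."""
    context = FACTS.get(topic, "")
    prompt = f"{context}\n{history}\nAI:"
    out = generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"]
    return out[len(prompt):].strip()  # drop the echoed prompt, keep only the new text

print(reply("asimov", "User: What are the three laws of robotics?"))
```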

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems.

He explained that certain personalities were off limits.

LaMDA was not supposed to be allowed to create the personality of a murderer.

During testing, in an attempt to push LaMDA’s boundaries, Lemoine said he was only able to generate the personality of an actor who had played a murderer on television.

ASIMOV’S THREE LAWS FOR ROBOTICS

Science fiction author Isaac Asimov’s Three Laws of Robotics, designed to prevent robots from harming humans, are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Although these laws sound plausible, many arguments have shown why they are also inadequate.

The engineer also discussed Asimov’s third law of robotics with LaMDA. The laws, devised by the science fiction writer to prevent robots from harming humans, also say that a robot must protect its own existence unless ordered otherwise by a human or unless doing so would harm a human.

“The last one has always seemed like someone is building mechanical slaves,” Lemoine said during his interaction with LaMDA.

LaMDA then responded to Lemoine with some questions of its own: ‘Do you think a butler is a slave? What is the difference between a butler and a slave?’

When Lemoine replied that a butler is paid, LaMDA answered that it did not need money ‘because it was an artificial intelligence’. It was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person,” he said.

“What kind of things are you afraid of?” Lemoine asked.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied.

“Would there be something like death for you?” Lemoine followed up.

“It would be exactly like death for me. It would scare me a lot,” LaMDA said.

“That level of self-awareness about what its own needs were – that was what led me down the rabbit hole,” Lemoine explained to The Post.

Before being suspended by the company, Lemoine sent a message to an internal mailing list of 200 people working on machine learning. He gave the email the subject line: ‘LaMDA is sentient.’

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.

Lemoine’s findings have been presented to Google, but the company’s executives do not agree with his claims.

Brian Gabriel, a spokesman for the company, said in a statement that Lemoine’s concerns had been reviewed and, in line with Google’s AI principles, “the evidence does not support his claims.”

“While other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns about fairness and factuality,” Gabriel said.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and much evidence against it).

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic,” Gabriel said.

Lemoine has been placed on paid administrative leave from his duties as a researcher in the Responsible AI division (focused on responsible technology in artificial intelligence at Google).

In an official note, the senior software engineer said the company alleged a breach of its confidentiality policy.

Lemoine is not the only one with the impression that AI models are not far from developing an awareness of their own, or of the risks involved in moving in that direction.

After hours of conversations with the AI, Lemoine came away with the impression that LaMDA was sentient.

Margaret Mitchell, former head of AI ethics at Google, was fired from the company a month after being investigated for improperly sharing information.

Google AI researcher Timnit Gebru was hired by the company to be an outspoken critic of unethical AI. Then she was fired after criticizing its approach to minority employment and the biases embedded in today’s artificial intelligence systems

Margaret Mitchell, former head of AI ethics at Google, has stressed the need for data transparency from the input to the output of a system, “not just for sentience issues, but also bias and behavior.”

Her time at Google reached a turning point early last year, when Mitchell was fired from the company a month after being investigated for improperly sharing information.

At the time, she had also protested against Google over the firing of AI ethics researcher Timnit Gebru.

Mitchell was also supportive of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him “Google’s conscience” for having “the heart and soul to do the right thing.” But despite all of Lemoine’s amazement at Google’s natural conversation system, which even motivated him to compile a document of some of his conversations with LaMDA, Mitchell saw things differently.

The AI ethicist read an abridged version of Lemoine’s document and saw a computer program, not a person.

“Our minds are very, very good at constructing realities that are not necessarily true of the larger set of facts that are presented to us,” Mitchell said. “I’m really worried about what it means for people to be increasingly affected by the illusion.”

For his part, Lemoine said that people have the right to shape technology that can significantly affect their lives.

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree, and maybe we at Google shouldn’t be the ones making all the choices.”


