Technology companies are constantly hyping the capabilities of their ever-improving artificial intelligence. But Google was quick to shut down claims that one of its programs had advanced so far that it had become sentient.
According to an eye-opening story in the Washington Post on Saturday, a Google engineer said that after hundreds of interactions with a cutting-edge, unreleased AI system called LaMDA, he believed the program had achieved a level of consciousness.
In interviews and public statements, many in the AI community pushed back on the engineer’s claims, while some pointed out that his story highlights how the technology can lead people to assign human attributes to it. But the belief that Google’s AI could be sentient arguably highlights both our fears and our expectations for what this technology can do.
LaMDA, which stands for “Language Model for Dialogue Applications”, is one of several large-scale AI systems that have been trained on vast swaths of text from the internet and can respond to written prompts. Their core task is to find patterns and predict which word or words should come next. Such systems have become increasingly good at answering questions and writing in ways that can seem convincingly human – Google even presented LaMDA last May in a blog post as one that can “engage in a free-flowing way about a seemingly endless number of topics.” But the results can also be wacky, weird, disturbing, and prone to rambling.
The engineer, Blake Lemoine, reportedly told the Washington Post that he shared evidence with Google that LaMDA was sentient, but the company disagreed. In a statement Monday, Google said its team, which includes ethicists and technologists, “reviewed Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims.”
On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “in connection with an investigation of AI ethics concerns I was raising within the company” and that he may be fired “soon.” He cited the experience of Margaret Mitchell, who had led Google’s Ethical AI team until Google fired her in early 2021 after her outspokenness regarding the late-2020 ouster of then-co-leader Timnit Gebru. (Gebru was ousted after internal squabbles, including one related to a research paper that the company’s AI leadership told her to withdraw from consideration for presentation at a conference, or to remove her name from.)
A Google spokesperson confirmed that Lemoine remains on administrative leave. According to the Washington Post, he was placed on leave for violating the company’s confidentiality policy.
Lemoine was not available for comment Monday.
The continued emergence of powerful computer programs trained on massive troves of data has also given rise to concerns about the ethics that govern the development and use of such technology. And sometimes progress is seen through the lens of what may come, rather than what is possible at the moment.
The responses from those in the AI community to Lemoine’s experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google’s AI is nowhere close to consciousness. Abeba Birhane, senior fellow in trustworthy AI at Mozilla, tweeted on Sunday, “we have entered a new era of ‘this neural network is conscious,’ and this time it’s going to drain so much energy to refute.”
Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust,” called the idea of a sentient LaMDA “nonsense on stilts” in a tweet. He quickly wrote a blog post pointing out that all such AI systems do is match patterns by drawing on enormous databases of language.
In an interview Monday with CNN Business, Marcus said the best way to think about systems such as LaMDA is as a “glorified version” of the autocomplete software you may use to predict the next word in a text message. If you type “I’m really hungry so I want to go to a,” it might suggest “restaurant” as the next word. But that is a prediction made using statistics.
“No one should think that auto-completion, even on steroids, is conscious,” he said.
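To make the “glorified autocomplete” analogy concrete, here is a minimal, hypothetical sketch of next-word prediction from simple word-pair counts. The toy corpus and the predict_next helper are invented for illustration; systems like LaMDA use enormous neural networks rather than raw frequency counts, but the underlying idea of choosing the statistically likeliest continuation is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text such models are trained on.
corpus = (
    "i am really hungry so i want to go to a restaurant . "
    "i want to go to a movie . "
    "i want to go to a restaurant tonight ."
)

# Count how often each word follows each preceding word (a bigram model).
follow_counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    followers = follow_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "?"

# "I'm really hungry so I want to go to a" -> likeliest next word
print(predict_next("a"))  # prints "restaurant"
```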
In an interview, Gebru, who is the founder and head of the Distributed AI Research Institute, or DAIR, said Lemoine is a victim of numerous companies claiming that conscious AI or artificial general intelligence – an idea that refers to AI that can perform human-like tasks and interact with us in meaningful ways – is not far away.
For example, she noted, Ilya Sutskever, a co-founder and chief scientist of OpenAI, tweeted in February that “it may be that today’s large neural networks are slightly conscious.” And last week, Google Research vice president and fellow Blaise Aguera y Arcas wrote in a piece for The Economist that when he began using LaMDA last year, “I increasingly felt like I was talking to something intelligent.” (That piece now includes an editor’s note pointing out that Lemoine has since “reportedly been placed on leave after claiming in an interview with the Washington Post that LaMDA, Google’s chatbot, had become ‘sentient’”.)
“What’s happening is there’s just such a race to use more data, more compute, to say you’ve created this general thing that’s all-knowing, answers all your questions or whatever, and that’s the drum you’ve been playing,” Gebru said. “So how are you surprised when this person is taking it to the extreme?”
In its statement, Google noted that LaMDA has undergone 11 “distinct AI principle reviews,” as well as “rigorous research and testing” related to quality, safety, and the system’s ability to produce statements grounded in fact. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the company said.
“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Google said.