Blake Lemoine, the Google engineer who publicly claimed the company’s LaMDA conversational artificial intelligence is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. In June, Google placed Lemoine on paid administrative leave for violating the confidentiality agreement after he contacted members of the government about his concerns and hired an attorney to represent LaMDA.
In a statement sent to The Verge on Friday, Google spokesperson Brian Gabriel appeared to confirm the firing, saying “we wish Blake well.” The company also said: “LaMDA has been through 11 separate reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.” Google claims it “extensively” reviewed Lemoine’s claims and found them to be “completely unfounded.”
That assessment aligns with the views of many AI experts and ethicists, who have said his claims were more or less impossible given today’s technology. Lemoine claims his conversations with LaMDA’s chatbot led him to believe it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to make it seem that way, as it is designed to do.
He argued that Google researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was tasked with testing whether the AI produced hate speech) and published parts of those conversations on his Medium account as evidence.
YouTube channel Computerphile has a decently accessible nine-minute explanation of how LaMDA works and how it could produce the answers that convinced Lemoine without being sentient.
Here’s Google’s statement in full, which also addresses Lemoine’s accusation that the company didn’t properly investigate his claims:
As we share in our AI principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 separate reviews, and we published a research paper earlier this year detailing the work that goes into responsible development. If an employee shares concerns about our work, as Blake did, we consider them thoroughly. We found Blake’s claims that LaMDA is sentient to be completely unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So it’s regrettable that despite lengthy engagement on this topic, Blake chose to persistently violate clear employment and data security policies that include the need to protect product information. We will continue our careful development of language models, and we wish Blake the best of luck.