Google fires software engineer who claimed its AI chatbot is sentient
July 22 (Reuters) – Alphabet Inc’s ( GOOGL.O ) Google said on Friday it has fired a senior software engineer who claimed the company’s artificial intelligence (AI) chatbot LaMDA was a self-aware person.
Google, which put software engineer Blake Lemoine on leave last month, said he had violated company policy and that it found his allegations about LaMDA to be “totally unfounded.”
“It is regrettable that despite long-standing engagement on this topic, Blake chose to persistently violate clear employment and data security policies that include the need to protect product information,” a Google spokesperson said in an email to Reuters.
Last year, Google said that LaMDA – Language Model for Dialogue Applications – was built on the company’s research showing that transformer-based language models trained on dialogue could learn to talk about essentially anything.
Google and many leading researchers were quick to dismiss Lemoine’s views as erroneous, saying that LaMDA is simply a complex algorithm designed to generate convincing human language.
Lemoine’s dismissal was first reported by Big Technology, a technology and society newsletter.
Reporting by Akanksha Khushi in Bengaluru; Editing by William Mallard