Google fired Blake Lemoine, the engineer who said LaMDA was sentient





Blake Lemoine, the Google engineer who told The Washington Post that the company’s artificial intelligence was sentient, said the company fired him on Friday.

Lemoine said he received a termination email from the company on Friday along with a request for a video conference. He asked to have a third party present at the meeting, but he said Google declined. Lemoine said he is talking to lawyers about his options.

Lemoine worked for Google’s Responsible AI organization and, as part of the job, began talking with LaMDA, the company’s artificially intelligent system for building chatbots, last fall. He came to believe the technology was sentient after signing up to test whether the artificial intelligence could use discriminatory or hate speech.


In a statement, Google spokesperson Brian Gabriel said the company takes AI development seriously, has conducted 11 reviews of LaMDA, and has published a research paper detailing its responsible development efforts.

“If an employee shares concerns about our work, as Blake did, we consider them thoroughly,” he added. “We found Blake’s claims that LaMDA is sentient to be completely unfounded and worked to clarify that with him for many months.”

He attributed the discussions to the company’s open culture.

“It is regrettable that despite lengthy engagement on this topic, Blake chose to persistently violate clear employment and data security policies that include the need to protect product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake the best of luck.”

Lemoine’s firing was first reported in the Big Technology newsletter.

Lemoine’s interviews with LaMDA led to a wide-ranging discussion about recent advances in AI, public misunderstanding of how these systems work, and corporate responsibility. Google previously pushed out the heads of its Ethical AI division, Margaret Mitchell and Timnit Gebru, after they warned of the risks associated with this technology.


LaMDA uses Google’s most advanced large language models, a type of AI that recognizes and generates text. These systems cannot understand language or meaning, researchers say. But they can produce deceptively human-like speech because they are trained on vast amounts of text crawled from the internet to predict the next most likely word in a sentence.
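To make that mechanism concrete, here is a minimal sketch of next-word prediction. LaMDA itself is not publicly available, so this example assumes the Hugging Face transformers library and the public GPT-2 checkpoint as an illustrative stand-in; the prompt text is an invented example.

```python
# Minimal sketch: how a large language model scores candidate next words.
# Assumptions: Hugging Face `transformers` and PyTorch are installed, and
# GPT-2 stands in for LaMDA, which is not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# An invented prompt; the model has no understanding of it, only statistics.
prompt = "The engineer asked the chatbot whether it was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The scores at the final position rank every token in the vocabulary
# as a candidate for the next word.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

A chatbot built on such a model produces an entire reply by repeatedly sampling from these next-word distributions and feeding each chosen word back in as context, which is why the output can read as fluently human-like without the system understanding any of it.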

After LaMDA spoke to Lemoine about personhood and its rights, he began to investigate further. In April, he shared a Google Doc with top executives called “Is LaMDA Sentient?” which contained some of his conversations with LaMDA, in which it claimed to be sentient. Two Google executives investigated his claims and dismissed them.


Lemoine was previously placed on paid administrative leave in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he is considering starting his own AI company focused on a collaborative storytelling video game.


