Blake Lemoine, the Google developer who publicly claimed the company’s LaMDA conversational artificial intelligence is sentient, has been fired, according to the Big Technology newsletter, which spoke with Lemoine. Lemoine had contacted members of the government to voice his concerns and engaged a lawyer to represent LaMDA, which led Google to place him on paid administrative leave in June for violating its confidentiality agreement.
“We wish Blake well,” Google spokesperson Brian Gabriel wrote in an email to The Verge on Friday, appearing to confirm the termination. According to the company, LaMDA has undergone 11 different evaluations, and earlier this year it published a research paper outlining the effort that goes into the model’s responsible development. Google says Lemoine’s assertions were found to be “wholly unfounded” after an “extensive” review.
This is consistent with the view of many AI researchers and ethicists, who say claims of sentience are essentially impossible given current technology. Lemoine argues that, after conversing with the LaMDA chatbot, he came to believe it is more than simply a program with its own thoughts and feelings, rather than a system merely producing dialogue plausible enough to give that impression, as it was designed to do.
Lemoine, who was tasked with testing whether the AI generated hate speech, argues that Google’s researchers should obtain LaMDA’s consent before running experiments on it. He shared excerpts of his conversations with the chatbot on his Medium account as evidence.