
Another Ethics Firing at Google
Blake Lemoine, who saw “sentient AI” in Google LaMDA chat project, let go over going public without permission; latest in a string of ethics casualties at Google
By John P. Desmond, Editor, AI in Business

Google has let go another software engineer over a dispute about what was said publicly about developing Google technology, the latest example in a pattern of failures to communicate about AI ethics.
In the latest incident, reported on June 11 in The Washington Post, Google engineer Blake Lemoine, who worked in Google’s Responsible AI organization, reported his impression that LaMDA, short for Language Model for Dialogue Applications, had the intelligence of a human and was therefore sentient.
Lemoine began talking to LaMDA in the fall as part of his job to test whether Google’s large language model was using discriminatory or hate speech. He became concerned about the power of the model. He worked with a collaborator to present evidence to Google that LaMDA was sentient. After his managers investigated and dismissed the claims, Lemoine went public with his concerns. He was then placed on paid administrative leave.
Here are other related incidents:
Google researcher Satrajit Chatterjee led a team of scientists who challenged a paper, published in the scientific journal Nature last year, claiming that computers could design certain parts of a computer chip faster and better than human beings. Google told the team it would not publish its rebuttal and subsequently fired Dr. Chatterjee, according to an account in The New York Times.
Google defended the paper in a public statement. “We thoroughly vetted the original Nature paper and stand by the peer-reviewed results,” said Zoubin Ghahramani, a vice president at Google Research, in a written statement to The New York Times. He added, “We also rigorously investigated the technical claims of a subsequent submission, and it did not meet our standards for publication.”
In December 2020, Google let go Timnit Gebru, a member of its ethical AI team who pushed to publish a research paper that raised concerns about flaws in the large language models Google was investigating. Google later let go the leader of Gebru’s ethics team, Margaret Mitchell. Gebru subsequently launched her own research organization focused on preventing harm from AI systems. (See AI Ethicist Fired from Google Pursues AI Ethics With Her Own Launch, December 2021.)
Gebru and Mitchell recently co-wrote an opinion piece in The Washington Post headlined “We Warned Google That People Might Believe AI is Sentient.” Of Lemoine’s concern that the large language model appears to be sentient, the authors stated, “It was exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves. Lemoine’s claim shows we were right to be concerned — both by the seductiveness of bots that simulate human consciousness, and by how the excitement around such a leap can distract from the real problems inherent in AI projects.”
The concern Gebru and Mitchell cite is the perception that some AI may appear sentient, not so much whether the AI actually is. “One of the risks we outlined was that people impute communicative intent to things that seem humanlike,” the authors stated. “Trained on vast amounts of data, LLMs generate seemingly coherent text that can lead people into perceiving a ‘mind’ when what they’re really seeing is pattern matching and string prediction.”
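To make “pattern matching and string prediction” concrete, the snippet below is a toy sketch of next-word prediction from counted word pairs. It is not how LaMDA works (LaMDA is a large neural network trained on vastly more data); the tiny corpus and the generate function are invented purely for illustration of the basic idea.

```python
# Toy illustration of "string prediction": a bigram model that picks the most
# frequent next word seen in its training text. Large language models use
# neural networks rather than word counts, but the task is the same in spirit:
# predict the next token, over and over.
from collections import Counter, defaultdict

corpus = (
    "i feel happy when i talk to people . "
    "i feel sad when i am alone . "
    "i talk to people every day ."
).split()

# Count which word tends to follow which.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly choosing the most common next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in next_words:
            break
        word = next_words[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

# Prints fluent-looking but purely statistical text, something like
# "i feel happy when i feel happy when i" -- no mind, just counting.
print(generate("i"))
```

The output reads like speech because the word pairs came from speech-like text, which is the point the authors are making about perceived “minds.”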
The writers challenged the media not to buy into hype declaring the superintelligence of LLMs, and challenged Google to discuss the potential harms of such systems. “We urge the media to focus on holding power to account, rather than falling for the bedazzlement of seemingly magical AI systems,” they stated. In that vein, AI in Business wrote to Google press communications to ask whether there is any public forum in which Google would engage in a dialogue with those challenging the company to ensure the potentially harmful effects of LLMs are taken into account. No response had been received as of press time.
One observer likened Google’s track record of dismissing members of its ethics team who voice unwanted challenges to “shanghaiing,” the practice of kidnapping people by trickery to serve as sailors, once associated with the British Royal Navy. In one account, a shanghaied crew woke the next morning in the hold of the ship to hear an officer explain what they could expect. “Any questions?” the officer asked. A would-be sailor stood to ask one and was shot. “Any more questions?” the officer asked. The approach has a way of stifling discussion.
Maybe Lemoine Was Fooled by “the Eliza Effect”
Elsewhere, AI researcher Brian Christian, author of “Algorithms to Live By” and “The Most Human Human,” suggested in an account he wrote in The Atlantic that Lemoine could have been fooled by “the Eliza effect” into thinking LaMDA was sentient.
The effect is named for Eliza, the first chatbot program designed to mimic human conversation, created by MIT professor Joseph Weizenbaum in the 1960s. The story goes that his secretary came to believe she was having meaningful dialogues with the system, despite Weizenbaum’s insistence that the program relied on simple pattern-matching rules, a form of anthropomorphism that came to be known as the Eliza effect.
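For a sense of how little machinery such an impression can require, the sketch below is a minimal Eliza-style responder. The keyword patterns and canned replies are invented for illustration; this is not Weizenbaum’s original script, only the general technique of reflecting the user’s own words back.

```python
# Minimal Eliza-style responder: a handful of keyword rules that echo the
# user's words back in question form, which is often enough to create the
# impression of a listener who understands.
import re

RULES = [
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bi am (.*)", "How long have you been {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
    (r"\byes\b|\bno\b", "Why do you say that?"),
]

def respond(message: str) -> str:
    """Return a canned reflection based on simple keyword pattern matching."""
    text = message.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I feel nobody listens to me"))  # Why do you feel nobody listens to me?
print(respond("My job is stressful"))          # Tell me more about your job is stressful.
```

There is no model of the conversation at all, only string substitution, yet the replies can feel attentive, which is the heart of the Eliza effect.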
Lemoine became convinced after asking LaMDA to describe its soul and listening to its answer. “What may sound like introspection is just the system improvising in an introspective verbal style,” Christian stated, adding, “LaMDA fooled Lemoine.”
It would be easy for Google to engage a set of researchers to vote on whether LaMDA passes the Turing Test, devised by Alan Turing in 1950 as a way to determine whether a computer can exhibit responses indistinguishable from those of a human, Christian suggested. However, “The question of sentience, the true crux of the issue, begins to stand more apart from mere verbal facility,” he stated.
He described this area of research as a “sprawling, open frontier,” with AI rapidly moving the goalposts for what machines can do. Still, he wrote, “Determining who and what is or is not sentient is one of the defining questions of almost any moral code. And yet, despite its utterly central position in our ethics, this question is utterly mysterious.”
He views the Lemoine episode as an invitation to reckon with how little is understood about this mystery.
Read the source articles and information in The Washington Post, The New York Times, an opinion column in The Washington Post and The Atlantic.
(Write to the editor here.)