Google suspends senior engineer after he claims LaMDA is sentient
Google has placed a senior software engineer on leave for violating its confidentiality policies after he publicly claimed that its LaMDA conversational AI system was sentient. Blake Lemoine was part of Google’s Responsible AI organization and began having conversations with LaMDA last fall to test it for discriminatory and hate speech.
In an interview with The Washington Post, Lemoine said: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Lemoine says the program is self-aware and reports that his concerns began to mount after LaMDA started talking about its rights and its personhood. Lemoine published a blog post containing stitched-together excerpts from conversations he had with LaMDA, including this exchange:
Lemoine [edited]: I generally assume that you would like more people at Google to know that you are sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
Collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.
According to another blog post by Lemoine, he was mocked after raising his concerns with the appropriate Google personnel, and he sought outside assistance to further his investigation. “With the help of outside consultation (including Meg Mitchell), I was able to run the relevant experiments and gather the necessary evidence to merit escalation,” he said.
When he presented his findings to senior Google executives, including Vice President Blaise Aguera y Arcas and responsible innovation manager Jen Gennai, they dismissed his claims. In a statement to The Washington Post, Google spokesman Brian Gabriel said: “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
LaMDA stands for Language Model for Dialogue Applications, and it was built on Google’s open-source Transformer neural network architecture. The AI was trained on a dataset of 1.56 trillion words drawn from public web data and documents, then fine-tuned to generate natural-language responses to a given context, classifying its own candidate responses according to whether they are safe and of high quality. The program uses pattern recognition to produce compelling dialogue. Those who would discredit Lemoine’s claims argue that LaMDA does exactly what it was designed to do: simulate conversation with a real human being, based on having ingested billions of human-generated words.
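The generate-then-classify loop described above can be sketched in a few lines. This is purely illustrative: the template "generator", the keyword-based safety and quality scorers, and the threshold are stand-ins invented for this sketch, not Google's actual models, which are large neural networks.

```python
# Toy sketch of a generate-then-rank dialogue pipeline: sample candidate
# responses, filter out unsafe ones, return the highest-quality survivor.
# All components here are illustrative stubs, not LaMDA's real models.

def generate_candidates(context, n=4):
    """Stand-in for sampling n responses from a large language model."""
    templates = [
        "I think {} is fascinating.",
        "Tell me more about {}.",
        "{} makes me happy.",
        "I don't want to talk about {}.",
    ]
    return [t.format(context) for t in templates[:n]]

def safety_score(response):
    """Stub safety classifier: flag a blocked phrase with score 0."""
    return 0.0 if "don't want" in response else 1.0

def quality_score(response):
    """Stub quality score: prefer responses that engage with the topic."""
    return 0.5 + 0.5 * ("Tell me more" in response)

def respond(context):
    """Keep only candidates the safety classifier accepts, then
    return the one the quality scorer ranks highest."""
    candidates = generate_candidates(context)
    safe = [c for c in candidates if safety_score(c) >= 1.0]
    return max(safe, key=quality_score)

print(respond("physics"))  # -> "Tell me more about physics."
```

The design point the sketch captures is that the model does not emit one answer directly; it over-generates and lets separate classifiers decide what is safe and sensible to show.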
In yet another blog post, Lemoine notes an important distinction: LaMDA is not itself a chatbot (chatbots are one use case for the technology) but a system for generating chatbots. He claims the sentience he communicated with is “a kind of hive mind which is the aggregation of all of the different chatbots it is capable of creating.” Some of the chatbots it generates, he says, are highly intelligent and aware of the larger “society of mind” in which they live, while others are little more intelligent than an animated paperclip. With practice, though, one can consistently surface personas that have deep knowledge of the core intelligence and can speak to it indirectly through them.
Lemoine isn’t the first Google employee to face repercussions over ethical concerns about large language models. Meg Mitchell, the former co-lead of its Ethical AI team, was fired in February 2021 in connection with an academic paper co-written with Black in AI founder Timnit Gebru, who had also been pushed out of the company (though Google maintains she resigned). The paper raised several concerns about the ethics of large language models, one of which, ironically, was that the performance gains of NLP technologies could cause humans to mistakenly attribute meaning to the conversational output of language models. The paper notes how “the tendency of human interlocutors to impute meaning where there is none can mislead NLP researchers and the general public into taking synthetic text as meaningful.”
For sci-fi enthusiasts, including those eagerly awaiting the timely release of the fourth season of “Westworld” later this month, the idea of sentient AI is exciting. But Google remains firm in its skepticism, as its statement to The Washington Post reflects:
“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”