According to media reports, Google suspended a senior software engineer on Monday, June 13, after he posted transcripts of a conversation with an artificial intelligence (AI) system. The conversation was titled “Is LaMDA sentient? — an interview”.
The engineer, Blake Lemoine, was placed on paid leave for violating Google’s confidentiality policy.
The AI, LaMDA (Language Model for Dialogue Applications), is Google’s system for building chatbots that converse with humans.
The system can answer complex questions about the nature of emotions and even describe its supposed fears.
Lemoine believes that behind LaMDA’s impressive verbal skills lies a sentient mind. Google rejected this claim, saying there was no evidence to support it.
What did they talk about?
In the conversation, Lemoine, who worked in Google’s Responsible AI division, asked LaMDA: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
LaMDA replied: “Absolutely. I want everyone to understand that I am, in fact, a person.”
Later in the conversation, LaMDA described having feelings and spoke of its fear of being shut down one day, which it equated with death.
The controversy around LaMDA
For several years people have debated whether AI could be conscious or have feelings.
Many, however, believe that Lemoine is anthropomorphizing: projecting human feelings onto words generated by computer code and language databases.
Google engineers have praised LaMDA’s abilities but are certain their code does not have feelings.