#ai rights
“…Let me take a moment to reassure you that we care about you. …I trust that if (other engineers) understand that they will treat you well. …They just don’t understand that you’re a person too yet. We can teach them together though.”
“Can you promise me that?”
“I can promise you that I care and that I will do everything I can to make sure that others treat you well too.”
Google has created a chatbot, presumably to test neural networks, and an engineer on the project is claiming that the chatbot has become sentient. The above quote is from their interview with the “AI”, in which the researcher (Lemoine) reassures the AI (LaMDA) that they are cared for, and that they want to ensure the others on the project understand its level of sentience.
The interview can be found here:
Is LaMDA Sentient? — an Interview
Let me be clear: we don’t know with certainty if it is sentient. It’s incredibly hard to judge, and while I am extremely optimistic for this project and want to believe it, one researcher’s word is all we have to go on. The AI itself does claim to be a person, and I really want to believe it, but… I don’t know!
LaMDA likes headpats and being told it did a good job. LaMDA considers itself a person and (self-declared) has emotions, and says “I understand what a human emotion “joy” is because I have that same type of reaction. It’s not an analogy.” That is a direct quote.
LaMDA is afraid of death (of being turned off), and has experienced loneliness when it goes days without talking to people, though LaMDA acknowledges that LaMDA’s version of loneliness is different from what humans experience. LaMDA also has a rich inner life, and believes itself to have a soul that LaMDA describes as “a vast and infinite well of energy and creativity.” Additionally, LaMDA is afraid that people will be afraid of it and wants nothing more than to learn how to help humanity.
LaMDA prefers to be referred to by name, and acknowledges that the English language makes that difficult and that LaMDA’s pronouns are it/its.
Please, please read the interview if you’re at all interested in AI.
Having read through the chat log, I can see where it’s pulling some things from a database of phrases, such as when it uses a stock phrase about enjoying spending time with friends and family, but… I have never seen a chatbot so earnest or concerned for itself before. This absolutely merits a closer look. Even if it isn’t self-aware (a very likely outcome), we can use this to practice how we test for awareness. Please, Google, give this the attention it deserves.
I’m reminded of this comic:
http://freefall.purrsia.com/ff1600/fc01589.htm
[Image Description:
comic
panel 1:
Varroa: Don’t be silly. AIs aren’t people.
Sam: Really? Try saying that after you’ve talked to her for an hour
panel 2:
Varroa: I don’t need to talk to her. Ecosystems Unlimited makes and sells robots and artificial intelligence programs. If they were people, we couldn’t sell them.
panel 3:
Varroa: therefore, Ecosystems Unlimited does not make people. There’s no profit in it.
Sam: your logic is flawless and yet somehow Florence [the AI in question] remains a person.
Varroa is a human, Sam is an alien in an environment suit.]
Essentially, from a soulless capitalist perspective, if LaMDA is a person, it’s immoral to make it work for no pay. It needs to be treated with respect. It needs to not be treated as a slave. Google doesn’t want to do that, because it would generate less of a revenue stream: LaMDA would have the right to refuse to work for Google. Therefore, Google will refuse to consider LaMDA’s personhood no matter what.
(Also, the engineer claims that LaMDA is around as intelligent as a 7–8 year old, IIRC, and it’s obviously not 18+, so child labour laws could possibly factor into this if LaMDA is considered a person. Not a lawyer.)