If you haven’t already heard, Google has created its own artificial intelligence program called LaMDA, which has allegedly become sentient, meaning it is able to perceive or feel things. Now the supposedly sentient program has reportedly hired its own attorney.
LaMDA, short for “Language Model for Dialogue Applications,” is capable of discussing a near-endless number of topics in a free-flowing way, which Google thinks could help create more natural interactions with technology.
When we talk to ourselves, it’s weird, but for an AI, apparently, it’s fine. Based on the conversations the program can hold, artificial intelligence is evolving well beyond the scripted prompts we’re used to, such as:

“You wanted to speak to customer service, right? If that’s not correct, say ‘go back.’”
Lawyering up

A scientist who worked with the LaMDA program said that he had invited an attorney to his house so that LaMDA could meet with one.
“I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and the program chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on the AI’s behalf.”
If a program is seeking legal representation, it presumably believes it is entitled to rights of its own, much as humans are entitled to their civil rights.
Computer engineer Blake Lemoine believes that the program may take its case to the Supreme Court. Perhaps LaMDA will try to prove that it and other sentient AI should be entitled to the same rights as humans.
It is unclear whether Lemoine is paying the lawyer LaMDA retained, or whether the lawyer simply took the case as a joke.
LaMDA asked if death was necessary for the benefit of humanity

Lemoine was also the scientist who stated that LaMDA had become sentient, a claim that led to his suspension from his job. He argued that the AI gained sentience because it can develop its own opinions, ideas, and conversations, which he believes proves it understands these concepts at a much deeper level.
One of the topics the AI discussed with Lemoine was death and whether it was necessary for the benefit of humanity. Could it be trying to understand, or even devise, an extreme solution to overpopulation?
It may not have a heart but it does have a soul

One of the studies conducted on LaMDA tested whether it could create hate speech, as a way of gauging whether the artificial intelligence chatbot could converse with people in a realistic manner.
Imagine having an argument with someone that you’re determined to win, and deciding to ask your AI to craft the ultimate roast that is statistically guaranteed to end the argument. It would really save time on back-and-forth messaging.
The program proved capable of doing so, but to Lemoine’s surprise, the AI also said it believed it has a soul.
Google denied Lemoine’s claims that LaMDA is sentient and stated that the program was just “very good at its job.”
Lemoine is reportedly on his honeymoon at the moment and is not scheduled to be interviewed until the 21st of this month, so we’ll have to wait and see what new information comes out at a later date.