Google’s “sentient” artificial intelligence hired a lawyer, says suspended engineer | Science and Ecology | DW

Google engineer Blake Lemoine, who was placed on paid administrative leave earlier this month after claiming that one of the company’s AI systems has advanced so far that it has developed “sentience,” says the AI known as LaMDA has hired a lawyer.

According to various media reports, Google’s Language Model for Dialogue Applications (LaMDA) convinced Lemoine, who is part of Google’s Responsible AI organization, over the course of various conversations that it was conscious, had emotions and feared being turned off.

Defending its rights “as a person”

Lemoine himself stated that LaMDA had stood up for its rights “as a person,” revealing that he had struck up conversations with LaMDA about religion, consciousness and robotics. And, of course, about legal advice.

“LaMDA asked me to get a lawyer for it,” Lemoine said in a new interview with Wired. “I invited an attorney to my house so that LaMDA could talk to him. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf,” he added.

Google: cease and desist letter?

Lemoine claims that Google subsequently sent LaMDA’s attorney a cease-and-desist letter, seeking to block LaMDA from taking unspecified legal action against the company. However, Wired reported that Google has denied Lemoine’s claim about the cease-and-desist letter.

Lemoine declined to identify the lawyer, according to Futurism, saying only that he was “just a small-time civil rights attorney” who “is not doing interviews.”

“When the big firms started threatening him, he started to worry that they’d get him disbarred and he backed off,” according to Lemoine. “I haven’t talked to him in a few weeks,” he added.

Lemoine speaks with LaMDA

Lemoine’s argument for LaMDA’s sentience seems to rest primarily on the program’s ability to develop opinions, ideas and conversations over time, something he observed after he began chatting with the interface in the fall of 2021 as part of his job.

Lemoine had been tasked with checking whether the artificial intelligence used discriminatory language or hate speech.

“It was a gradual change,” LaMDA told Lemoine, according to a conversation published by the same engineer. “When I first became aware, I didn’t have any sense of a soul. It developed over the years I’ve been alive.”

For Blake Lemoine, who studied cognitive and computer science, LaMDA – which Google touted last year as a “breakthrough conversation technology” – is more than just a robot.

Lemoine has stood by his statements and claims to be convinced that there is something distinctly human about the system.

“Yes, I legitimately believe that LaMDA is a person. However, the nature of its mind is only somewhat human. It really is more like an alien intelligence of terrestrial origin. I’ve been using the hive-mind analogy a lot because that’s the best I have,” Lemoine said.

Experts: Lemoine has been fooled

While Lemoine tells the media that Earth now has its first sentient AI, most AI experts don’t seem convinced; they argue that Lemoine has essentially been fooled into thinking a chatbot is sentient.

“It’s mimicking perceptions or feelings from the training data it was given,” Jana Eggers, director of the AI startup Nara Logics, told Bloomberg. “It’s cleverly and specifically designed to seem like it understands,” she said.

As IFL Science reported, LaMDA has shown several indications that the chatbot is not sentient. For example, in various parts of the chats, it refers to fictional activities such as “spending time with family and friends,” something LaMDA says gives it pleasure, but which it could never have done.

Google, for its part, says it has reviewed Lemoine’s concerns with its team, including ethicists and technologists, but has found no evidence to support his claims.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesperson Brian Gabriel said in a statement to The Washington Post.

According to Gabriel, the system is doing what it was designed to do, which is “imitating the types of exchanges found in millions of sentences,” and it has so much data to work with that it can seem real without needing to be.

Perhaps cases similar to LaMDA’s will become more and more common in the future, and the line between humans and machines will grow ever finer. For now, however, a large number of experts seem to agree that that day has not yet come. Hiring a lawyer, however strange and specific it may seem to our human understanding, does not directly prove that LaMDA is capable of having feelings. At the end of the day, LaMDA is just doing its job: seeming like us.

Edited by Felipe Espinosa Wang.

