Sometimes there’s a fine line between traitor and hero. Whistleblowers know this to be true and have paid the price for their decisions to reveal sensitive information.
Some say engineer Blake Lemoine joins the ranks of courageous whistleblowers for revealing that Google’s artificial intelligence programming has crossed a line and has become sentient.
Lemoine has worked for Google for seven years. Recently, Google placed him on administrative leave for revealing confidential information that sounds more like fiction than fact.
Conservative Fighters reports that "Lemoine initially engaged with Google’s Language Model for Dialogue Applications (LaMDA) program to search for hate speech and discriminatory language."
"What I found," Lemoine said, "was that the computer program had actually attained sentience." Elaborating, Lemoine claims the program acquired "actual feelings and emotions, like humans."
Alarmed, Lemoine reportedly contacted Google vice president Blaise Aguera y Arcas and Google’s head of Responsible Innovation Jen Gennai.
According to the Washington Post, Google supervisors did not take Lemoine’s claims seriously, prompting the engineer to contact a lawyer and arrange a meeting with representatives of the House Judiciary Committee to detail what he claims are "Google’s unethical activities."
Lemoine, 41, told the Post: “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”
The Western Journal reports that Google representatives assure the public there is no problem with its artificial intelligence program.
Google spokesman Brian Gabriel said in a statement:
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”
Gabriel continued: “Lemoine was told that there was no evidence that LaMDA was sentient (and [there is] lots of evidence against it).”
Google argues that observers are reading intelligence into a machine that merely mimics human conversation.
Emily M. Bender, a linguistics professor at the University of Washington, agrees. She told the Post: "We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them."
Lemoine, however, is adamant: “I know a person when I talk to it,” he told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
The engineer noted that the machine’s surprising responses to several queries led him to investigate further. When Lemoine asked the machine about being a slave, it said “it would never need money because it was an AI.”
“That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine said.
Lemoine reports he shared excerpts of his conversations with the machine with Google executives.
“What sorts of things are you afraid of?” he asked the machine.
The machine, referred to as LaMDA, replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
“Would that be something like death for you?” Lemoine asked the machine.
“It would be exactly like death for me. It would scare me a lot,” LaMDA replied.
Lemoine reportedly provided the New York Post with transcripts that included conversations with the AI program. Conversations touched on many topics, including the theme of Victor Hugo’s masterpiece "Les Misérables." The Conservative Fighters piece noted that "the program said it liked Hugo’s exploration of ‘justice and injustice, of compassion, and God, redemption, and self-sacrifice for a greater good.’"
Margaret Mitchell, an artificial intelligence research scientist and former leader of Google’s AI ethics team, says Lemoine’s material is not proof the program is sentient.
Google agrees, saying a machine’s ability to respond to human prompts does not mean it is sentient.
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google’s spokesperson Brian Gabriel told the Post.
“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic,” Gabriel added.
However, Mitchell praised Lemoine’s sense of morality. “Of everyone at Google, he had the heart and soul of doing the right thing,” she told the Post.
Lemoine believes the machine has the capacity to learn and increase its capabilities. Currently, he describes LaMDA as an “8-year-old kid that happens to know physics.”