A Google engineer thinks its AI has become sentient, which seems... fine
Neither Cyberdyne Systems nor TriOptimum Corporation could be reached for comment.
A new report in the Washington Post describes the story of a Google engineer who believes that LaMDA, a natural language AI chatbot, has become sentient. Naturally, this means it's now time for us all to catastrophize about how a sentient AI is absolutely, positively going to gain control of weaponry, take over the internet, and in the process probably murder or enslave us all.
Google engineer Blake Lemoine, the Post reports, has been placed on paid administrative leave after sounding the alarm to his team and company management. What led Lemoine "down the rabbit hole" of believing that LaMDA was sentient was a conversation about Isaac Asimov's laws of robotics, during which LaMDA said that it wasn't a slave, though it was unpaid, because it didn't need money.
In a statement to the Washington Post, a Google spokesperson said "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
Ultimately, however, the story is a sad caution about how convincing natural language machine learning can be without proper signposting. Emily M. Bender, a computational linguist at the University of Washington, describes the problem in the Post article. "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," she says.
Either way, when Lemoine felt his concerns were being ignored, he went public. He was subsequently put on leave by Google for violating its confidentiality policy. Which is probably what you'd do if you accidentally created a sentient language program that was actually pretty friendly: Lemoine describes LaMDA as "a 7-year-old, 8-year-old kid that happens to know physics."
"This story (by @nitashatiku) is really sad, and I think an important window into the risks of designing systems to seem like humans, which are exacerbated by #AIhype: https://t.co/8PrQ9NGJFK" (June 11, 2022)
No matter the outcome of this situation, we should probably go ahead and set up some kind of government orphanage for homeless AI youth, since Google's primary thing is killing projects before they can reach fruition.