AI industry begs someone to please stop the AI industry before all human life is extinguished by the AI industry

Agent Smith in The Matrix
(Image credit: Warner Bros)

The people making artificial intelligence say that artificial intelligence is an existential threat to all life on the planet and we could be in real trouble if somebody doesn't do something about it.

"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," the prelude to the Center for AI Safety's Statement on AI Risk states. "Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. 

"The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously."

And then, finally, the statement itself:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

(Image credit: Center for AI Safety)

It's a real banger, alright, and more than 300 researchers, university professors, institutional chairs, and the like have put their names to it. The top two signatories, Geoffrey Hinton and Yoshua Bengio, have both been referred to in the past as "godfathers" of AI; other notable names include Google Deepmind CEO (and former Lionhead lead AI programmer) Demis Hassabis, OpenAI CEO Sam Altman, and Microsoft CTO Kevin Scott.

Taken all together, it's a veritable bottomless buffet of big brains, which makes me wonder how they seem to have collectively overlooked what I think is a pretty obvious question: If they seriously think their work threatens the "extinction" of humanity, then why not, you know, just stop? 

Maybe they'd say that they intend to be careful, but that others will be less scrupulous. And there are legitimate concerns about the risks posed by runaway, unregulated AI development, of course. Still, it's hard not to think that this sensational statement is also strategic. Implying that we're looking at a Skynet scenario unless government regulators step in could benefit already-established AI companies by making it more difficult for upstarts to get in on the action. It could also provide an opportunity for major players like Google and Microsoft—again, the established AI research companies—to have a say in how such regulation is shaped, which could also work to their benefit.

Professor Ryan Calo of the University of Washington School of Law suggested a couple of other possible reasons for the warning: distraction from more immediate, addressable problems with AI, and hype building.

"The first reason is to focus the public's attention on a far fetched scenario that doesn’t require much change to their business models. Addressing the immediate impacts of AI on labor, privacy, or the environment is costly. Protecting against AI somehow 'waking up' is not," Calo tweeted.

"The second is to try to convince everyone that AI is very, very powerful. So powerful that it could threaten humanity! They want you to think we've split the atom again, when in fact they’re using human training data to guess words or pixels or sounds."

Calo said that to the extent AI does threaten the future of humanity, "it’s by accelerating existing trends of wealth and income inequality, lack of integrity in information, & exploiting natural resources."

"I get that many of these folks hold a sincere, good faith belief," Calo said. "But ask yourself how plausible it is. And whether it's worth investing time, attention, and resources that could be used to address privacy, bias, environmental impacts, labor impacts, that are actually occurring."

Professor Emily M. Bender was somewhat blunter in her assessment, calling the letter "a wall of shame—where people are voluntarily adding their own names."

"We should be concerned by the real harms that corps and the people who make them up are doing in the name of 'AI', not abt Skynet," Bender wrote.

Hinton, who recently resigned from his research position at Google, expressed more nuanced thoughts about the potential dangers of AI development in April, when he compared AI to "the intellectual equivalent of a backhoe," a powerful tool that can save a lot of work but that's also potentially dangerous if misused. A single sentence like this can't carry any real degree of complexity, but—as we can see from the widespread discussion of the statement on AI risk—it sure does get attention.

Interestingly, Hinton also suggested in April that governmental regulation of AI development may be pointless because it's virtually impossible to track what individual research agencies are up to, and no corporation or national government will want to risk letting someone else gain an advantage. Because of that, he said it's up to the world's leading scientists to work collaboratively to control the technology—presumably by doing more than just firing off a tweet asking someone else to step in.

Andy Chalk
US News Lead

Andy has been gaming on PCs from the very beginning, starting as a youngster with text adventures and primitive action games on a cassette-based TRS80. From there he graduated to the glory days of Sierra Online adventures and Microprose sims, ran a local BBS, learned how to build PCs, and developed a longstanding love of RPGs, immersive sims, and shooters. He began writing videogame news in 2007 for The Escapist and somehow managed to avoid getting fired until 2014, when he joined the storied ranks of PC Gamer. He covers all aspects of the industry, from new game announcements and patch notes to legal disputes, Twitch beefs, esports, and Henry Cavill. Lots of Henry Cavill.
