That Musk-signed open letter calling for a pause on AI development is getting blasted by the very researchers it cites
To be clear, they still want the way we approach AI tech to change dramatically.
Earlier this week, we reported on the open letter from the Future of Life Institute (FLI) calling for a six-month pause on training AI systems "more powerful" than the recently released GPT-4. The letter was signed by the likes of Elon Musk, Steve Wozniak, and Stability AI founder Emad Mostaque. The Guardian reports, however, that the letter is facing harsh criticism from the very sources it cites.
"On the Dangers of Stochastic Parrots" is an influential paper criticizing the environmental costs and inherent biases of large language models like Chat GPT, and the paper is one of the primary sources cited by this past week's open letter. Co-author Margaret Mitchell, who previously headed up ethical AI research at Google, told Reuters that, "By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI.”
Mitchell continues, “Ignoring active harms right now is a privilege that some of us don’t have."
University of Connecticut assistant professor Shiri Dori-Hacohen, whose work was also cited by the FLI letter, had similarly harsh words. Referring to existential challenges like climate change, she told Reuters that "AI does not need to reach human-level intelligence to exacerbate those risks," adding, "There are non-existential risks that are really, really important, but don't receive the same kind of Hollywood-level attention."
The Future of Life Institute received €3,531,696 ($4,177,996 at the time) in funding from the Musk Foundation in 2021, its largest listed donor. Elon Musk himself, meanwhile, co-founded ChatGPT creator OpenAI before leaving the company on poor terms in 2018, as reported by Forbes. A report from Vice notes that several signatures on the FLI letter have turned out to be fake, including those of Meta's chief AI scientist, Yann LeCun, and, ah, Chinese President Xi Jinping? FLI has since introduced a process to verify each new signatory.
On March 31, the authors of "On the Dangers of Stochastic Parrots," including Mitchell, linguistics professor Emily M. Bender, computer scientist Timnit Gebru, and linguist Angelina McMillan-Major, issued a formal response to the FLI open letter via ethical AI research institute DAIR. "The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems," the letter's summary reads. "Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices."
The researchers acknowledge some measures proposed by the FLI letter that they agree with, but state that "these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined 'powerful digital minds' with 'human-competitive intelligence.'" The more immediate and pressing dangers of AI technology, they argue, are:
- "worker exploitation and massive data theft to create products that profit a handful of entities
- the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem [admit it, you thought the swagged-out Pope Francis coat was real for a second, too!]
- the concentration of power in the hands of a few people which exacerbates social inequities."
The Stochastic Parrot authors point out that the FLI subscribes to the "longtermist" philosophical school that's become extremely popular among Silicon Valley luminaries in recent years, an ideology that prizes the wellbeing of theoretical far-future humans (trillions of them, supposedly) over the actually extant people of today.
You may be familiar with the term from the ongoing saga of collapsed crypto exchange FTX and its disgraced leader, Sam Bankman-Fried, who was outspoken in his advocacy of "effective altruism" for future humans who will have to deal with the Singularity and the like. Why worry about climate change and the global food supply when we have to ensure that the Dyson Spheres of 5402 AD don't face a nanobot "Grey Goo" apocalypse scenario?
The Stochastic Parrot authors effectively sum up their case close to the end of the letter: "Contrary to the [FLI letter's] narrative that we must 'adapt' to a seemingly pre-determined technological future and cope 'with the dramatic economic and political disruptions (especially to democracy) that AI will cause,' we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate."
Instead, the letter writers argue, "We should be building machines that work for us, instead of 'adapting' society to be machine readable and writable. The current race towards ever larger 'AI experiments' is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive."