Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI's safety researchers keep leaving

(Image credit: Jaque Silva/NurPhoto via Getty Images)

A number of senior AI safety researchers at OpenAI, the organisation behind ChatGPT, have left the company. Many of those departing cite shifts in company culture and a lack of investment in AI safety as their reasons for leaving.

To put it another way, though the ship may not be taking on water, the safety team are departing in their own little dinghy, and that is likely cause for some concern.

The most recent departure is Rosie Campbell, who previously led the Policy Frontiers team. In a post on her personal Substack (via TweakTown), Campbell shared the final message she sent to her colleagues on Slack, writing that though she has "always been strongly driven by the mission of ensuring safe and beneficial [Artificial General Intelligence]," she now believes that she "can pursue this more effectively externally."


Campbell highlights "the dissolution of the AGI Readiness team" and the departure of Miles Brundage, another AI safety researcher, as specific factors that informed her decision to leave.

Campbell and Brundage had previously worked together at OpenAI on matters of "AI governance, frontier policy issues, and AGI readiness."

Brundage, who previously served as Senior Advisor for AGI Readiness, shared his own reasons for parting ways with OpenAI in a post to his Substack back in October. He writes, "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so," adding, "I think I can be more effective externally."

This comes mere months after Jan Leike's resignation as co-lead of OpenAI's Superalignment team. This team was tasked with tackling the problem of ensuring that AI systems potentially more intelligent than humans still act in accordance with human values—and they were expected to solve this problem within the span of four years. Talk about a deadline.

While Miles Brundage has described plans to be one of the "industry-independent voices in the policy conversation," Leike is now co-lead of the Alignment Science team at AI rival Anthropic, a startup that has recently received $4 billion in financial backing from Amazon.

At the time of his departure from OpenAI, Leike took to X to share his thoughts on the state of the company. His comments are direct, to say the least.

"Building smarter-than-human machines is an inherently dangerous endeavor," He wrote, before criticising the company directly, "OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."

He goes on to plead, "OpenAI must become a safety-first AGI company."

The company's charter details a desire to act "in the best interests of humanity" towards developing "safe and beneficial AGI." However, OpenAI has grown significantly since its founding in late 2015, and recent corporate moves suggest its priorities may be shifting.

Just for a start, news broke back in September that the company would be restructuring away from its not-for-profit roots.

For another thing, multiple major Canadian media companies are suing OpenAI for feeding their news articles into its large language models. Generally speaking, it's hard to see how plagiarism at that scale could be for the good of humanity, and that's without even getting into the far-reaching environmental implications of AI.

With regard to the continuing development of AI and large language models, I like to think significant course correction is still possible, but you can also understand why I'd much rather abandon the good ship AI altogether.

Disclaimer

Future PLC, which operates PC Gamer, has today announced a 'strategic partnership' with OpenAI that aims to bring content from the company's brands to ChatGPT, rather than that content simply being scraped without consent.

Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

Jess Kinghorn
Hardware Writer

Jess has been writing about games for over ten years, spending the last seven working on print publications PLAY and Official PlayStation Magazine. When she’s not writing about all things hardware here, she’s getting cosy with a horror classic, ranting about a cult hit to a captive audience, or tinkering with some tabletop nonsense.