OpenAI may have lost more than half a billion dollars over the past year
The AI company reportedly pays between $700,000 and $1 million a day in server expenses.
Every time you type a request into ChatGPT for cooking tips, TV recommendations, or help with your homework, it costs OpenAI money. But how much, exactly? Keeping the generative AI chatbot running could cost the tech company up to a million dollars daily.
The Information (via ComputerBase) reports that partly because of the exorbitant costs of running ChatGPT, OpenAI is believed to have roughly doubled its losses to around $540 million last year. It's worth noting that these numbers come from analyst estimates rather than any up-to-date financial reports, so they can't be treated as 100% accurate. But we know that some of the biggest expenses include those painfully high daily infrastructure costs and the fact that the company has been poaching top talent from Google and Apple.
According to a previous report, it takes immense computing power to fulfill the millions of daily queries on OpenAI's ChatGPT. The generative AI chatbot reportedly costs the company somewhere between $700,000 and $1 million a day to run.
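As a rough back-of-envelope check (our own arithmetic, not a figure from the report), that daily spend alone would account for a large slice of the estimated $540 million loss over a year:

```python
# Rough annualization of the reported daily server cost (illustrative only;
# the reported ~$540M loss also includes other expenses such as salaries).
daily_low, daily_high = 700_000, 1_000_000  # reported daily server spend, USD

annual_low = daily_low * 365    # ~$255.5 million per year
annual_high = daily_high * 365  # ~$365 million per year

print(f"Estimated annual server cost: ${annual_low / 1e6:.0f}M to ${annual_high / 1e6:.0f}M")
```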
Dylan Patel, the chief analyst at SemiAnalysis, told The Information that most of the cost is "based on expensive servers they require" and that those estimates were based on the GPT-3 model, not the current GPT-4 that OpenAI is using. So those figures are possibly quite conservative considering the extra complexity and, therefore, cost of each GPT-4 query.
Note that these estimates also date from before the launch of OpenAI's paid ChatGPT Plus premium service, which went live this year and gives subscribers quicker responses and priority access during peak hours. OpenAI expects to reach $200 million in sales this year and predicts $1 billion for next year, which could help offset some of those massive costs and maybe even turn that deficit around.
Jensen Huang, AI enthusiast and CEO of Nvidia (which is making bank off the GPUs that power those expensive servers), recently predicted that future AI models will demand even more power in the next ten years. The Nvidia boss also said we should soon expect to see "AI factories all over the world" powering various AI technologies.
This means Microsoft's recent $10 billion investment in OpenAI couldn't have come at a better time. According to the report, OpenAI CEO Sam Altman is also looking to raise up to $100 billion over the next couple of years.
To help reduce these costs, Microsoft has spent the last four years working with AMD to produce its own internal AI chip, Athena. The new chip could launch next year and should be more cost-efficient than running AI models on Nvidia GPUs.