OpenAI Scales Back Compute Target to $600 Billion While Eyeing Stronger Growth Through 2030
- Editorial Team
- 14 hours ago
- 4 min read

Artificial intelligence leader OpenAI is preparing for one of the largest investment phases in tech history as it recalibrates its long-term compute spending plan to approximately $600 billion through 2030 — a figure that reflects both the enormous capital required to support advanced AI systems and a more measured approach to aligning infrastructure costs with expected revenue growth.
According to people familiar with the matter, OpenAI’s new spending target covers the massive infrastructure needed to train and operate generative AI models that underpin services like ChatGPT, which now boasts hundreds of millions of users worldwide. The goal is to strike a balance between fueling AI innovation and maintaining financial discipline, particularly as the company builds toward a potential initial public offering (IPO) that could value it at close to $1 trillion.
A Strategic Shift from Trillion-Dollar Ambitions
Just a year earlier, OpenAI’s leadership, including CEO Sam Altman, had discussed ambitions to invest as much as $1.4 trillion in computing infrastructure and related resources, with plans for far-reaching projects such as 30 gigawatts of computing capacity, enough power to supply tens of millions of homes.
The revised compute budget of $600 billion suggests a more pragmatic roadmap focused on core infrastructure that directly supports AI model training and deployment while tethering spending more closely to projected revenues. In 2025, OpenAI reported annual revenues of approximately $13 billion, comfortably above its $10 billion projection, and kept operating expenses under target by spending about $8 billion.
Why Compute Spending Matters
Compute spending refers to the costs associated with purchasing, running and maintaining the hardware infrastructure that powers AI systems. These costs cover graphics processing units (GPUs), specialised AI accelerators, cloud service fees, data centre operations, energy, cooling systems and networking, all essential components that determine the speed and scale at which models are trained and inference is performed.
Training and operating modern AI models is exceedingly resource-intensive. The underlying hardware must process enormous volumes of data, and demand for specialised chips, whether from Nvidia, AMD or custom silicon programmes, continues to outpace supply. This trend has made compute the largest line item in OpenAI’s cost structure, prompting the company to emphasise predictable spending as it matures from a research lab into a competitive business entity.
IPO Preparations and Massive Fundraising Plans
OpenAI’s planned compute investment forms part of a broader fundraising push in which the company aims to raise more than $100 billion from strategic investors. Nvidia is nearing completion of a $30 billion investment, and other major technology firms — including SoftBank, Amazon and Microsoft — are expected to participate in the round. These commitments would make the funding effort one of the most ambitious in private-market history, potentially valuing OpenAI at $830 billion to $1 trillion well before its IPO.
Large capital inflows from investors are partly intended to secure long-term access to cutting-edge computing hardware and data-centre capacity, enabling OpenAI to build infrastructure that is scalable, resilient and future-ready. Strategic partnerships with major cloud and chip players also reflect the increasing importance of collaboration in managing the exponential demands of next-generation AI.
Revenue Growth and Financial Context
OpenAI expects its revenue to cross $280 billion by 2030, split nearly evenly between consumer subscriptions and enterprise AI services, ranging from API access to specialised tools for businesses.
However, soaring inference costs — expenses tied to running AI models in production — have been a pressure point. Reporting indicates that inference expenses more than quadrupled in 2025, which reduced OpenAI’s adjusted gross margin from 40 percent in 2024 to 33 percent in 2025. This has made the case for disciplined compute spending stronger, as unchecked infrastructure growth could outpace revenue gains and strain profitability well into the future.
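The margin compression described above follows directly from the definition of gross margin. The sketch below is illustrative: only the approximate 2025 revenue (~$13 billion) and the two margin figures (40 percent falling to 33 percent) come from the reporting, while the cost breakdown is a hypothetical back-of-envelope assumption chosen to match those margins.

```python
def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_revenue) / revenue

# Figures in billions of dollars. The 2024 revenue and both cost
# figures are hypothetical, picked so the margins match the reported
# 40% (2024) and ~33% (2025) values.
rev_2024, cost_2024 = 5.0, 3.0     # implies a 40% gross margin
rev_2025, cost_2025 = 13.0, 8.7    # implies a ~33% gross margin

print(f"2024 gross margin: {gross_margin(rev_2024, cost_2024):.0%}")
print(f"2025 gross margin: {gross_margin(rev_2025, cost_2025):.0%}")
```

The point of the arithmetic is that when the cost of serving models grows faster than revenue, margin falls even as the top line expands, which is the pressure the article attributes to rising inference costs.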
Despite this, OpenAI’s business remains on a high-growth trajectory, with usage and subscription figures climbing even as competition intensifies from rivals such as Alphabet’s Google, Anthropic and others. The emphasis on revenue diversification — including advertising tests, API monetisation and enterprise licensing — signals an effort to solidify financial footing before any public listing.
Industry Implications and Competitive Pressure
OpenAI’s compute strategy underscores a broader industry phenomenon: compute is now as strategic as intellectual property in the AI race. Firms with access to vast pools of processors, memory and data-centre infrastructure are better positioned to innovate rapidly and deliver new capabilities. This has sparked a global infrastructure arms race among technology giants, hyperscalers and AI labs alike.
As compute costs grow, so too does the conversation around sustainability, energy usage and supply-chain resilience. These investments will shape where and how AI innovation happens, from dense cloud campuses in the U.S. to GPU-rich facilities in Asia, and influence which regions attract the next wave of digital economic growth. Industry forecasts suggest that combined compute spending across major firms could run into the trillions of dollars by the end of the decade.
Looking Ahead: A New Era of AI Economics
OpenAI’s $600 billion compute spending target represents both ambition and caution. It acknowledges the massive capital required to maintain leadership in generative AI while signalling a renewed focus on fiscal discipline and investor confidence ahead of a possible IPO. As the AI industry transitions from explosive early growth into a more structured and financially oriented phase, how companies allocate compute — and control its costs — may determine who thrives in the next era of technology leadership.