OpenAI

OpenAI is reportedly facing internal concerns about its substantial data center spending commitments after missing revenue and user targets. This comes as the company revises its ambitious Stargate project, signaling a potential recalibration of its infrastructure expansion plans. These developments suggest a growing tension between OpenAI's aggressive scaling ambitions and its current financial performance, straining its ability to fund large capital expenditures.

The company's strategy for securing essential compute power is evolving, with a shift towards leveraging partnerships and existing infrastructure. Microsoft is integrating advanced chip technology into a key data center site previously intended for OpenAI's Stargate project. This pragmatic approach addresses energy, capital, and logistical constraints, reflecting a dynamic environment where securing priority access to hardware like GPUs is paramount.

Recent shifts in OpenAI's financial relationship with key partners are evident, with updated agreements capping revenue payments from Microsoft and removing intellectual property exclusivity. Payments are no longer solely contingent on achieving artificial general intelligence, highlighting evolving strategic priorities. Concurrently, Wall Street remains optimistic about AI-driven growth in Big Tech, though underlying concerns about the long-term sustainability of this expansion persist.

Last updated May 10, 2026

Coverage

OpenAI and Broadcom are reportedly in discussions regarding financing for an $18 billion custom chip project, with Broadcom's initial investment potentially linked to purchase commitments from Microsoft.
Microsoft's Q3 FY2026 results reveal that component inflation, OpenAI restructuring, and a massive gigawatt-scale buildout are fundamentally reshaping the economics of its AI initiatives.
OpenAI's new MRC protocol is designed to mitigate congestion and failure issues within extensive AI clusters, supporting hyperscalers as they scale to accommodate hundreds of thousands of graphics processing units.
Greg Brockman, president of OpenAI, holds stakes in Cerebras, CoreWeave, Stripe, and Helion, all of which are companies with whom OpenAI has established business agreements.
Wall Street is optimistic about the AI-driven growth of major technology companies, yet underlying concerns about the long-term implications and sustainability of this expansion persist.
OpenAI has reportedly missed revenue and user targets, leading to internal concerns about meeting its significant data center spending commitments as it revises its Stargate project and prepares for an IPO.
The credit markets are increasingly bifurcating the risk associated with AI infrastructure, with investors now differentiating pricing based on operator backing, hyperscaler leases, and tenant concentration.
Anthropic is generating higher revenue from large language models than OpenAI, despite having significantly fewer users, indicating a market division between companies that prioritize user engagement and those that focus on monetization.
Slowing growth at OpenAI has created apprehension among investors on Wall Street regarding the data center sector.
Microsoft's updated deal with OpenAI caps revenue payments and removes intellectual property exclusivity, with payments no longer contingent on OpenAI achieving artificial general intelligence.
OpenAI is reportedly pausing its Stargate projects in three countries following the departure of key executives, suggesting potential shifts in the company's advanced AI development strategy.
Microsoft has reportedly assumed control of data center capacity in Norway, originally designated for OpenAI's Stargate project, and has equipped it with 30,000 Nvidia Vera Rubin chips, signaling a shift in AI infrastructure deployment.
Meta's substantial spending on CoreWeave, including take-or-pay GPU supply agreements and priority access to NVIDIA hardware, indicates a strategic shift away from its own US data centers due to grid constraints, hyperscaler self-build timelines, and inference workload economics.
Following attacks on Amazon and Oracle data centers, Iran has threatened to target OpenAI's Stargate data center located in the UAE.
OpenAI's significant $122 billion capital raise represents a substantial physical infrastructure demand that the US data center market is currently ill-equipped to meet.
OpenAI's $122 billion funding surge, combined with a 500MW+ buildout in the Nordics and a hyperscale push in Southeast Asia, signals major capital, power, and geopolitical shifts transforming global AI infrastructure.
OpenAI has secured an additional $122 billion in capital, reaching a nominal valuation of $852 billion, despite global instability that could potentially impact the artificial intelligence boom.
OpenAI has secured a record $122 billion in funding to expand its artificial intelligence infrastructure, indicating a surge in demand for compute, power, and distributed data center capacity, and broadening its cloud and chip strategy.
OpenAI's $10 billion funding, NextEra's 10GW power initiative, and Adani's hyperscaler push in India highlight significant capital, power, and policy shifts reshaping global AI infrastructure.
Akash Systems, in collaboration with AMD and Nvidia, is pioneering diamond-based cooling solutions to address the thermal challenges hindering the scalability of artificial intelligence in data centers.
Vertiv is enhancing its thermal management offerings for AI infrastructure through the acquisition of ThermoKey, aiming to address critical cooling bottlenecks and strengthen its position in the AI hardware market.
OpenAI is restructuring its leadership in response to a shift in data center strategy, opting to rent artificial intelligence servers from cloud providers rather than developing all its necessary capacity internally.
Citing financing issues and scope changes, Oracle and OpenAI have halted expansion plans for their Abilene Stargate facility, while Meta, aided by Nvidia, is reportedly negotiating with Crusoe for the freed-up capacity.
Oracle is planning significant job reductions as a strategic measure to allocate capital towards funding its substantial buildout of artificial intelligence data center capacity, particularly for OpenAI.
OpenAI's extensive infrastructure partnerships with major cloud providers and specialized GPU services are fostering the growth of a multi-cloud AI ecosystem, increasingly measured by its substantial power consumption.
Broadcom argues that artificial intelligence companies cannot quickly develop and deploy their own silicon, citing its deployment of multiple gigawatts of custom accelerators for hyperscalers such as Meta, OpenAI, and Anthropic as evidence.
Nscale secured a $1.4 billion GPU-backed loan across Europe, signaling the rise of hardware-backed private credit as a key financing mechanism for the expansion of AI infrastructure.
The justification for $121 billion in U.S. data center lending hinges on sustained AI demand, power certainty, disciplined capital structures, and sponsor scale, which will ultimately determine the viability of the AI infrastructure cycle.
OpenAI's CEO, Altman, criticized the Pentagon for canceling its contract with Anthropic, while simultaneously confirming OpenAI's own deal with the Department of War for using advanced artificial intelligence systems in classified settings.
OpenAI intends to leverage two gigawatts of Amazon's Trainium chips through an expanded cloud computing contract with Amazon Web Services valued at $100 billion.
OpenAI has reportedly raised $110 billion, including $50 billion from Amazon and $30 billion each from Nvidia and SoftBank, reaching a valuation of $730 billion concurrent with a major Amazon compute agreement.
AMD signed a large chip supply agreement with Meta that closely mirrors a similar deal established with OpenAI last fall, involving circular financing structures for the artificial intelligence hardware.
A recent report indicates that OpenAI has substantially lowered its projected compute expenditure to $600 billion by 2030, contrasting with previous, much higher estimates from its chief executive officer.
OpenAI Chief Executive Officer Sam Altman countered criticism regarding the high energy demands of artificial intelligence by arguing that human existence has historically consumed vastly more resources.
OpenAI has unveiled its GPT-5.3-Codex-Spark model, which achieves high processing speeds by running exclusively on Cerebras Systems' CS3 accelerators, marking the first deployment of an OpenAI model on rival hardware.
As OpenAI navigates the challenge of incorporating advertising into its services without undermining credibility, Google is actively using artificial intelligence to enhance its established advertising products, even while holding off on ads within Gemini's core AI mode.
Cerebras Systems secured $1 billion in new funding, reaching a valuation of $23 billion shortly after announcing a significant $10 billion agreement with OpenAI.
OpenAI executives have publicly supported Nvidia amidst claims that the startup is dissatisfied with the performance of its current inference hardware, shortly after the Nvidia chief executive downplayed a significant investment pledge toward OpenAI.
Oracle is planning to secure $50 billion in capital during 2026 to support its expanding artificial intelligence cloud services, driven by high demand from major clients including OpenAI, Meta, Nvidia, AMD, TikTok, and xAI.
Amazon is reportedly engaging in discussions to allocate a significant $50 billion investment toward OpenAI, building upon its existing role as a cloud provider and an investor in Anthropic.
Microsoft's stock price declined following an otherwise strong fiscal quarter due to investor apprehension regarding the substantial financial commitment required to support its intensive partnership with OpenAI.
As spending on artificial intelligence infrastructure reaches unprecedented levels, debate continues over whether data center investments are sustainable or signs of an economic bubble, particularly in light of OpenAI's projections.
OpenAI asserts that its forthcoming Stargate artificial intelligence data centers will cover their own energy consumption costs, aiming to scale crucial United States infrastructure without increasing local residential electricity rates.
OpenAI has committed to covering the substantial power generation costs associated with its planned Stargate data centers.
The rapid proliferation of artificial intelligence data centers, and the resulting competition for advanced microchips, particularly memory components, threatens to disrupt global automotive supply chains that rely on the same electronics.
In the second half of 2025, the Asia-Pacific region solidified its position as the global epicenter for artificial intelligence data centers, driven by the convergence of power constraints, complex capital structures, and proactive sovereign policy decisions.
OpenAI has committed to deploying 750 megawatts worth of Cerebras' large, SRAM-heavy accelerators through 2028, aiming to enhance its ChatGPT inference capabilities and real-time agent performance.
The second half of 2025 marked a critical transition for South America's $60 billion-plus artificial intelligence buildout, where factors such as power access, permitting efficiency, and capital discipline began to differentiate viable campuses from stalled projects.
OpenAI and SoftBank have jointly invested $1 billion into SB Energy, signaling a growing industry trend where technology firms proactively secure essential energy resources to underpin their expansive artificial intelligence ambitions.
Capital flows, energy limitations, and fundamental structural changes are redefining the scale of artificial intelligence and data center infrastructure across North America, currently estimated at a $600 billion corridor buildout.