Anthropic Unlocks Nuclear Power!

Anthropic has committed to spending hundreds of billions of dollars on computing infrastructure through a series of deals with Google and Broadcom, as the artificial intelligence start-up moves to secure the capacity needed to sustain rapid growth in demand for its models.

The agreements underline the escalating scale of investment required to compete at the frontier of AI, where access to computing power has become as critical as talent or data. For Anthropic, the challenge is no longer simply building advanced systems, but ensuring it has the infrastructure to run them at commercial scale.

The San Francisco-based group said it would draw on “multiple gigawatts” of capacity from Google’s tensor processing units, or TPUs, alongside its cloud services. These chips, designed in-house by Google, are intended to rival the graphics processing units produced by Nvidia, which have become the dominant hardware underpinning the current wave of AI development.

A significant portion of that capacity will come via a parallel arrangement involving Broadcom, which is working with Google to develop custom versions of its TPU hardware. Around 3.5 gigawatts of computing power is expected to be delivered through this partnership from next year, according to filings related to the deal.

In total, Anthropic is set to gain access to close to 5 gigawatts of additional capacity over the coming years, according to a person familiar with the terms. The scale is difficult to overstate. A single gigawatt of AI computing infrastructure is broadly comparable to the output of a nuclear reactor, and the cost of building that capacity is estimated at between $35bn and $50bn, with chips accounting for the majority of the expenditure.

On that basis, Anthropic’s commitments imply a total outlay running well into the hundreds of billions of dollars, placing it among the largest infrastructure spenders in the technology sector despite remaining lossmaking.
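The arithmetic behind that claim is straightforward to sketch. A minimal illustration, assuming the figures reported above (roughly 5 gigawatts of additional capacity at an estimated $35bn to $50bn per gigawatt):

```python
# Rough implied-outlay estimate from the figures reported in the article:
# ~5 gigawatts of new capacity at an estimated $35bn-$50bn per gigawatt.
# These inputs are the article's reported estimates, not confirmed deal terms.
GIGAWATTS = 5
COST_PER_GW_LOW_BN = 35   # lower bound, $bn per gigawatt
COST_PER_GW_HIGH_BN = 50  # upper bound, $bn per gigawatt

low = GIGAWATTS * COST_PER_GW_LOW_BN
high = GIGAWATTS * COST_PER_GW_HIGH_BN
print(f"Implied outlay: ${low}bn to ${high}bn")  # $175bn to $250bn
```

Even the low end of that range sits well above the total capital most listed technology companies deploy in a year, which is what places Anthropic among the sector's largest infrastructure spenders.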

The urgency behind these investments reflects the pace at which demand for AI tools is accelerating. Anthropic said its annualised revenue, calculated from the run rate of the preceding 28 days, surged from $9bn at the end of last year to $30bn by the end of March. Much of that growth has been driven by strong uptake of its products, including its coding-focused system, Claude Code.

Krishna Rao, the company’s chief financial officer, said the spending was necessary to keep pace with that expansion while continuing to push the boundaries of model development. The group is attempting to balance two competing pressures: scaling its services to meet current demand while investing heavily in training the next generation of systems.

That tension is becoming a defining feature of the AI industry. Building state-of-the-art models requires vast amounts of computing power, which in turn demands long-term commitments to infrastructure providers. At the same time, revenues, while growing quickly, are still catching up with the scale of capital being deployed.

Anthropic is not alone in pursuing this strategy. Its main rival, OpenAI, has struck a series of agreements with chipmakers and cloud providers including Broadcom, Nvidia and AMD, in an effort to lock in supply and avoid bottlenecks.

The result is an increasingly complex web of relationships in which large technology groups act simultaneously as suppliers, customers and investors. Google, for example, has invested billions of dollars in Anthropic and held a stake of around 14 per cent as of last year, even as it supplies the infrastructure on which the company depends.

This overlap has drawn scrutiny from regulators and industry observers, who question whether such arrangements could entrench the dominance of a small number of players. For the companies involved, however, the logic is straightforward. Securing access to scarce computing resources is seen as essential to maintaining a competitive position in a rapidly evolving market.

Google, for its part, is using the partnership to expand the reach of its in-house chips. Its TPUs already power its own AI models, including those within its Gemini family, and the deal with Anthropic provides an opportunity to demonstrate their effectiveness to external customers.

This puts the company in more direct competition with Nvidia, whose GPUs have become the standard for training and running large AI systems. While Nvidia retains a strong lead, alternative architectures are gaining traction as developers look for ways to reduce costs and diversify supply.

Broadcom’s role adds another layer to the arrangement. The company has agreed to develop and supply custom TPU designs for Google under a long-term agreement that runs through to 2031. Its shares rose following news of the deal, reflecting investor confidence in the growing demand for specialised AI hardware.

Yet the scale of these commitments raises questions about sustainability. Anthropic has raised tens of billions of dollars from investors, including a $30bn funding round earlier this year that valued the company at $380bn. Like its peers, it is betting that continued growth and eventual market dominance will justify the heavy upfront spending.

Critics argue that the model relies on a degree of circularity. Technology groups invest in AI labs, supply them with infrastructure and, in some cases, act as customers for their products. This can create the appearance of strong demand while masking the underlying economics.

Supporters counter that such arrangements are a necessary feature of an industry in its early stages, where the costs of entry are exceptionally high and the potential rewards uncertain. In their view, the ability to deploy capital at scale is itself a competitive advantage.

Anthropic’s latest deals build on an earlier agreement with Google announced last October, which was described at the time as being worth tens of billions of dollars and expected to bring more than a gigawatt of capacity online in 2026. The expansion announced this week significantly increases that commitment.

The company has also been active elsewhere. In November, it agreed to spend $50bn on new data centres in Texas and New York with cloud infrastructure group Fluidstack, and committed a further $30bn to capacity purchases from Microsoft and Nvidia. Taken together, these agreements point to a strategy of securing supply across multiple providers, reducing reliance on any single partner.

This diversification reflects both opportunity and risk. On one hand, spreading commitments across different suppliers can help mitigate shortages and improve resilience. On the other, it adds complexity and increases the financial burden, particularly if demand fails to meet expectations.

For now, the trajectory remains firmly upward. The rapid increase in Anthropic’s revenue suggests that businesses are willing to pay for access to advanced AI tools, particularly in areas such as software development where productivity gains can be significant.

The broader question is how long that growth can be sustained, and whether it will be sufficient to offset the enormous costs of building and operating the required infrastructure.

The answer will depend in part on how the competitive landscape evolves. As more companies enter the market and existing players expand their offerings, pricing pressure could increase. At the same time, improvements in model efficiency may reduce the amount of computing power required for certain tasks.

There is also the issue of regulation. Governments are beginning to take a closer interest in the AI sector, focusing on areas such as data use, competition and systemic risk. Any changes to the regulatory environment could affect both the cost and the pace of development.

For Anthropic, the immediate priority is clear. Securing access to computing power at scale is seen as a prerequisite for staying in the race. Without it, even the most advanced models risk becoming commercially irrelevant.

The company’s willingness to commit vast sums to infrastructure suggests it believes the opportunity justifies the risk. Whether that judgement proves correct will depend on factors that remain difficult to predict, from the trajectory of demand to the actions of competitors and regulators.

What is clear is that the economics of AI are entering a new phase. The focus is shifting from experimentation to industrialisation, with the leading players investing at levels more commonly associated with energy or transport infrastructure.

In that context, Anthropic’s deals with Google and Broadcom are less an outlier than a sign of what is to come. As the race for AI dominance intensifies, the ability to secure and fund computing capacity at scale is likely to become one of the defining factors separating winners from also-rans.
