Anthropic has secured 3.5 gigawatts of next-generation Google TPU compute, a move that signals a massive escalation in the AI infrastructure arms race.
In a significant expansion of its partnership with Google, AI startup Anthropic has committed to a deal securing 3.5 gigawatts of computing capacity, supplied by Google's custom Tensor Processing Units (TPUs), which are co-developed with Broadcom Inc. The agreement, disclosed in a Broadcom securities filing, underscores the immense capital investment now required to compete at the highest level of artificial intelligence development.
"For 2027, this demand is expected to surge in excess of 3 gigawatts of compute," Broadcom CEO Hock Tan said on the company's recent earnings call, referring to demand from the Google-Anthropic partnership. This follows a strong start in 2026, with Anthropic on track to use 1 gigawatt of compute from Google's TPUs this year. Broadcom shares rose 3 percent in extended trading following the disclosure.
The deal carries substantial financial implications for Broadcom, which is a key partner in manufacturing the custom Google chips. Analysts at Mizuho have estimated the partnership could generate $21 billion in AI-related revenue for Broadcom in 2026, potentially growing to $42 billion in 2027. The filing itself did not specify a dollar amount for the expanded commitment.
This aggressive infrastructure investment is critical for Anthropic to keep pace with its primary rival, OpenAI. Both AI model builders are securing vast computational resources, which are essential for training and running increasingly complex models. While Anthropic is aligning with Google's TPU architecture, OpenAI is currently reliant on Nvidia Corp.'s dominant GPUs, accessed through cloud providers like Microsoft Azure and Amazon Web Services.
The scale of capital expenditure in the AI sector is reaching historic levels. The 3.5 GW figure represents a city-scale power draw, highlighting the physical-world constraints on digital innovation. This trend is forcing hyperscalers like Google, Amazon, and Microsoft to aggressively tap debt capital markets to fund the buildout of data centers and specialized hardware.
This CapEx boom benefits the entire semiconductor supply chain. While Nvidia has been the most prominent beneficiary, the custom silicon strategy Google is pursuing with Broadcom reflects a growing desire to diversify hardware suppliers and optimize performance for specific AI workloads. The deal solidifies Broadcom's position as a major player in the AI infrastructure buildout, moving it beyond its traditional networking and connectivity markets.
The competitive landscape is not limited to two players. OpenAI has also made commitments to purchase GPUs from Advanced Micro Devices Inc., further diversifying the hardware ecosystem. This intense competition for computing power underscores the foundational belief that access to massive-scale training and inference capabilities will be a key determinant of which companies lead the next wave of AI innovation. For investors, this signals a multi-year cycle of heavy investment in the enablers of artificial intelligence, from chip designers to data center operators.
This article is for informational purposes only and does not constitute investment advice.