Anthropic wants over 3.5 gigawatts of computing capacity from Google's TPUs
With Anthropic, Google has found a major customer for its own AI accelerators. Broadcom will help with further development until at least 2031.
Google's current TPU v7 with multiple chiplets and HBM memory stacks.
(Image: Google)
Anthropic intends to use Google's Tensor Processing Units (TPUs) on a large scale to run Claude models (inference). “We have signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity that we expect to come online starting in 2027,” Anthropic announced in a blog post.
A Broadcom SEC filing puts an initial concrete figure on the deal: Broadcom will supply TPUs from 2027 onwards that are expected to ultimately reach a computing capacity of 3.5 gigawatts. According to the filing, this is part of the “multiple gigawatts” Anthropic announced.
Broadcom is involved in the agreement because the company has been significantly co-developing the TPU accelerators since 2016. Behind the scenes, Broadcom has become an AI giant: it designs most of the AI accelerators for cloud hyperscalers, including Amazon's Trainium and Microsoft's Maia. In its own statement, Broadcom also announced a new long-term agreement with Google, which runs until 2031 and includes the development of new TPU generations.
The current TPU v7, also known as Ironwood, consumes around 1000 watts of electrical power. 3.5 gigawatts would correspond to 3.5 million accelerators. However, the next generation is likely to consume more electrical power per chip.
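As a back-of-the-envelope check (assuming a flat 1 kW per accelerator and ignoring overhead for cooling, networking and host CPUs, which the article does not break down), the chip count follows directly; a minimal sketch:

```python
# Rough estimate: how many ~1 kW TPU v7 accelerators fit into 3.5 GW.
# Assumption (not from the article): all power goes to the accelerators themselves.
total_power_watts = 3.5e9      # 3.5 gigawatts of planned capacity
power_per_tpu_watts = 1000     # roughly 1000 W per TPU v7 ("Ironwood")

accelerators = total_power_watts / power_per_tpu_watts
print(f"{accelerators:,.0f} accelerators")  # 3,500,000
```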
Amazon and Nvidia remain involved
Anthropic emphasizes that Amazon AWS remains the main partner for training AI models and that Nvidia GPUs will continue to be used. The Google TPUs, on the other hand, will run fully trained AI models: they answer the questions that users put to Claude, for example. Anthropic urgently needs more computing power to serve all Claude services. Among other measures to reduce the computing load, the company recently removed OpenClaw from its subscriptions.
As early as October 2025, Anthropic announced its intention to increase its computing capacity to up to one million TPUs. The Financial Times quotes a source close to the company as saying that Anthropic's total computing capacity is expected to rise to five gigawatts in the coming years.
One gigawatt of computing capacity is estimated to cost around 35 to 50 billion US dollars, most of it for hardware purchases. At the same time, Anthropic announced a revenue milestone relevant to financing the build-out: extrapolated to a full year, current revenue amounts to an annual run rate of 30 billion US dollars. At the end of 2025, according to the company's own figures, it was just 9 billion US dollars. The current agreement builds on Anthropic's commitment from November 2025 to invest a total of 50 billion US dollars in US computing infrastructure.
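Scaling the cited per-gigawatt cost range to the capacities mentioned gives a rough order of magnitude; these totals are an illustration derived from the article's figures, not numbers Anthropic has confirmed:

```python
# Illustrative totals based on the cited $35-50 billion per gigawatt of capacity.
cost_per_gw = (35e9, 50e9)  # low and high estimate in US dollars

for gigawatts in (3.5, 5.0):   # 3.5 GW TPU deal; ~5 GW total capacity per the FT source
    low, high = (gigawatts * c for c in cost_per_gw)
    print(f"{gigawatts} GW: ${low / 1e9:.1f}-{high / 1e9:.1f} billion")
# 3.5 GW: $122.5-175.0 billion
# 5.0 GW: $175.0-250.0 billion
```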
(mma)