Anthropic-Google-Broadcom Multi-Gigawatt TPU Deal

Strategic Overview

  • 01.
    Anthropic signed a deal with Google and Broadcom for approximately 3.5 gigawatts of next-generation TPU computing capacity starting in 2027, adding to 1 GW already coming online in 2026 for a total of 4.5 GW. Anthropic's official announcement on X drew massive engagement -- 20,000 likes, 1,800 retweets, and 2.8 million views -- reflecting the deal's significance across the tech and finance communities.
  • 02.
    Anthropic's annual revenue run rate has surpassed $30 billion, more than tripling from approximately $9 billion at the end of 2025, with over 1,000 business customers each spending more than $1 million annually.
  • 03.
    Broadcom entered a long-term agreement to design and supply custom TPUs for Google through 2031, with Mizuho analysts estimating Broadcom will earn $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027. Financial commentator Gabriel Horwitz's viral X thread (10,000 likes) argued the real story is the financial exposure for $GOOG and $AVGO, not the AI angle, after noting that Broadcom quietly filed an 8-K with three separate agreements.
  • 04.
Broadcom stock surged over 6% on the news, and Anthropic says the vast majority of the new compute infrastructure will be sited in the United States. Video coverage from CNBC (4,200 views) and Bloomberg (3,300 views) focused on the deal's implications for Broadcom's custom silicon strategy and Anthropic's $30 billion run rate, respectively, while an earlier CNBC segment on the October 2025 deal context drew 42,000 views. Reddit discussion was notably absent, reflecting how recently the news broke (April 6-7, 2026).

Deep Analysis

The $37 Billion Bet That Rewrites the AI Hardware Pecking Order

Mizuho estimates Broadcom AI revenue from Anthropic will double from $21B in 2026 to $42B in 2027

For years, NVIDIA has been the unchallenged kingmaker of AI compute. Every major lab -- OpenAI, Google DeepMind, Anthropic, Meta -- has built its training runs on NVIDIA GPUs, and Jensen Huang's company has captured the vast majority of AI silicon revenue. This deal fundamentally challenges that dominance. Anthropic's largest-ever compute commitment goes not to NVIDIA but to Google's custom TPUs manufactured by Broadcom, and the numbers are staggering: approximately $37 billion flowing to Broadcom, with financial analyst Rohan Paul noting on X (42 retweets, 492 replies) that Google captures an additional $13-14 billion from the arrangement.

The strategic logic centers on purpose-built silicon. Custom ASICs like Google's TPUs are designed from the ground up for specific AI workloads -- training and inference on transformer architectures -- rather than serving as general-purpose compute engines. With Broadcom locked in as Google's TPU design and supply partner through 2031, Anthropic gains long-term access to silicon that is architecturally optimized for Claude's specific model requirements. For a company whose revenue just tripled and whose inference workloads are about to explode with agentic AI, securing a dedicated hardware pipeline at this scale provides both cost predictability and performance optimization that general-purpose GPUs cannot match at equivalent power budgets. This is the largest commitment by a major AI lab to custom ASICs over general-purpose GPUs, and it sends a clear signal: the era of NVIDIA-or-nothing is ending. NVIDIA remains in Anthropic's stack -- Claude trains and runs on NVIDIA GPUs alongside TPUs and AWS Trainium -- but the center of gravity is shifting toward purpose-built silicon optimized for specific workloads.

From Mystery Customer to $30 Billion Anchor Tenant in Six Months

The timeline of Anthropic's compute appetite tells a story of acceleration that even insiders likely did not predict. In September 2025, Broadcom CEO Hock Tan teased Wall Street with a mystery customer who had placed a $10 billion order for custom TPU racks. By December, he revealed it was Anthropic -- and casually mentioned that an additional $11 billion order had already followed. Now, just four months later, the deal has expanded to 3.5 GW of new capacity on top of the 1 GW already contracted, a commitment valued in the tens of billions more.

What justifies this exponential ramp? The demand numbers. Anthropic's annual revenue run rate went from roughly $9 billion at the end of 2025 to over $30 billion by early April 2026 -- more than tripling in approximately four months. The company now has over 1,000 business customers each spending at least $1 million annually, a figure that doubled in under two months since February. IDC Research VP Dave McCarthy frames this as preparation for 'the upcoming wave of agentic inference,' where AI systems don't just answer questions but take sustained actions that require continuous, high-throughput compute. The capacity Anthropic is locking in today is not for current Claude usage -- it is for the always-on agentic workloads of 2027 and beyond.
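A quick back-of-envelope check of the growth figures above, sketched in Python. The run-rate numbers are the article's; the four-month window is an approximation, and the annualized figure is a hypothetical extrapolation, not a forecast.

```python
# Implied growth rates from the reported run-rate jump.
# Inputs are from the article; the time window is approximate.
start_run_rate = 9e9    # ~$9B annual run rate, end of 2025
end_run_rate = 30e9     # >$30B annual run rate, early April 2026
months = 4              # assumed elapsed time, roughly

multiple = end_run_rate / start_run_rate          # growth multiple over the window
monthly_growth = multiple ** (1 / months) - 1     # implied compound monthly rate
annualized = (1 + monthly_growth) ** 12 - 1       # hypothetical: if the pace held a year

print(f"{multiple:.2f}x over ~{months} months")
print(f"~{monthly_growth:.0%} compound monthly growth")
print(f"~{annualized:,.0%} annualized (extrapolation only)")
```

Even the conservative reading works out to roughly 35% compound monthly growth, which is the context in which a 4.5 GW capacity commitment stops looking like over-provisioning.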

The Caveat in Broadcom's 8-K That Wall Street Barely Noticed

Buried in Broadcom's securities filing is a sentence that deserves more attention than it has received: 'The consumption of such expanded AI compute capacity by Anthropic is dependent on Anthropic's continued commercial success.' This is not boilerplate. It is a material risk disclosure that makes the deal's revenue projections conditional rather than guaranteed.

Mizuho analysts project Broadcom will earn $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027. These figures assume Anthropic's revenue trajectory continues its current steep ascent. But Anthropic is a private company in a market where competitive dynamics shift quarterly. OpenAI, Google DeepMind, and a growing roster of open-source models are all competing for the same enterprise budgets. If Anthropic's growth stalls or reverses, Broadcom is left with committed manufacturing capacity and a customer who may not consume it. Gabriel Horwitz's viral X thread (10,000 likes) captured this nuance precisely, arguing that the real story is not about AI but about Google and Broadcom's financial exposure -- he noted Broadcom quietly filed an 8-K containing three separate agreements that warrant closer scrutiny. The 6% stock surge prices in the upside; the filing language hedges the downside.

4.5 Gigawatts and the Unsolved Physics of AI Power

To put 4.5 gigawatts in perspective: that is roughly the output of four to five large nuclear power plants, or enough electricity to power a city of several million people. Anthropic is committing to consume this much power for AI compute alone, and it is just one company. The broader AI industry's power demands are scaling at a rate that existing energy infrastructure simply cannot match without massive new investment.
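The comparison above can be sanity-checked with rough numbers. The per-reactor output and household-load figures below are common ballpark assumptions, not figures from the article.

```python
# Rough check of the 4.5 GW comparison.
# Assumptions (not from the article): ~1 GW per large nuclear reactor,
# ~1.2 kW average continuous household load, ~2.5 people per household.
total_gw = 4.5
reactor_gw = 1.0
household_kw = 1.2
people_per_household = 2.5

reactors = total_gw / reactor_gw                 # equivalent large reactors
households = total_gw * 1e6 / household_kw       # GW -> kW, then households served
people = households * people_per_household

print(f"~{reactors:.1f} large reactors")
print(f"~{households / 1e6:.1f}M households, ~{people / 1e6:.0f}M people")
```

Under these assumptions, 4.5 GW lands in the range of four to five large reactors and a population on the order of nine million, consistent with the "city of several million people" framing.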

Anthropic has stated that the vast majority of new compute will be sited in the United States, expanding on its November 2025 commitment to invest $50 billion in American AI infrastructure. But siting data centers is one thing; powering them is another. Google's projected 2026 capex of $180 billion reflects the enormous capital required to build this infrastructure, but capital alone does not solve permitting bottlenecks, grid interconnection timelines, or the growing political debate about whether AI companies should receive preferential access to energy resources. The deal accelerates a collision between the AI industry's compute ambitions and the physical constraints of energy generation and distribution that will define the next decade of the industry.

Anthropic's Multi-Cloud Chess Game

One of the most strategically interesting aspects of this deal is what it reveals about Anthropic's vendor management. Anthropic trains and runs Claude on three different hardware platforms simultaneously: AWS Trainium, Google TPUs, and NVIDIA GPUs. Claude remains available on all three major cloud platforms -- Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry. The Google-Broadcom TPU deal sits alongside, rather than replacing, Anthropic's AWS relationship under Project Rainier, a Trainium2-based supercluster in Indiana.

This is a deliberate diversification strategy. By spreading its compute across multiple hardware architectures and cloud providers, Anthropic avoids the concentration risk that would come from depending on any single supplier. It also creates competitive tension among its providers -- AWS, Google, and NVIDIA each know that Anthropic has alternatives, which gives Anthropic leverage on pricing and capacity commitments. For a company that barely existed five years ago, Anthropic is now large enough to play the three biggest technology infrastructure companies against each other, extracting favorable terms from all of them simultaneously. The question is whether this balancing act is sustainable as each provider pushes for deeper lock-in.

Historical Context

2025-09
Broadcom CEO Hock Tan disclosed during an earnings call that a mystery customer had placed a $10 billion order for custom TPU racks.
2025-10
Anthropic struck a deal with Google and Broadcom for more than 1 gigawatt of compute capacity (approximately 1 million TPUs), valued in the tens of billions of dollars, to be delivered by end of 2026.
2025-11
Anthropic committed to invest $50 billion in American AI infrastructure.
2025-12-11
Broadcom CEO Hock Tan publicly confirmed that the mystery $10 billion customer was Anthropic and revealed an additional $11 billion order had followed.
2026-04-06
Broadcom filed a securities disclosure confirming the expanded 3.5 GW TPU deal with Anthropic and the long-term supply agreement with Google through 2031.

Power Map

Key Players

Anthropic

The anchor customer consuming all new TPU capacity. With a $30B revenue run rate and 1,000+ enterprise clients spending $1M+, Anthropic's commercial trajectory validates the entire deal -- but Broadcom's own filing notes capacity consumption is contingent on Anthropic's continued success.

Broadcom (AVGO)

Primary designer and manufacturer of Google's TPU silicon through 2031. Projected to earn up to $42B from Anthropic alone in 2027, making this deal a cornerstone of Broadcom's AI revenue strategy and its stock's 6% single-day surge.

Google / Alphabet

Owner of TPU intellectual property and provider of Google Cloud infrastructure. With projected 2026 capex of $180 billion, Google is bankrolling the physical infrastructure that Anthropic will consume while retaining Anthropic as a major Google Cloud customer.

Amazon Web Services (AWS)

Remains Anthropic's primary cloud partner under Project Rainier, a Trainium2-based supercluster in Indiana. The Google-Broadcom deal sits alongside -- not replacing -- the AWS relationship, reflecting Anthropic's multi-cloud strategy.

NVIDIA

Incumbent GPU supplier facing competitive pressure as Anthropic's largest compute commitment goes to custom ASICs. While Anthropic still trains and runs Claude on NVIDIA GPUs alongside TPUs and AWS Trainium, this deal marks the most significant shift by a major AI lab toward purpose-built silicon alternatives.

The Signal

Analysts

"Described the partnership as Anthropic's most significant compute commitment to date, calling it a continuation of the company's disciplined approach to scaling infrastructure to keep pace with unprecedented growth."

Krishna Rao
CFO, Anthropic

"Sees the deal as a direct response to rapidly growing enterprise demand, noting that by securing TPU capacity, Anthropic is solving for the upcoming wave of agentic inference workloads."

Dave McCarthy
Research VP, IDC

"Said the deals should ease recent nervousness around TPU competition and indicate that Broadcom's largest customer sees meaningful demand visibility well into the future."

Matt Britzman
Analyst, Hargreaves Lansdown

"Maintained a buy recommendation on Broadcom, estimating $21B in AI revenue from Anthropic in 2026 and $42B in 2027, stating that the tighter TPU partnership strengthens Broadcom's competitive position."

Mizuho Analysts
Equity Research, Mizuho

"Characterized the deal timing as a 'flight to quality' amid macro volatility, calling Broadcom a structural beneficiary of the generative AI era and the indispensable architect of the next phase of computing."

Wolfe Research
Equity Research
The Crowd

"We've signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, coming online starting in 2027, to train and serve frontier Claude models."

@AnthropicAI (20,000 likes)

"Everyone's reading Anthropic's announcement today as an AI story. I'd look at GOOG and AVGO instead. Broadcom quietly filed an 8-K tonight with three separate agreements in it."

@gabriel_horwitz (10,000 likes)

"Broadcom turns Google's TPU IP into chips and networking, while Google earns from its IP without doing the hardware build itself. Broadcom is making ~37B from 1GW of Anthropic's orders, whereas Google is capturing ~13-14B of that."

@rohanpaul_ai
Broadcast
Broadcom agrees to expanded chip deals with Google and Anthropic

Anthropic Tops $30 Billion Run Rate, Seals Broadcom Deal

Google, Anthropic agree to cloud deal worth tens of billions of dollars