Google TPU v7 launch and Marvell inference chip partnership

Strategic Overview

  • 01.
    Google unveiled its seventh-generation TPU, codenamed Ironwood, at Google Cloud Next as the first TPU purpose-built for inference, scaling up to 9,216 liquid-cooled chips per pod for 42.5 Exaflops with 192 GB HBM per chip.
  • 02.
    Google is in talks with Marvell Technology to co-design two new chips — a memory processing unit to sit alongside existing TPUs and a new inference-specific TPU — adding Marvell to a supply chain that already includes Broadcom and MediaTek with TSMC for fabrication.
  • 03.
    Anthropic plans to use up to one million TPUs as part of an expansion worth tens of billions of dollars, bringing well over 1 GW of capacity online in 2026 and roughly 3.5 GW of next-gen TPU capacity via Broadcom starting 2027.
  • 04.
    On April 20, 2026, Marvell shares jumped roughly 6% to record highs while Broadcom shares fell on fears of redirected design work; Barclays upgraded Marvell to overweight and raised its price target from $105 to $150.

Deep Analysis

Multi-supplier strategy, not a Broadcom replacement

The headline reaction to the Marvell talks has been to read them as Google walking away from Broadcom, but the underlying structure tells a more deliberate story. Google is not replacing Broadcom; it is adding a third design partner to a supply chain that already includes Broadcom for high-performance chip variants, MediaTek for cost-optimized 'e' variants at 20 to 30 percent lower cost, and TSMC for fabrication. The Marvell scope, as reported, is narrow: a memory processing unit to work alongside existing TPUs and a new TPU built specifically for AI inference.

That structure looks much more like an automotive-style tiered supplier model than a binary vendor swap. Each partner competes for a defined slice — high-end accelerators, cost-optimized variants, memory-side silicon — rather than for the entire program. The result is leverage: Google gets cross-checks on pricing, schedule, and roadmap from multiple ASIC houses while keeping a single fabrication partner. JPMorgan's pushback that 'Broadcom remains the clear incumbent in TPU-related design work' fits this picture, as does Broadcom's separately announced long-term TPU and networking agreement with Google through 2031, which would be incompatible with an actual divorce.

The inference economics behind Ironwood and Anthropic's million-chip bet

Ironwood is the first TPU Google has explicitly designed for inference, and the specs explain why an inference-only chip suddenly makes sense. Each chip delivers 4,614 TFLOPs and ships with 192 GB of HBM at 7.37 TB/s of bandwidth, while a full pod links 9,216 liquid-cooled chips for 42.5 Exaflops across nearly 10 MW of power. Google claims roughly 10x the peak performance of TPU v5p, more than 4x the per-chip performance of Trillium, 2x Trillium's performance per watt, and roughly 30x the power efficiency of the original Cloud TPU.
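A quick back-of-envelope check ties those pod-level figures together. The chip count, per-chip TFLOPs, and the approximate 10 MW pod power are the numbers quoted above, so the derived efficiency is only as precise as they are:

```python
# Sanity check on the Ironwood pod math quoted above.
# All inputs are the article's figures; pod power is approximate ("nearly 10 MW").
CHIPS_PER_POD = 9216
TFLOPS_PER_CHIP = 4614          # peak per-chip compute, in TFLOPs
POD_POWER_MW = 10               # approximate pod power draw

pod_exaflops = CHIPS_PER_POD * TFLOPS_PER_CHIP / 1e6          # TFLOPs -> exaflops
tflops_per_watt = (CHIPS_PER_POD * TFLOPS_PER_CHIP) / (POD_POWER_MW * 1e6)

print(f"Pod peak: {pod_exaflops:.1f} exaflops")               # ~42.5 exaflops
print(f"Efficiency: {tflops_per_watt:.2f} TFLOPs per watt")   # ~4.25 TFLOPs/W
```

The 42.5 Exaflops headline number falls straight out of the per-chip spec times the chip count, which suggests it is a peak figure rather than a sustained-throughput measurement.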

Those numbers make Anthropic's commitment to use up to one million TPUs — backed by tens of billions of dollars in spend, well over 1 GW online in 2026 and around 3.5 GW of next-generation TPU capacity via Broadcom starting in 2027 — economically rational rather than aspirational. SemiAnalysis estimates that custom silicon can shave on the order of 30 percent off compute fleet costs versus merchant GPUs, and Google Cloud CEO Thomas Kurian explicitly framed Anthropic's expansion as a vote for TPU price-performance. For a frontier lab whose marginal cost is dominated by inference at scale, a chip with 192 GB of HBM and 2x perf-per-watt against the prior generation is the difference between defending margins and not.

Broadcom holds the pie, Marvell takes a slice

Custom AI ASIC sales are projected to grow nearly 3x faster than GPU shipments in 2026 (+45% vs +16%).

The market is doing two things at once. The custom ASIC pie is growing fast — projected at 45 percent year-on-year in 2026 versus 16 percent for GPU shipments — and within that pie Broadcom currently holds more than 70 percent share, with Mizuho estimating $21 billion of AI revenue from Google and Anthropic alone in 2026 rising to $42 billion in 2027. Hock Tan has projected Broadcom's overall AI chip business will exceed $100 billion in 2027. So even if Marvell genuinely wins the memory processing unit and an inference-TPU socket, Broadcom's absolute revenue can still grow.
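The arithmetic behind that "absolute revenue can still grow" point is worth making explicit. In this sketch the 45 percent market growth rate and the 70 percent current share are the figures above, while the 60 percent post-Marvell share is a purely hypothetical number chosen only to show that share loss and revenue growth can coexist:

```python
# Illustration of the growing-pie argument: even a meaningful loss of share
# can leave absolute revenue higher when the market itself grows fast enough.
market_growth = 1.45              # custom ASIC market, 2026 projection (article)
share_before = 0.70               # Broadcom's current share (article)
share_after = 0.60                # hypothetical share after a Marvell win

revenue_multiple = (share_after * market_growth) / share_before
print(f"Broadcom revenue multiple despite share loss: {revenue_multiple:.2f}x")
# 0.60 * 1.45 / 0.70 ≈ 1.24x: absolute revenue still grows roughly 24%
```

Under these assumptions Broadcom would need to lose more than about a third of its relative share before its absolute custom-ASIC revenue actually shrank.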

That is why the market reaction was bifurcated rather than zero-sum on absolute terms: Marvell shares jumped about 6 percent to record highs on April 20 and Barclays upgraded the stock to overweight, raising its price target from $105 to $150, while Broadcom traded down on fears of share loss. Bloomberg projects Marvell could capture 20 to 25 percent of a $118 billion custom ASIC market by the early 2030s, which is consistent with both the bull case for Marvell and JPMorgan's note that Broadcom remains the clear incumbent. Community discussion on Broadcom-focused investor forums echoed the JPMorgan framing, emphasizing the 2031 Broadcom deal, the 3.5 GW Anthropic capacity from 2027, and the narrower role being negotiated for Marvell.

What Ironwood and the Marvell talks mean for Nvidia's pricing power

For Nvidia the structural risk is not that any single hyperscaler abandons GPUs — Meta's reported multibillion-dollar TPU procurement deal explicitly does not preclude continued Nvidia spend, and SemiAnalysis still credits Nvidia's flagships with leadership in training and prototyping. The risk is that inference, the fastest-growing slice of compute, increasingly runs on bespoke silicon co-designed by hyperscalers and merchant ASIC vendors. SemiAnalysis frames Ironwood as the iteration where Google 'nearly completely closes the gap to the corresponding Nvidia flagship GPU,' and the 45 percent projected growth in custom ASIC sales versus 16 percent for GPUs is the quantitative version of that statement.

Sentiment on X and Reddit captures the same tension: technical commentary praised Ironwood's order-of-magnitude generational gains and large optical-switched superpods, while skeptics argued Google still depends on Marvell and Broadcom to actually build the chips and therefore cannot fully wean off the Nvidia ecosystem. The honest read is somewhere in the middle. Google has demonstrated that hyperscaler-led custom silicon with multiple ASIC partners and TSMC can produce inference economics that are good enough to attract Anthropic, Meta, Citadel, and G42 — and that fact, more than any single chip, is what compresses Nvidia's long-run pricing power on the inference side of the workload mix.

Historical Context

2015
Google began deploying TPU v1 internally, manufactured on a 28nm process, marking the start of its custom AI accelerator program.
2018-02-12
Google made TPU access available externally via Google Cloud, opening custom silicon to third-party customers for the first time.
2024-05
TPU v6e (Trillium) launched with 4.7x performance versus v5e, becoming Google's primary cost-optimized accelerator.
2025-04
TPU v7 'Ironwood' announced at Google Cloud Next 2025 as the first inference-optimized TPU.
2025-10
Anthropic and Google publicly confirmed plans to expand TPU usage to up to one million chips, anchoring Google's external TPU demand.
2026-01
Google confirmed its custom TPUs outshipped general-purpose GPUs in volume for the first time.
2026-04-07
Broadcom and Google announced a long-term agreement to design and supply TPUs and networking through 2031.
2026-04-20
Reports surfaced that Google was in talks with Marvell on two new AI chips; Marvell shares jumped about 6% to record highs while Broadcom shares fell.

Power Map

Key Players
Subject

Google TPU v7 launch and Marvell inference chip partnership

Google (Alphabet)

TPU program owner and system designer; expanding a multi-supplier strategy that adds Marvell to existing Broadcom and MediaTek partnerships and unveiling Ironwood as the first inference-optimized TPU.

Marvell Technology

Prospective design partner for two new Google chips — a memory processing unit and an inference TPU; shares surged about 6% on the news to record highs.

Broadcom

Incumbent TPU co-developer since the program's inception; signed a long-term TPU and networking agreement with Google through 2031 and commands more than 70% of the custom AI accelerator market.

MediaTek

Handles I/O and the cost-optimized 'e' variants of TPUs at 20-30% lower cost, while coordinating manufacturing with TSMC.

Anthropic

Largest external TPU customer; expanding to up to one million TPUs and contracting roughly 3.5 GW of next-gen TPU capacity via Broadcom starting 2027.

Meta, Citadel Securities, and G42

External TPU customers extending Google's reach beyond Anthropic — Meta has signed a multibillion-dollar multi-year procurement deal, while Citadel and G42 are testing TPU deployments.

Nvidia

Dominant AI GPU incumbent now confronting growing custom-silicon competition for inference workloads, even as it retains leadership in training and prototyping.

TSMC

Fabrication partner producing Google's TPU silicon across generations regardless of which design house leads each variant.

THE SIGNAL.

Analysts

"Positions Ironwood as a breakthrough purpose-built for the 'age of inference' with major performance-per-watt gains over the previous Trillium generation."

Amin Vahdat
VP/GM, ML, Systems & Cloud AI, Google

"Frames Anthropic's expanded TPU usage as validation of TPU price-performance leadership relative to merchant GPUs."

Thomas Kurian
CEO, Google Cloud

"Sees the TPU expansion as core to Anthropic's frontier-compute roadmap and to scaling its model-training and serving footprint."

Krishna Rao
CFO, Anthropic

"Argue that TPU v7 closes the gap to Nvidia's flagship and that Google's system-level engineering has produced a genuine merchant-hardware challenger to Nvidia for inference at scale."

Dylan Patel, Myron Xie, and Daniel Nishball
Analysts, SemiAnalysis

"Upgraded Marvell to overweight on the news and raised the price target from $105 to $150, citing material upside from a TPU partnership."

Tom O'Malley
Analyst, Barclays

"Pushed back on framing that Marvell has 'won' Google business and characterized Broadcom as the clear TPU incumbent for high-performance design work."

JPMorgan analysts
Sell-side, JPMorgan

"Projects Broadcom's AI chip business will exceed $100 billion in revenue by 2027, anchored by hyperscaler custom-silicon programs."

Hock Tan
CEO, Broadcom
The Crowd

"Google $GOOGL is reportedly in talks with Marvell $MRVL to develop two new AI related chips. One of the chips is a memory processing unit designed to work with Google's TPU and the other chip is a new TPU built specifically for running AI models - The Information"

@StockMKTNewz

"Impressive progress for the Google TPU family with today's announcement of Ironwood (v7). The headline performance is 10x improvement generation to generation and while the chip is built for inference (Big memory improvement with more HBM), but it has training capacity as well."

@danielnewmanUV

"Wonder how this plays out. $GOOGL has been hammering away at these TPUs for many years, but still seems to end up at the teat of $NVDA. The most famous suppliers of Google's TPUs are Marvell and Broadcom. Remarkable how they've performed versus Google recently."

@SharestepAI

"Google in talks with Marvell to build new AI chips for TPUs, aiming to rival Nvidia GPUs"

u/callsonreddit
Broadcast
Google's 400,000-Chip Monster Tensor Processing Unit Just Destroyed NVIDIA's Future!

Introducing 7th Generation TPUs: Ironwood

Google Cloud unboxes seventh generation Ironwood TPU