Google Cloud Next '26 AI Announcements

Strategic Overview

  • 01.
    At Google Cloud Next '26 (April 22-24, 2026, Mandalay Bay), CEO Thomas Kurian framed the event around 'The Agentic Enterprise,' declaring agentic AI has moved from experiment to production. The marquee launches: the eighth-generation TPU family split into TPU 8t (training) and TPU 8i (inference), the Gemini Enterprise Agent Platform, Workspace Intelligence, an Agentic Data Cloud standardized on Apache Iceberg, a Wiz-powered Agentic Defense suite, and a $750M partner fund.
  • 02.
    TPU 8t delivers 3x the processing power of Ironwood, 121 exaflops FP4 per superpod, and scales to 9,600 chips with 2 petabytes of HBM. TPU 8i is tuned for latency-sensitive agent inference: 1,152 chips per pod, 3x more on-chip SRAM than Ironwood, 19.2 Tbps bidirectional per-chip bandwidth, and 80% better performance per dollar for LLM inference.
  • 03.
    The financial backdrop makes the announcements a genuine catalyst: Google Cloud ended 2025 with a $240B backlog (up ~160% YoY), Q4 2025 cloud revenue hit $17.7B (up 48% YoY), and Alphabet guided 2026 capex to $175-185B. JPMorgan's Doug Anmuth calls Cloud Next '26 more material than prior years because agentic deployments are recurring and harder to rip out.
  • 04.
    Roughly 75% of Google Cloud customers use its AI products, 330 customers processed over 1T tokens in the past year, 35 crossed 10T, and Google's first-party models now handle 16B tokens/minute via direct API—up from 10B the prior quarter. Gemini Enterprise paid monthly active users grew 40% quarter-over-quarter in Q1 2026.
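The throughput figures above can be sanity-checked with simple arithmetic. A quick Python sketch — the daily total and growth rate below are derived implications of the quoted numbers, not additional reported figures:

```python
# Back-of-envelope check on the token-throughput stats cited above.
# Inputs are the article's reported figures; outputs are simple arithmetic.

TOKENS_PER_MINUTE = 16e9    # first-party models, direct API (current quarter)
PRIOR_QUARTER_RATE = 10e9   # tokens/minute one quarter earlier

tokens_per_day = TOKENS_PER_MINUTE * 60 * 24           # ~2.3e13 (~23T/day)
qoq_growth = TOKENS_PER_MINUTE / PRIOR_QUARTER_RATE - 1  # 60% QoQ

print(f"{tokens_per_day:.3e} tokens/day")
print(f"{qoq_growth:.0%} quarter-over-quarter growth")
```

At the stated rate, direct-API serving alone works out to roughly 23 trillion tokens per day, which puts the "330 customers over 1T tokens in a year" milestone in perspective.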

Deep Analysis

Why Google Split the TPU in Two

The most consequential hardware decision in Next '26 is not that Google shipped a new TPU — it's that Google shipped two of them. TPU 8t (codename 'Sunfish') is a training monster: 9,600 chips per superpod, 2 petabytes of HBM, 121 exaflops of FP4, 3x the processing power of last year's Ironwood, and double Ironwood's interchip bandwidth. TPU 8i is a different animal entirely — 1,152 chips per pod, 11.6 exaflops FP8, 3x more on-chip SRAM than Ironwood, 19.2 Tbps bidirectional scale-up bandwidth per chip, and 80% better performance per dollar for LLM inference. The architectural split says the quiet part out loud: training and inference have diverged enough as workloads that one chip can no longer be optimal for both.
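The pod-level specs imply per-chip figures worth deriving explicitly. A back-of-envelope Python sketch — the per-chip values are simple division over the quoted pod totals (assuming decimal units), not disclosed chip specs:

```python
# Per-chip figures implied by the pod-level TPU 8t/8i specs quoted above.
# Pod totals come from the article; per-chip values are derived, not reported.

# TPU 8t (training) superpod
TPU8T_CHIPS = 9_600
TPU8T_HBM_PB = 2            # petabytes of pooled HBM across the superpod
TPU8T_FP4_EXAFLOPS = 121

# TPU 8i (inference) pod
TPU8I_CHIPS = 1_152
TPU8I_FP8_EXAFLOPS = 11.6

hbm_per_chip_gb = TPU8T_HBM_PB * 1_000_000 / TPU8T_CHIPS   # ~208 GB/chip
fp4_per_chip_pf = TPU8T_FP4_EXAFLOPS * 1000 / TPU8T_CHIPS  # ~12.6 PF/chip
fp8_per_chip_pf = TPU8I_FP8_EXAFLOPS * 1000 / TPU8I_CHIPS  # ~10.1 PF/chip

print(f"8t HBM/chip:  ~{hbm_per_chip_gb:.0f} GB")
print(f"8t FP4/chip:  ~{fp4_per_chip_pf:.1f} PF")
print(f"8i FP8/chip:  ~{fp8_per_chip_pf:.1f} PF")
```

The derived numbers make the design point concrete: per-chip compute is in the same order of magnitude across the two parts; what differs is what surrounds the chip — pooled HBM and scale-up domain size for 8t, SRAM and per-chip bandwidth for 8i.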

The contrast with Nvidia is structural, not marketing. Nvidia's Rubin NVL72 caps its NVLink coherence domain at 576 accelerators; TPU 8t goes to 9,600. Google is trading per-chip peak performance for system-scale bandwidth, betting that frontier training runs are now bottlenecked by how many chips can share memory at speed, not by what a single chip can do. On the inference side, the 8i's enlarged SRAM cache and higher-capacity memory pool specifically target the memory-bandwidth wall that agent inference hits — long context windows, multi-step tool calls, and the kind of always-on reasoning fleets Pichai described when he said the industry has gone from 'Can we build an agent?' to 'How do we manage thousands of them?' The 8t/8i split is what 'managing thousands' looks like in silicon.

The $240 Billion Backlog That Changes the Math

Nearly every product recap of Next '26 buries the number that actually matters. Google Cloud ended 2025 with a $240 billion backlog — up roughly 160% year-over-year. Q4 2025 cloud revenue hit $17.7B, up 48% YoY. Alphabet guided 2026 capex to $175-185B. Those numbers reframe the keynote from a product showcase into a capital-allocation story: Google has already sold the capacity it's now building, and the event's job was to convince enterprises to convert signed commitments into live, recurring agent workloads. JPMorgan's Doug Anmuth caught this explicitly when he wrote, 'While Cloud Next has not been a major catalyst for GOOG/L shares in the past, we believe the event carries more weight this year' — not because the chips are flashier but because agentic deployments are 'recurring & much harder for customers to rip out.'
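The reported figures support some rough derived math. A Python sketch — the implied prior-year backlog and the coverage ratio below are arithmetic inferences from the quoted numbers, not figures Google disclosed:

```python
# Rough arithmetic on the reported financials. Inputs are the article's
# figures; the implied prior backlog and coverage ratio are derived.

BACKLOG_B = 240.0       # end-2025 Google Cloud backlog, $B
YOY_GROWTH = 1.60       # ~160% year-over-year backlog growth
Q4_CLOUD_REV_B = 17.7   # Q4 2025 cloud revenue, $B

implied_prior_backlog = BACKLOG_B / (1 + YOY_GROWTH)  # ~$92B a year earlier
annualized_run_rate = Q4_CLOUD_REV_B * 4              # ~$71B/yr at Q4 pace
backlog_coverage_years = BACKLOG_B / annualized_run_rate  # ~3.4 years

print(f"implied end-2024 backlog: ~${implied_prior_backlog:.0f}B")
print(f"backlog / annualized Q4 run rate: ~{backlog_coverage_years:.1f} yrs")
```

Even at the Q4 2025 run rate, the backlog represents well over three years of revenue already under contract — which is why the keynote reads as a conversion exercise rather than a sales pitch.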

The stickiness is the point. An enterprise that runs its sales ops, code migration, and SOC on Gemini Enterprise agents — grounded in a Knowledge Catalog over an Apache Iceberg lakehouse and governed by Google's four-pillar agent platform — is not switching clouds on a pricing page comparison. It's switching operational substrate. That's why the $750M partner fund (widely misreported as $7.5B) is structured to subsidize the proof-of-concept phase: Gemini POC credits, Google forward-deployed engineers, cloud credits, and deployment rebates. Google is paying to remove the adoption friction because every enterprise that converts becomes a compound interest machine for the backlog. The announcements are downstream of the balance sheet, not the other way around.

The Nvidia Framing Trap

Every major outlet has run some version of 'Google vs Nvidia' this week, and Futurum's Daniel Newman pushed back hard and early: 'I strongly suggest not spinning wheels on the TPU versus GPU debate. Right now all capacity is good capacity because we don't have close to enough compute.' The point is worth taking seriously. Google is still positioned to be among the first clouds to offer Nvidia's Vera Rubin NVL72 systems in the second half of 2026, and its own multi-vendor ASIC supply chain — Broadcom, MediaTek, Marvell, Intel — reads less like an anti-Nvidia play and more like de-risking single-vendor dependency while the entire industry is compute-starved. Supply-chain analyst Dan Nystedt has flagged the TPU v8 split as a strategic capacity move as much as a technical one.

The more interesting competitive frame comes from SiliconANGLE's John Furrier: 'Models are becoming a commodity. Inference is getting cheaper by the month. The leverage is moving up the stack.' If that's right, the Next '26 fight isn't silicon-versus-silicon at all — it's control-plane-versus-control-plane. The Gemini Enterprise Agent Platform, Workspace Intelligence as a semantic layer across Gmail/Docs/Sheets/Slides/Drive/Chat, BigQuery reframed as a reasoning surface rather than a storage surface, and the Agentic Defense fusion with Wiz are all bids for the layer where enterprise execution actually happens. Nvidia doesn't compete there; Microsoft, AWS, Salesforce, and ServiceNow do. Reading Next '26 as a hardware keynote misses which fight Google is actually picking.

The Implementation Gap and the Agent-as-Unit Shift

Fortune's Big Technology column named the subtext no one on the keynote stage said out loud: 'AI's abilities have outpaced humans' capability to implement them.' The headline stats from Google itself are almost too on-the-nose — 75% of new Google code is AI-generated and engineer-approved (up from 50%), code-migration tasks run 6x faster with agentic workflows, SOC agents cut threat mitigation time by more than 90%. Google has already eaten its own dog food; most of its customers haven't. The entire Next '26 product surface — the four-pillar agent platform (Build, Scale & Orchestrate, Govern, Optimize), the Data Agent Kit, the Deep Research Agent, the $750M partner fund — is engineered against that implementation gap. If Google can close it for enterprises, Fortune argues, 'the payoff could be further gains against Amazon Web Services and Microsoft's Azure.'

The second-order effect is the business-model reshuffle Furrier flagged bluntly: 'The app is no longer the product. The agent is.' If cross-app agents run the workflow and the SaaS vendor becomes a tool call, the per-seat licensing model that powers Salesforce, ServiceNow, and Workday starts looking structurally soft. The human cost of the pivot is visible on r/googlecloud, where a Xoogler-run LLM classifier found that 89% of Next '26's 1,052 sessions are AI-focused — only 119 are not. Practitioners are asking, in u/netcommah's phrasing, whether the event has become 'Gemini Next' at the expense of the boring infrastructure work (IAM, networking, GKE tail latency) enterprises actually need. The discontent is real, but it's also the sound of a conference catching up to a capital allocation that has already chosen a side.

Historical Context

2018
Launched the first Cloud TPU (TPU v2), establishing the baseline for later TPU generations. Ironwood is nearly 30x more power-efficient than this first Cloud TPU.
May 2024
Introduced Trillium, the sixth-generation TPU (TPU v6).
April 2025
Unveiled Ironwood (TPU v7) as the first TPU 'for the age of inference,' offering 2x perf/watt over Trillium.
December 2025
Ended 2025 with Google Cloud backlog of $240B (up ~160% YoY) and Q4 2025 cloud revenue up 48% YoY to $17.7B — setting the financial backdrop for Next '26.
March 2026
Completed its $32B acquisition of cloud security company Wiz, setting up the Next '26 Agentic Defense announcements fusing Google Threat Intelligence with Wiz's platform.
April 22, 2026
Opened Google Cloud Next '26 at Mandalay Bay, Las Vegas, with TPU 8t/8i, Gemini Enterprise Agent Platform, Agentic Data Cloud, Workspace Intelligence, Agentic Defense with Wiz, and a $750M partner fund as headline announcements.

Power Map

Key Players

Sundar Pichai

CEO, Alphabet/Google. Framed the industry shift as moving past whether agents can be built to how thousands can be coordinated and governed in production.

Thomas Kurian

CEO, Google Cloud. Keynoted Next '26 under 'The Agentic Enterprise' banner and positioned the event as production-readiness for agents.

Google DeepMind

Co-designed TPU 8t and TPU 8i with purpose-built architectures for training, inference, and agent workloads.

Nvidia

Incumbent AI-silicon rival. Rubin GPUs offer higher per-chip FP4 performance but a 576-accelerator NVLink domain; Google is still slated to be among the first clouds to offer Nvidia Vera Rubin NVL72 systems in H2 2026.

Broadcom, MediaTek, Marvell, Intel

Silicon design partners for Google's multi-vendor TPU supply chain, including Broadcom 'Sunfish' for training and MediaTek 'Zebrafish' for inference.

Wiz

Google-owned (since March 2026) cloud and AI security platform powering the Agentic Defense announcements; demonstrating AI Security Agents onsite.

THE SIGNAL.

Analysts

"The conversation has gone from 'Can we build an agent?' to 'How do we manage thousands of them?'" — framing the shift from agent creation to agent fleet management as the defining enterprise problem.

Sundar Pichai
CEO, Alphabet/Google

"Models are becoming a commodity. Inference is getting cheaper by the month. The leverage is moving up the stack." Furrier argues the real battleground at Next '26 is the control plane over data and execution, with BigQuery reframed as a reasoning surface rather than a storage surface, and the agent replacing the app as the unit of software.

John Furrier
Industry analyst, SiliconANGLE

"While Cloud Next has not been a major catalyst for GOOG/L shares in the past, we believe the event carries more weight this year — because agentic deployments are recurring and much harder for customers to rip out."

Doug Anmuth
Analyst, JPMorgan (Overweight, $395 PT on Alphabet)

"By this, they essentially mean that AI's abilities have outpaced humans' capability to implement them. If Google is betting that further growth will come from helping customers solve these problems, the payoff could be further gains against Amazon Web Services and Microsoft's Azure."

Fortune (Big Technology column)
Business media analysis

"Full-stack integration through custom silicon (TPU) and BigQuery innovation underwrites a structural shift toward agentic, always-on execution." Vellante frames Google's ownership of TPU silicon, BigQuery, and Gemini as a cost and latency moat competitors can't match with rented infrastructure.

Dave Vellante
Chief Analyst, theCUBE Research / SiliconANGLE
The Crowd

"Google's next-generation TPU v8 chips are expected to feature prominently at its Google Cloud Next event this week (April 22-24, 2026), according to media and supply-chain reports. The v8 series includes two main variants: TPUv8t 'Sunfish' — for Training: the high-performance..."

@dnystedt

"$GOOGL will launch its next generation TPUs this week. The narrative in the press will be Google vs. $NVDA. I strongly suggest not spinning wheels on the TPU versus GPU debate. Right now all capacity is good capacity because we don't have close to enough compute."

@danielnewmanUV

"Introducing Gemini Enterprise, a new platform that brings the best of Google AI to every employee, for every workflow. Gemini Enterprise works across all the tools and data you use every day to make workflows easier—giving you time back in your day"

@googlecloud

"Is it just me, or is Google Cloud Next becoming 'Gemini Next'?"

u/netcommah
Broadcast
Google Cloud Next '26 Opening Keynote

312 | Breaking Analysis | As AI Powers Google, What's Next for Google Cloud

Specialized AI Hardware, Contextual Workspace, and Enterprise Agents
