Google launches Gemini Deep Research and Deep Research Max

Strategic Overview

  • 01.
    Google DeepMind launched two autonomous research agents on April 21, 2026 — Deep Research (optimized for speed) and Deep Research Max (optimized for comprehensive synthesis) — both built on Gemini 3.1 Pro and available in public preview through paid tiers of the Gemini API.
  • 02.
    The new agents add Model Context Protocol (MCP) support, native chart and infographic generation, multimodal inputs (PDFs, CSVs, images, audio, video), real-time streaming of intermediate reasoning, and a collaborative planning step before execution.
  • 03.
    Deep Research Max scores 93.3% on DeepSearchQA, 54.6% on Humanity's Last Exam and 85.9% on BrowseComp, with Google positioning it at the top of public agentic-research benchmarks.
  • 04.
    Estimated pay-as-you-go pricing is roughly $1–$3 per task for Deep Research and $3–$7 per task for Deep Research Max, depending on the underlying Gemini models and tools used.

Under the Hood: A Two-Tier Agent and the Toggle That Defines the Enterprise Pitch

Google's most consequential design decision with this release isn't a new model — it's the deliberate split of its research agent into two SKUs running on the same Gemini 3.1 Pro brain. Deep Research is tuned for low-latency, real-time client experiences where a user is waiting on the other end of a chat surface. Deep Research Max spends extended test-time compute on asynchronous, long-horizon synthesis, the kind of multi-hour investigation a human analyst would otherwise schedule. Same backbone, two operating points — and that bifurcation is itself a product statement: research is no longer a single feature, it's a primitive with a latency-cost-quality dial.
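The latency-cost-quality dial can be made concrete with a small routing helper. This is an illustrative sketch, not Google's API: the model IDs are the preview strings circulating at launch, and the routing heuristic is our own assumption about how a client might choose between the two tiers.

```python
# Illustrative tier-selection helper. The model IDs are the preview
# strings shared at launch; the routing rule is an assumption, not
# part of any official SDK.

FAST_TIER = "deep-research-preview-04-2026"
MAX_TIER = "deep-research-max-preview-04-2026"

def pick_research_tier(interactive: bool, est_minutes: float) -> str:
    """Route quick, user-facing lookups to the fast tier and long
    asynchronous investigations to Max."""
    if interactive or est_minutes < 10:
        return FAST_TIER
    return MAX_TIER

# A chat surface with a waiting user gets the low-latency tier;
# a multi-hour deep dive gets Max.
print(pick_research_tier(interactive=True, est_minutes=2))
print(pick_research_tier(interactive=False, est_minutes=120))
```

The point of the sketch is that the caller, not the model, owns the dial: the same request payload can be routed to either operating point.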

The second design choice quietly does more enterprise work than the headline numbers. Both tiers ship with Model Context Protocol (MCP) support, letting developers connect proprietary or third-party data systems, and — critically — both can disable web access entirely so the agent runs only against private corpora. That's the sentence regulated industries have been waiting for. Combined with multimodal inputs (PDFs, CSVs, images, audio, video), file-store connectors, real-time streaming of intermediate reasoning steps, and a collaborative planning step before the agent executes, the platform now looks less like a chatbot and more like a programmable analyst. Google's framing — 'search the web, arbitrary remote MCPs, file uploads and connected file stores — or any subset of them' — is the unlock: the same agent runs in a public-research mode for one customer and an air-gapped private-data mode for the next, without changing model strings.
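Google's "or any subset of them" framing implies a per-run tool configuration. The sketch below models that idea as plain data; the tool names and field names are hypothetical, chosen to mirror the four sources named in the quote, and are not the actual Interactions API schema.

```python
# Hypothetical per-run tool configuration. Names are illustrative;
# the real Interactions API schema may differ.

ALL_TOOLS = {"web_search", "remote_mcp", "file_uploads", "file_store"}

def make_tool_config(enabled: set, air_gapped: bool = False) -> dict:
    """Validate a tool subset; air-gapped mode forbids web access."""
    unknown = enabled - ALL_TOOLS
    if unknown:
        raise ValueError(f"unknown tools: {sorted(unknown)}")
    if air_gapped and "web_search" in enabled:
        raise ValueError("air-gapped runs cannot enable web_search")
    return {"tools": sorted(enabled), "air_gapped": air_gapped}

# Public-research mode for one customer...
public_cfg = make_tool_config({"web_search", "file_uploads"})
# ...and a private-corpora mode for the next, same agent.
private_cfg = make_tool_config({"remote_mcp", "file_store"}, air_gapped=True)
```

The design choice worth noticing is that the privacy boundary lives in run configuration, not in a separate model, which is what lets one deployment serve both regulated and open-web customers.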

By the Numbers: A Four-Month Benchmark Sprint With Asterisks

Chart: BrowseComp benchmark scores. Deep Research Max leads Google's comparator slate (85.9%) but trails OpenAI GPT-5.4 Pro (89.3%).

The headline metrics are striking. Deep Research Max hits 93.3% on DeepSearchQA, up from 66.1% for the December 2025 Gemini Deep Research agent — a jump of more than 27 points in roughly four months on a benchmark of 900 hand-crafted causal-chain tasks across 17 fields. On Humanity's Last Exam, Max climbs to 54.6% (Deep Research is at 50.4%) versus 46.4% for the prior generation. On BrowseComp the spread is wider: Max posts 85.9%, Deep Research 61.9%, last December's agent 59.2%, and Google's listed OpenAI GPT-5.4 score 58.9%. By Google's slate, Max is the new state of the art on retrieval-and-reasoning research tasks.

The asterisks matter. The Decoder notes that Google's comparison methodology differs from how competitors report their own numbers, which warrants careful interpretation. Outside Google's chosen comparator set, OpenAI's GPT-5.4 Pro is reported at 89.3% on BrowseComp and Anthropic Opus 4.6 at 84% — meaning Max's 85.9% is genuinely competitive but not the runaway lead the bar chart implies. The most pointed criticism on r/singularity was that Google's slide deck conspicuously omitted GPT-5.4 Pro. The honest read: this is a real generational improvement on Google's own prior agent, plausibly the new leader on BrowseComp depending on how you score it, but the gap to OpenAI's top tier is single-digit, not categorical. Anyone making a procurement decision on Max-versus-Pro should run their own evals.
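"Run your own evals" can be as simple as a paired scoring loop over a shared task set. A minimal sketch, assuming you already have hand-graded pass/fail results from both agents on the same tasks; nothing here depends on either vendor's API.

```python
def accuracy(results: list) -> float:
    """Fraction of tasks graded correct."""
    return sum(results) / len(results)

def point_gap(agent_a: list, agent_b: list) -> float:
    """Gap (a minus b) in percentage points, on one shared task set.
    Comparing scores from different task sets is exactly the
    methodology problem the benchmark caveats warn about."""
    assert len(agent_a) == len(agent_b), "evals must share one task set"
    return 100 * (accuracy(agent_a) - accuracy(agent_b))

# Toy data: 10 shared tasks graded by hand (True = correct).
max_runs = [True] * 9 + [False]
rival_runs = [True] * 8 + [False] * 2
print(f"gap: {point_gap(max_runs, rival_runs):+.1f} points")
```

Even a harness this small enforces the one property the vendor slide decks don't: both agents are scored on identical tasks under identical grading.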

Follow the Money: Sell-Side Analysts Are the Real Customer

The signal that says the most about who Google is actually building this for isn't in the benchmark deck — it's in the partner list. FactSet, S&P Global and PitchBook are working with Google on MCP server designs so shared customers can pipe proprietary financial data into Deep Research workflows. Those three names triangulate one buyer profile: investment banks, asset managers, private-equity research desks. The work product of a junior analyst on those desks — pulling filings, reconciling competing source data, generating chartable comparables, drafting a memo with citations — maps almost one-to-one onto what Deep Research Max now does natively, including the inline charts and infographics.

Third-party industry commentary makes the labor implication explicit. The TokenRing analysis published via FinancialContent argues Gemini Deep Research 'will be remembered as the moment AI became a worker rather than a tool,' with 'potential profound impacts on the labor market for junior analysts and researchers as tasks that once took three days can now be completed during a lunch break.' At $3–$7 per Max task, that's a price point that survives a procurement conversation when the alternative is an analyst-day. The economic logic of the two-tier pricing snaps into focus when you map it against analyst workflows: cheap Deep Research for the dozens of quick lookups in a research day, expensive Max for the one or two deep dives that previously consumed an afternoon. Google isn't selling a research agent; it's selling a wedge into the spend that currently funds entry-level knowledge work.
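The per-desk arithmetic is easy to check against the article's estimated price bands. The task mix below is an assumption for illustration, not reported usage data.

```python
# Back-of-envelope daily cost for one analyst desk, using the
# article's estimated pay-as-you-go bands ($1-3 and $3-7 per task).
# The quick-lookup and deep-dive counts are assumed for illustration.

DEEP_RESEARCH = (1.0, 3.0)  # $ per task, low/high estimate
DEEP_MAX = (3.0, 7.0)

def daily_cost(quick_lookups: int, deep_dives: int):
    """Return (low, high) estimated dollars per analyst-day."""
    lo = quick_lookups * DEEP_RESEARCH[0] + deep_dives * DEEP_MAX[0]
    hi = quick_lookups * DEEP_RESEARCH[1] + deep_dives * DEEP_MAX[1]
    return lo, hi

lo, hi = daily_cost(quick_lookups=20, deep_dives=2)
print(f"${lo:.0f}-${hi:.0f} per analyst-day")
```

Under these assumed volumes the daily spend stays in the tens of dollars, which is the comparison a procurement conversation would actually run against a junior analyst's fully loaded day rate.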

What the Skeptics See: A Public Surface That's Glossier Than the Developer Experience

Underneath the official launch choreography, the developer-community reception has been notably mixed. Power users on r/singularity mocked the naming convention ('Ultra Deep Pro Deep Research Max Deep Plus' was one of the higher-voted jokes), pressed on why GPT-5.4 Pro was missing from the comparison slate, and floated the theory that Google rushed the announcement to land before an expected OpenAI release. Whether or not the timing read is right, the framing tells you something: this is a developer audience that's seen enough launches to grade them on more than the bar chart.

The more concrete grievance came from the Gemini subreddit, where AI Pro and AI Ultra subscribers reported widespread 'at capacity' errors lasting 10–24 hours after the launch. That gap — top-of-the-leaderboard claims on the public surface, throttled-out behavior for the customers who actually paid — is the real story for builders evaluating whether to depend on this agent in production. A research agent's value is integral over many runs, not one demo, and 'great when you can get it' is a hard pitch to put in a roadmap. Combined with The Decoder's caveat about benchmark methodology, the skeptical reading is straightforward: the technology is genuinely a step forward on Google's own prior agent, the enterprise plumbing (MCP, private-data toggle, financial-data partners) is the most strategically interesting part of the release, but the launch-week reliability and the marketing-versus-comparator gap mean serious teams will want to wait for a stable preview window and run their own benchmarks before betting workflows on it.

Historical Context

2024-12-11
Google first launched Gemini Deep Research as an experimental feature inside the Gemini app, making it the earliest mainstream consumer 'deep research' agent.
2025-02-02
OpenAI announced Deep Research, an agent that plans 5–30 minute investigations and returns cited reports, popularizing the 'deep research' agent category.
2025-02-14
Perplexity released Sonar Deep Research, completing most runs in under three minutes and scoring 21.1% on Humanity's Last Exam.
2025-12-11
Google released the previous-generation Gemini Deep Research agent to developers via the Interactions API, scoring 46.4% HLE, 66.1% DeepSearchQA and 59.2% BrowseComp.
2026-02-19
Gemini 3.1 Pro released, the underlying model that now powers the new Deep Research and Deep Research Max agents.
2026-04-21
Google launches Deep Research and Deep Research Max in public preview via the Gemini API, adding MCP, native charts, multimodal inputs and collaborative planning.

Power Map

Key Players

Google DeepMind

Developer of Gemini 3.1 Pro and the Deep Research / Deep Research Max tiers; ships the Interactions API in Google AI Studio that exposes both agents and controls the rollout to startups and enterprises via Google Cloud.

Lukas Haas

Product Manager at Google DeepMind and co-author of the official launch blog post; frames Deep Research Max as a step change in autonomous research quality combining MCP, visualizations and long-horizon planning.

Logan Kilpatrick (@OfficialLoganK)

AI Studio lead at Google publicly positioning the release as the company's biggest Deep Research API upgrade yet — emphasizing Max as Google's SOTA system, MCP, native visuals, planning mode and full multimodal inputs.

FactSet, S&P Global and PitchBook

Financial-data partners co-designing MCP servers so shared customers can plug proprietary financial datasets into Deep Research workflows — signaling that Google is targeting sell-side analyst pipelines, not just consumer research.

OpenAI and Anthropic

Direct competitors in the agentic-research category whose Deep Research and Claude Research products set the comparator benchmarks Google is aiming to surpass with Deep Research Max.

THE SIGNAL.

Analysts

"Built with Gemini 3.1 Pro, the new Deep Research agents bring MCP support, native visualizations and unprecedented analytical quality to long-horizon research workflows across the web or custom sources."

Lukas Haas & Srinivas Tadepalli
Product Manager / Program Manager, Google DeepMind

"Deep Research Max consults more sources, resolves conflicting evidence, and produces nuanced, cited reports, setting it apart from earlier releases and comparable offerings in the market."

TestingCatalog editorial
Tech publication covering AI product launches

"Gemini Deep Research will be remembered as the moment AI became a 'worker' rather than a 'tool,' with potential profound impacts on the labor market for junior analysts and researchers as tasks that once took three days can now be completed during a lunch break."

FinancialContent / TokenRing analysis
Industry commentary on Gemini Deep Research's economic impact

"Deep Research Max shows a big jump on retrieval and reasoning tasks, but Google's comparison methodology differs from competitors and warrants careful interpretation."

The Decoder editorial
AI news publication

The Crowd

"The next evolution of our autonomous research agent is here. Today, we're introducing Deep Research and Deep Research Max via the Gemini API. Powered by Gemini 3.1 Pro, you can now trigger comprehensive research workflows with unprecedented control and transparency."

@Google

"Introducing our biggest upgrades to the Deep Research API yet... including Deep Research Max (our SOTA system), MCP support, Native charts & infographics, planning mode, full tool support (including Google tools), full multi-modal input support, & real-time progress streaming!"

@OfficialLoganK

"Gemini Deep Research and Deep Research Max update! Collaborative planning, native charts/infographics, MCP, multimodal inputs, code execution and #1 on DeepSearch QA and BrowseComp. - deep-research-preview-04-2026 - deep-research-max-preview-04-2026"

@_philschmid

"Introducing Deep Research and Deep Research Max"

u/ShreckAndDonkey123

Broadcast
Gemini Just Got a HUGE Update! (Deep Research Visuals)

Building a Research Agent with Gemini 3 + Deep Agents

Gemini's research feature gets a massive overhaul! The "Create" feature lets Deep Research build web pages, charts, and quizzes directly, and even generate custom applications