Google DeepMind Releases Gemini Robotics-ER 1.6

Strategic Overview

  • 01.
    Google DeepMind launched Gemini Robotics-ER 1.6 on April 14, 2026, a major upgrade to its embodied reasoning model. A new agentic vision approach lifts instrument-reading accuracy to 93%, up from 23% on the prior version. The model specializes in visual and spatial understanding, task planning, and native tool calling.
  • 02.
    Boston Dynamics has already integrated the model into its Orbit AIVI-Learning platform for Spot robot facility inspections, going live for enrolled customers on April 8 — six days before the public announcement. The collaboration focused on enabling robots to autonomously read complex gauges, sight glasses, and digital readouts.
  • 03.
    DeepMind describes it as its safest robotics model to date, with hazard identification improving by 6% on text and 10% on video over the baseline. The model is available via the Gemini API and Google AI Studio and supports over 1 million input tokens across text, images, video, and audio.

From 23% to 93%: How Agentic Vision Solves the Instrument Reading Problem

The headline number is instrument reading accuracy: 23% on the prior model, 93% with agentic vision on ER 1.6. In error-rate terms, misreads fall from 77% to 7%, roughly an eleven-fold reduction, and a qualitative shift from a capability that was essentially broken to one that is production-ready. The model's 86% baseline accuracy without the agentic pipeline would be notable on its own, but agentic vision pushes it into territory where autonomous facility inspection becomes commercially viable.

What makes this technically distinctive is the multi-step reasoning approach. Rather than attempting to read a gauge in a single inference pass, the model takes intermediate steps: first zooming into an image to get a better read of small details, then using pointing and code execution to estimate proportions, and finally applying world knowledge for interpretation. As the DeepMind blog explains, reading instruments requires the model to "precisely perceive a variety of inputs — including the needles, liquid level, container boundaries, tick marks." This decomposition of a perceptual task into an agentic workflow, in which the model decides what additional information it needs and acts to gather it, is a fundamentally different architecture from simply scaling up a vision model (see the sketch below). For comparison, Gemini 3.0 Flash achieved only 67% on the same task, suggesting that raw model scale alone does not solve this problem without the specialized embodied reasoning and agentic pipeline.
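To make the decomposition concrete, here is a minimal, illustrative sketch of what such a zoom-point-compute loop could look like. This is not DeepMind's pipeline: the model calls are stubbed with fixed coordinates, and the feature names and PSI scale are invented for the example. Only the structure (zoom in, point at features, run code over the points) mirrors the description above.

```python
# Illustrative sketch of an "agentic vision" gauge-reading loop.
# Model calls are stubbed; in a real system each stub would be a
# request to an embodied-reasoning model (e.g. via the Gemini API).
import math

def model_locate_gauge(image):
    """Stub: ask the model for the gauge's bounding box (normalized coords)."""
    return {"ymin": 220, "xmin": 180, "ymax": 640, "xmax": 600}

def crop(image, box):
    """Stub: the zoom step. A real implementation returns cropped pixels."""
    return image

def model_point_at(image, target):
    """Stub: ask the model to point at a named feature; returns (y, x)."""
    points = {
        "dial center": (430, 390),
        "needle tip": (300, 520),
        "min tick (0 PSI)": (560, 250),
        "max tick (100 PSI)": (560, 530),
    }
    return points[target]

def angle_from_center(center, point):
    """Angle of a point around the dial center, in degrees (y axis points down)."""
    dy, dx = point[0] - center[0], point[1] - center[1]
    return math.degrees(math.atan2(dy, dx))

def read_gauge(image, scale_min=0.0, scale_max=100.0):
    # Step 1: zoom — find the instrument and crop to it for a detail pass.
    detail = crop(image, model_locate_gauge(image))
    # Step 2: point — locate the features needed to measure proportion.
    center = model_point_at(detail, "dial center")
    needle = model_point_at(detail, "needle tip")
    lo = model_point_at(detail, "min tick (0 PSI)")
    hi = model_point_at(detail, "max tick (100 PSI)")
    # Step 3: code execution — interpolate the needle's clockwise angle
    # between the min and max tick angles to estimate the reading.
    a_lo = angle_from_center(center, lo)
    a_hi = angle_from_center(center, hi)
    a_needle = angle_from_center(center, needle)
    sweep = (a_hi - a_lo) % 360          # total dial sweep, clockwise
    travelled = (a_needle - a_lo) % 360  # needle travel from the min tick
    frac = travelled / sweep
    return scale_min + frac * (scale_max - scale_min)

print(f"estimated reading: {read_gauge(image=None):.1f} PSI")
```

With the stubbed coordinates this prints roughly 67 PSI. The design point the sketch is meant to illustrate: the model never guesses the number directly. It gathers geometric evidence as points, and deterministic code does the interpolation.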

Boston Dynamics Already Shipped: What Pre-Announcement Deployment Reveals

One of the most commercially significant details is buried in the timeline: Boston Dynamics went live with AIVI-Learning powered by Gemini Robotics-ER 1.6 on April 8, 2026 — a full six days before Google DeepMind's public announcement. This means paying enterprise customers were already running production workloads on the new model before the broader developer community even knew it existed.

This deployment pattern signals that the Boston Dynamics partnership is not a promotional arrangement but a genuine co-development relationship. Marco da Silva, VP and General Manager of Spot at Boston Dynamics, stated that "capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously." The focus on thermometers, pressure gauges, and chemical sight glasses points to specific industrial verticals — energy, chemical processing, manufacturing — where autonomous inspection has immediate ROI. The fact that Google Cloud serves as the infrastructure partner for this integration suggests a broader enterprise cloud strategy, where robotics AI models become a differentiated offering within Google's cloud platform rather than a standalone research project.

A Robotics Brain Available via Standard API: Platform Implications

Perhaps the most strategically important aspect of ER 1.6 is its availability through the Gemini API and Google AI Studio under the model identifier gemini-robotics-er-1.6-preview. This is not a closed research model or a hardware-locked capability — it is a cloud API that any developer can call with text, images, video, and audio inputs, supporting over 1 million input tokens. The model's native tool calling capability, including the ability to invoke Google Search and vision-language-action models, positions it as an orchestration layer for physical AI systems.
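As a sketch of what calling the model could look like, the following uses the google-genai Python SDK with the gemini-robotics-er-1.6-preview identifier quoted above. The image file and prompt are placeholders, and whether this preview model accepts the Google Search tool shown in the config is an assumption made here to illustrate the native tool calling described in the announcement.

```python
# Minimal sketch: querying the model through the Gemini API with the
# google-genai SDK (pip install google-genai). Assumes GEMINI_API_KEY
# is set in the environment; file name and prompt are illustrative.
from google import genai
from google.genai import types

client = genai.Client()

with open("boiler_room.jpg", "rb") as f:
    image = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.6-preview",  # identifier from the article
    contents=[
        image,
        "Read the pressure gauge in this image. "
        "Report the value, units, and your confidence.",
    ],
    # Illustrates native tool calling: letting the model ground its
    # interpretation with Google Search (an assumption for this model).
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```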

This API-first approach has significant ecosystem implications. The Gemini Robotics family has already attracted partners including Apptronik (humanoid robots), Agile Robots, Agility Robotics, and Enchanted Tools as trusted testers. By making the embodied reasoning model broadly accessible, Google is effectively trying to establish Gemini Robotics-ER as the default reasoning backbone for third-party robotics platforms — much as cloud AI APIs became the default for software applications. The competitive pressure this creates in the physical AI market is substantial: competitors must now match not just model capability but API accessibility and the growing integration ecosystem that comes with it.

Historical Context

2023-12-01
Demis Hassabis revealed DeepMind was exploring combining Gemini with robotics during the Gemini 1.0 announcement.
2025-03-12
Launched the initial Gemini Robotics and Gemini Robotics-ER models, based on Gemini 2.0, in partnership with Apptronik.
2025-06-24
Released Gemini Robotics On-Device, a variant optimized to run locally on robotic hardware.
2025-09-01
Released Gemini Robotics 1.5, described at the time as its most capable vision-language-action model, together with Gemini Robotics-ER 1.5.
2026-04-08
AIVI-Learning powered by Gemini Robotics-ER 1.6 went live for all enrolled customers, six days before the public model launch.
2026-04-14
Publicly launched Gemini Robotics-ER 1.6 with enhanced spatial reasoning, instrument reading, and improved safety via the Gemini API.

Power Map

Key Players

Google DeepMind

Developer of Gemini Robotics-ER 1.6, advancing physical AI through embodied reasoning models that bridge digital intelligence and physical robotic action.

Boston Dynamics

Key partner that collaborated on instrument reading capability and integrated the model into Orbit AIVI-Learning for Spot robot facility inspections, live for customers since April 8, 2026.

Google Cloud

Infrastructure partner enabling cloud-based model delivery for the Boston Dynamics AIVI-Learning integration.

Apptronik, Agile Robots, Agility Robotics, Enchanted Tools

Early partners and trusted testers of the broader Gemini Robotics model family, signaling a growing commercial ecosystem.

THE SIGNAL.

Analysts

"Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously."

Marco da Silva
VP and General Manager of Spot, Boston Dynamics

"Introducing Gemini Robotics ER 1.6, our new SOTA robotics model which excels at visual and spacial reasoning, now available via the Gemini API!"

Logan Kilpatrick
Google (Gemini API team)
The Crowd

"Introducing Gemini Robotics ER 1.6, our new SOTA robotics model which excels at visual and spacial reasoning, now available via the Gemini API!"

@OfficialLoganK (Logan Kilpatrick, Gemini API team, Google)
Broadcast
Gemini Robotics: Bringing AI to the physical world

Gemini Robotics 1.5: Enabling robots to plan, think and use tools to solve complex tasks

Gemini Robotics: Developing the next generation of humanoid robots with Apptronik
