Google DeepMind UK workers unionize over military AI
TECH



Strategic Overview

  • 01.
    UK-based Google DeepMind workers voted to form the world's first union at a frontier AI lab after Google signed a classified Pentagon deal allowing the Department of Defense to use Gemini models for 'any lawful purpose.'
  • 02.
    Among CWU members below VP level, 98% backed unionization; the bid covers approximately 1,000 employees at DeepMind's London office.
  • 03.
    Workers gave Google 10 working days to recognize the Communication Workers Union and Unite as joint representatives or accept mediated negotiations before legal escalation.
  • 04.
    Demands include ending Google AI use by US and Israeli militaries, restoring the pre-2025 weapons/surveillance pledge, an independent ethics oversight body, stronger whistleblower protections, and the right to refuse work on moral grounds.

Deep Analysis

The Fine Print: Why Google's Pentagon Deal is 'Strictly Weaker' Than OpenAI's

The catalyst for the union is not military AI contracting in the abstract — it is a specific phrase in Google's classified Pentagon deal: the Department of Defense can use Gemini models for 'any lawful purpose.' On its face that sounds like boilerplate. In practice, AI policy researchers point out, it is doing extraordinary work. Charlie Bullock of LawAI argues that OpenAI's analogous Pentagon contract 'seemed like it did give some kind of contractual guarantee that the models weren't going to [be] used for certain kinds of mass domestic surveillance.' Google's deal, in his reading, contains no such carveout — and may in fact oblige Google to remove safety filters at government request. Cambridge's Seán Ó hÉigeartaigh calls the agreement 'strictly weaker' than OpenAI's.

That is the technical hook the workers are pulling on. Google's public statement responding to the backlash leans on a public-private 'consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.' But a consensus is not a contractual term. The union's demand for an independent ethics oversight body and the right to refuse work on moral grounds is, in effect, an attempt to install the guardrails the contract itself does not contain. Reading the dispute this way reframes it: the fight isn't about whether AI labs should ever work with militaries — Anthropic, OpenAI, Microsoft, AWS, Nvidia, SpaceX, and Reflection AI are all listed in DoD AI projects. It's about whether the contractual surface area of a single Gemini deal is too broad for the workers who built the model to live with.

Maven 2.0, Built Differently

On the surface this looks like a sequel: in 2018, roughly 4,000 Google employees signed an internal petition against Project Maven, and the contract was allowed to lapse. Eight years later, workers are again massing against a Pentagon deal. But the structure of dissent has changed — and the change is the point. Laura Nolan, the engineer who resigned over Maven, notes that Google has since dismantled the internal infrastructure that made 2018 possible: company-wide mailing lists were curtailed, layoffs thinned the activist core, and management has reorganized in ways 'they think... may even be able to replace engineers.' The informal channels that turned a moral objection into a corporate-wide petition no longer exist at the same scale.

That is why the response in 2026 is institutional rather than viral. Workers are not running a Google-Docs petition; they are filing for statutory recognition with the UK Central Arbitration Committee through CWU and Unite, two unions with experience converting workplace grievances into binding bargaining. As an anonymous DeepMind research scientist put it, 'one of the things we can look at through unionization is restoring that leverage.' The phrasing matters: the goal is not to revive the 2018 model but to substitute for it. Where Maven mobilized a culture, the DeepMind union is trying to manufacture leverage that culture alone can no longer produce — and to lock it in so that future contract decisions are not a function of which mailing lists happen to still exist.

By The Numbers: The Leverage Paradox

[Chart] Google worker mobilization against Pentagon/military AI contracts: 2018 Project Maven (~4,000 petition signers, contract canceled) vs. 2026 DeepMind union (~1,000 covered), Google-wide Pichai letter (600+), and DeepMind-specific letter (100+). The 2026 totals are smaller but more institutionally durable. Source: Fortune, Gizmodo, ComputerWeekly, The Next Web, Arms Control Association.

The numbers from this episode look, at first glance, like an escalation. The DeepMind UK union covers about 1,000 London-office employees and passed with 98% of CWU members below VP level voting yes. A separate open letter against the Pentagon deal collected 600+ signatures from Google employees including directors and VPs. An earlier DeepMind-only letter against weapons / autonomous targeting use drew 100+ signatures. Add the Andreas Kirsch–style public dissent from a senior research scientist who said he is 'incredibly ashamed' of the deal, and the appearance is of a workforce in revolt.

And yet the 2018 Maven petition — the one that actually killed the contract — had roughly 4,000 signatures. The 2026 mobilizations, even summed, do not match that scale. The disconnect is structural: Maven-era Google had broader internal communication tooling and a less hardened management posture; the post-layoff company has both fewer organizers and less appetite for compromise. The union is the workers' attempt to convert a smaller, more legible coalition into the kind of standing power that a 4,000-signature flash movement can no longer generate. Whether that conversion succeeds depends almost entirely on what Google does inside the 10-working-day recognition window — which is why the mechanical clock now matters more than the headline percentages.

Why Now: The Pledge, the Friction, and the Anthropic Tell

Three things had to converge in early 2026 to produce a frontier-AI-lab union, and missing any one of them probably kills the campaign. The first is the February 2025 deletion of Google's explicit pledge against AI weapons and surveillance from its public AI Principles. DeepMind staff have repeatedly said they felt 'duped' — they joined under one set of stated values and were now being asked to ship under a quieter set. That makes the dispute personal in a way 'we don't like this contract' alone never could. The second is the visible escalation of Project Nimbus and DeepMind technology in active military use; one anonymous DeepMind worker told ComputerWeekly that 'we don't want our AI models complicit in violations of international law, but they already are.' Worker letters and tweets repeatedly tie the deal to Gaza, which raises the moral stakes from policy abstraction to ongoing conflict.

The third — and most counterintuitive — is the Pentagon-Anthropic 'supply chain risk' episode. After Anthropic declined certain Pentagon work, the company was reportedly hit with a supply-chain risk designation. Union organizers are explicitly citing this as evidence that the U.S. government is, in their words, 'not a responsible partner.' That argument is a tell: pre-2025, public discourse treated AI-lab cooperation with the Pentagon as the sober choice and refusal as the activist choice. By 2026, dissenting workers are using the Pentagon's own friction with Anthropic to argue the opposite — that contracting under terms this loose is the unstable position. The combination of dropped pledge, visible deployment, and a real-world example of Pentagon arbitrariness is what made a private grumble into a 98% yes vote.

What Happens Next: The 10-Day Clock and the Contrarian Read

The mechanics from here are tighter than most readers will appreciate. The unions gave Google 10 working days to voluntarily recognize CWU and Unite as joint representatives. If Google declines, the case escalates to the UK Central Arbitration Committee, a statutory body with the power to compel recognition where a clear majority of an appropriate bargaining unit supports a union — a bar 98% comfortably clears. Unlike a US activist campaign, which ends when employees are quietly reassigned or laid off, a CAC ruling produces an enduring legal counterparty inside the company. That is the structural reason this story does not just fade if Google stays silent.

The contrarian read, surfaced repeatedly in community discussion of the vote, is that recognition does not equal product control: managerial prerogative over which contracts to sign is, in most jurisdictions, not subject to collective bargaining. Skeptics point out that all frontier AI is dual-use and that geopolitical realism means a Chinese lab would never permit equivalent dissent. Both points are real — and both are, in a way, the union's own thesis in disguise. Workers are not claiming a union can veto a Pentagon contract directly. They are claiming that an independent ethics oversight body, contractual whistleblower protections, and a documented right to abstain on moral grounds will change the cost calculus of signing the next 'any lawful purpose' deal. The next 10 working days will reveal whether Google treats that as a threat to be managed or a precedent to be set.

Historical Context

2018-04
Roughly 4,000 Google employees signed a letter to CEO Sundar Pichai demanding the company end Project Maven, a Pentagon contract that used Google AI to analyze drone surveillance footage.
2018-06
Google published its AI Principles, pledging not to develop technologies likely to cause overall harm and explicitly excluding weapons and surveillance violating internationally accepted norms.
2021
Google signed a $1.2 billion cloud and AI contract with the Israeli government known as Project Nimbus, later cited by union organizers as a primary reason for opposing military uses of DeepMind technology.
2025-02-04
Google quietly removed its explicit pledge to avoid AI weapons and surveillance from its public AI Principles; Demis Hassabis and James Manyika reframed the policy around democratic leadership in AI.
2025-04-26
The Financial Times first reported that around 300 London-based DeepMind employees were seeking to unionize over the dropped weapons pledge and Israeli military ties; staff said they felt 'duped' and at least five had quit.
2026-04
Workers held the formal unionization ballot, with 98% of CWU members below VP level voting in favor.
2026-04-29
More than 600 Google employees, including directors and VPs, signed an open letter to Sundar Pichai urging him to refuse the classified Pentagon AI deal.
2026-05-05
Workers sent a formal recognition letter to Google management asking it to recognize CWU and Unite as their joint representatives within 10 working days.

Power Map

Key Players


Communication Workers Union (CWU)

Lead union representing DeepMind workers; ran the ballot in which 98% of members below VP voted to unionize and submitted the formal recognition request to Google.


Unite the Union

Co-representative alongside CWU, named in the worker letter asking Google for joint recognition; expands the campaign's reach into broader UK industrial bargaining muscle.


Google DeepMind / Alphabet

The employer; signed the classified Pentagon Gemini deal, removed the explicit AI weapons pledge in February 2025, and now decides whether to voluntarily recognize the unions or be forced into negotiation.


U.S. Department of Defense (Pentagon)

Counterparty in the classified deal allowing DoD to use Gemini models for 'any lawful purpose'; its terms — and lack of contractual carveouts — are the trigger event for unionization.


Sundar Pichai (Google CEO)

Recipient of the 600+ employee open letter urging him to refuse the classified Pentagon AI deal; the executive whose response will determine whether this becomes Maven 2.0 or a new equilibrium.


UK Central Arbitration Committee

Statutory body that the unions will petition if Google declines voluntary recognition; the legal pressure point that makes a UK-based union, rather than US-style activism, structurally significant.

Source Articles


Analysts

"Frames unionization as tech workers using collective leverage to push Google away from ethically problematic military contracts: 'By exercising their rights to collectivise, they are in a strong position to demand their employer stop circling the ethical drain.'"

John Chadfield
National Officer for Tech Workers, Communication Workers Union (CWU)

"Argues Google's Pentagon contract is weaker than OpenAI's because it lacks contractual guarantees against domestic mass surveillance use, noting 'the OpenAI contract seemed like it did give some kind of contractual guarantee that the models weren't going to [be] used for certain kinds of mass domestic surveillance.'"

Charlie Bullock
Senior Research Fellow, LawAI U.S. Law and Policy team

"Calls Google's Pentagon agreement 'strictly weaker' than OpenAI's and finds the limited public attention to the deal troubling."

Seán Ó hÉigeartaigh
Research Professor, Centre for the Future of Intelligence, University of Cambridge

"Argues that cost-cutting, layoffs, and Google's post-Maven dismantling of internal mailing lists have weakened workers' informal organizing power, making the formal union route more important now: 'The companies want to redirect money into AI, and they think that this may even be able to replace engineers.'"

Laura Nolan
Former Google software engineer who resigned over Project Maven

"Casts unionization as restoring a power asymmetry that has eroded since Maven: 'One of the things we can look at through unionization is restoring that leverage.'"

Anonymous DeepMind research scientist
Research scientist, Google DeepMind

The Crowd

"I do not understand how this is "doing the right thing," and I think this violates "don't be evil" quite clearly on many levels. I personally feel incredibly ashamed right now to be Senior Research Scientist at Google DeepMind and I wonder how I'm supposed to do my work today"

@BlackHC

"What I find the most upsetting and disappointing though is that autonomous weapons and mass surveillance raise serious questions that warrant transparent discussion instead of intransparent contracts full of weasel words and what seems to be misleading public statements"

@BlackHC

"This is a research scientist at Google's DeepMind speaking out against the tech giant's reported deal with the Pentagon to sell it AI tools, including for use in autonomous weapon and mass surveillance:"

@bcmerchant

"Google Deepmind staff plan to join union against military AI"

u/Gari_305
Broadcast
Google's military AI deals get DeepMind UK staff to start unionising


DeepMind Workers Vote 98% to Unionize - TCR 05/05/26


Google DeepMind Staff Unionize Over AI Ethics Concerns in Gaza Conflict
