The Fine Print: Why Google's Pentagon Deal Is 'Strictly Weaker' Than OpenAI's
The catalyst for the union drive is not military AI contracting in the abstract; it is a specific phrase in Google's classified Pentagon deal: the Department of Defense can use Gemini models for 'any lawful purpose.' On its face that sounds like boilerplate. In practice, AI policy researchers point out, it is doing extraordinary work. Charlie Bullock of LawAI argues that OpenAI's analogous Pentagon contract 'seemed like it did give some kind of contractual guarantee that the models weren't going to [be] used for certain kinds of mass domestic surveillance.' Google's deal, in his reading, contains no such carveout, and may in fact oblige Google to remove safety filters at government request. Cambridge's Seán Ó hÉigeartaigh calls the agreement 'strictly weaker' than OpenAI's.
That is the technical hook the workers are pulling on. Google's public statement responding to the backlash leans on a public-private 'consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.' But a consensus is not a contractual term. The union's demands for an independent ethics oversight body and for the right to refuse work on moral grounds are, in effect, an attempt to install the guardrails the contract itself does not contain. Reading the dispute this way reframes it: the fight isn't about whether AI labs should ever work with militaries (Anthropic, OpenAI, Microsoft, AWS, Nvidia, SpaceX, and Reflection AI are all listed in DoD AI projects); it's about whether the contractual surface area of a single Gemini deal is too broad for the workers who built the model to live with.