The Equal Protection Inversion: DOJ Uses an Anti-Discrimination Clause Against an Anti-Discrimination Law
The headline angle of the DOJ's intervention is not that it sided with Elon Musk — it is the constitutional theory the Civil Rights Division chose to do it with. SB 24-205 is, on its face, an anti-discrimination law: it requires developers and deployers of high-risk AI to use 'reasonable care' to prevent algorithmic outputs that disadvantage protected groups in employment, housing, healthcare, education, and finance. The Department of Justice's filing argues that this very structure violates the Fourteenth Amendment's Equal Protection Clause, because the law obligates companies to attend to disparate impact on protected groups while exempting discrimination intended to advance 'diversity' or to 'redress historic discrimination.' In DOJ's framing, that asymmetry transforms a neutral-sounding statute into a state mandate to discriminate.
This is a striking inversion of how Equal Protection has historically been deployed. The Civil Rights Division — the office created to enforce the Reconstruction-era anti-discrimination guarantees — is now using that constitutional toolkit against a state law explicitly designed to surface bias in algorithmic decision-making. Assistant Attorney General Harmeet K. Dhillon's public framing leaves no doubt about the political theory underneath the legal one: 'Laws that require AI companies to infect their products with woke DEI ideology are illegal.' Whether courts adopt that reading is a separate question, but the maneuver itself rewires which side of the bias debate gets to claim the Fourteenth Amendment. If it succeeds, every state algorithmic-fairness statute that treats disparate impact as actionable will face the same equal-protection vulnerability.