The ouroboros: a policy meant to regulate AI was itself broken by AI
The defining feature of this incident is not that a government document contained errors (governments produce flawed documents constantly) but that the failure mode was a precise enactment of the very risk the document existed to address. South Africa's Draft National AI Policy proposed an entire regulatory architecture for AI risk, including an AI Ethics Board, an AI Ombudsperson, and an AI Insurance Superfund modeled on the Road Accident Fund. It was meant to be the country's foundational answer to questions like 'How do we prevent generative AI from injecting fabricated content into consequential public processes?' Toby Shapshak crystallised the irony: 'The draft government policy intended to regulate AI has itself produced precisely the outcome it is meant to prevent.'
The specific shape of the failure is instructive. News24's verification work identified one citation, 'Müller Schmidt 2024,' supposedly in a European journal on AI regulation, that pattern-matches the classic LLM hallucination signature: a reference statistically plausible at every level (author surnames common in the field, journal name correctly themed, year fitting the literature) yet corresponding to nothing that exists. Three of the document's six pillars, Capacity and Talent Development, Economic Transformation, and Responsible Governance, were affected, meaning the rot was not localised to one author or section but distributed across the drafting workflow. That distribution is what makes the incident a governance failure rather than a personnel failure: the same pattern of unverified AI-assisted drafting was repeated by multiple hands and survived multiple sign-offs, including Cabinet approval on March 25 and a special sitting on April 1.