South Africa AI policy withdrawn over AI-hallucinated citations
TECH

Strategic Overview

  • 01.
    On April 26, 2026, South Africa's Minister of Communications and Digital Technologies Solly Malatsi withdrew the Draft National Artificial Intelligence Policy after officials confirmed that its reference list contained fictitious sources that appeared to have been generated by AI, voiding a public-comment window that was meant to run until June 10.
  • 02.
    A News24 investigation found that at least 6 of 67 academic citations were unverifiable; editors at the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed they had never published the articles credited to them, and one fabricated citation, 'Müller Schmidt 2024,' conflated real authors and journals into entirely synthetic scholarship.
  • 03.
    The errors affected three of the document's six policy pillars — Capacity and Talent Development, Economic Transformation, and Responsible Governance — comprising more than a third of the 86-page draft, which proposed five new oversight bodies: a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, and an AI Insurance Superfund modeled on the Road Accident Fund.
  • 04.
    Cabinet had approved the policy on March 25, 2026, with an additional sitting on April 1, and it was gazetted on April 10 — meaning the document was withdrawn just 16 days after publication, with Malatsi pledging 'consequence management' for those responsible for drafting and quality assurance.

Deep Analysis

The ouroboros: a policy meant to regulate AI was itself broken by AI

The defining feature of this incident is not that a government document contained errors — governments produce flawed documents constantly — but that the failure mode was a precise enactment of the very risk the document existed to address. South Africa's Draft National AI Policy proposed an entire regulatory architecture for AI risk, including an AI Ethics Board, an AI Ombudsperson, and an AI Insurance Superfund modeled on the Road Accident Fund. It was meant to be the country's foundational answer to questions like 'How do we prevent generative AI from passing fabricated content into consequential public processes?' Toby Shapshak crystallised the irony: 'The draft government policy intended to regulate AI has itself produced precisely the outcome it is meant to prevent.'

The specific shape of the failure is instructive. News24's verification work identified one citation — 'Müller Schmidt 2024,' supposedly in a European journal on AI regulation — that pattern-matches the classic LLM hallucination signature: a reference statistically plausible at every level (author surnames common in the field, journal name correctly themed, year fitting the literature) yet corresponding to nothing that exists. Three of the six pillars of the document — Capacity and Talent Development, Economic Transformation, and Responsible Governance — were affected, meaning the rot was not localised to one author or section but distributed across the drafting workflow. That distribution is what makes the incident a governance failure rather than a personnel failure: the same pattern of unverified AI-assisted drafting was repeated by multiple hands and survived multiple sign-offs, including a Cabinet approval on March 25 with a special sitting on April 1.

The 16-day collapse and what it reveals about quality-assurance gaps

The compressed timeline from publication to withdrawal is itself a data point. Cabinet approved the policy on March 25, 2026; it was gazetted on April 10; it was withdrawn on April 26 — 16 days after publication and roughly six weeks before the planned June 10 close of public comment. In a normal policy lifecycle, the public-comment window is the quality-control mechanism. Here, that mechanism was bypassed: an outside investigation by a newsroom did the verification work that should have happened inside the Department of Communications and Digital Technologies before Cabinet ever saw the document.

That sequencing exposes two distinct failure layers. The first is the drafting layer — whoever wrote the citation list either generated references via an LLM and never verified them, or accepted unsourced material from a contractor without spot-checking. The second is the assurance layer — Cabinet, with its special April 1 sitting, signed off on an 86-page document whose bibliography would not survive a single-evening audit by a journalist with access to academic databases. Parliamentary committee chair Khusela Sangoni-Diko's line about not seeking a 'scape-bot' targets exactly this layer: blaming the model is convenient because it deflects attention from the human signatures on the approval chain. The structural question her committee will now have to answer is why no automated citation-verification step exists between 'consultant submits draft' and 'Cabinet approves' — given that such a check could plausibly be performed with the very LLMs the policy was meant to regulate.
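The missing gate need not be elaborate. As a minimal sketch (hypothetical function names; a production version would query a bibliographic index such as Crossref or OpenAlex rather than a local title list), a fuzzy-matching pass can flag every citation that matches nothing in a trusted index for human review before sign-off:

```python
# Minimal sketch of an automated citation-verification gate for a
# drafting pipeline. Function names are hypothetical; in practice
# index_titles would be fetched from a bibliographic database.
from difflib import SequenceMatcher


def title_similarity(a: str, b: str) -> float:
    """Fuzzy, case-insensitive similarity between two titles (0.0-1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def is_verifiable(cited_title: str, index_titles: list[str],
                  threshold: float = 0.85) -> bool:
    """True if the cited title closely matches any indexed title."""
    return any(title_similarity(cited_title, t) >= threshold
               for t in index_titles)


def flag_unverifiable(bibliography: list[str],
                      index_titles: list[str]) -> list[str]:
    """Return citations matching nothing in the index -- candidates
    for human review before the document goes to sign-off."""
    return [c for c in bibliography if not is_verifiable(c, index_titles)]
```

A fabricated reference of the 'Müller Schmidt 2024' variety fails exactly this kind of check: its title matches nothing in any index, so it lands on a review list rather than in the Government Gazette.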

Mbuvha vs Ndlovu: easily remediable bug, or systemic governance failure?

The expert commentary divides cleanly along a fault line that will shape the redraft. Professor Rendani Mbuvha at Wits frames the episode as a marker of progress — 'I think these sorts of blunders are going to be the mainstay of the adoption of AI' — and as straightforwardly fixable: 'It seems to appear that when we drafted the policy, we let the AI hallucinate, but I think it's something that we can easily remedy.' His prescription is consultative — bring academia in, study the roughly seven African countries that have already adopted national AI policies, and rewrite. In this view, the citations were a process bug, not a constitutional flaw in how government uses AI.

TechCentral's Nkosinathi Ndlovu rejects the easy-fix framing. His core argument — 'the firing of an individual in the procedural chain will not magically produce a sound governance framework' — is that without three structural changes, the same incident will recur in a different department in six months. Those changes are: mandatory disclosure when AI tools are used to draft official documents; explicit citation-verification protocols built into the publication pipeline; and procurement transparency requiring outside consultants to declare which AI tools they used and how outputs were validated. The South African subreddit conversation amplifies this view, with the dominant argument that 'responsibility flows uphill' — the minister cannot offload accountability onto a junior official because the systemic gap is what allowed any junior official to slip fabricated material past Cabinet at all. The redraft will reveal which framing won: a quick rewrite with better reference-checking is the Mbuvha path; a published AI-use-disclosure standard for all departmental drafting is the Ndlovu path.

Why this is a global cautionary tale, not a South African anomaly

Read in isolation, the South Africa story looks like a domestic embarrassment. Read in context, it is the first high-profile case of an AI-hallucination scandal striking the policy layer rather than the litigation or consulting layers — and the prior cases in those layers point to where the policy layer is heading. Mata v. Avianca in 2023 established that US courts would sanction lawyers for filing AI-fabricated citations. By April 2026, NPR was reporting that hallucination penalties were stacking up routinely across the US legal system, with monetary sanctions including a $109,700 order against an Oregon attorney. An AI-hallucination case database maintained by researcher Damien Charlotin tracks more than 1,200 documented cases globally of fabricated AI-generated material in legal and government documents, roughly 800 of them in US courts. The South African Legal Practice Council had already begun developing an AI governance framework in 2025 after local cases of fabricated case law surfaced.

The Deloitte Australia precedent from August 2025 is the closest analogue and the one Reddit commenters keep raising: a major consultancy refunded fees to a government department after its report contained fabricated academic references and a quote attributed to a non-existent court judgment. The pattern is now stable enough to predict — wherever knowledge work is being accelerated by LLMs without verification protocols, hallucinated citations will eventually surface at high cost. South Africa's contribution to that pattern is to demonstrate that the policy-drafting workflow itself is no more immune than litigation or consulting, and that an entire national AI strategy can be voided by a single under-checked bibliography. For governments racing to publish their own AI frameworks — and the Wits expert noted at least seven African peers already have — the cheapest lesson is to install the citation-verification step before publishing, not after a journalist forces the issue.

Historical Context

2023-06
A Manhattan federal judge sanctioned two lawyers who filed a brief citing six entirely invented court decisions generated by ChatGPT — the first major US 'AI hallucination' sanction case and a foundational reference point for every subsequent fabricated-citation scandal.
2025-01
After fabricated case law surfaced in two local South African legal matters, the Legal Practice Council began developing an AI governance framework for legal professionals — a domestic warning shot that preceded the policy scandal by more than a year.
2025-05
A US federal judge ordered two attorneys representing MyPillow CEO Mike Lindell to each pay $3,000 after they submitted an AI-generated court filing with more than two dozen mistakes, including hallucinated cases.
2025-08
Deloitte refunded fees to the Australian Department of Employment and Workplace Relations after a consulting report contained fabricated academic references and a quote attributed to a non-existent court judgment — a precedent commentators are now using to demand similar refunds and disclosure protocols in South Africa.
2026-04-03
NPR reported that AI-hallucination penalties were stacking up across US courts, with monetary sanctions on attorneys becoming routine as the technology spreads through the legal system.
2026-04-10
The Draft National AI Policy was officially gazetted for a 60-day public comment period running to June 10, 2026 — a window cut short 16 days later by News24's investigation and Minister Malatsi's withdrawal order.

Power Map

Key Players
Subject

South Africa AI policy withdrawn over AI-hallucinated citations

SO

Solly Malatsi

Minister of Communications and Digital Technologies who ordered the withdrawal of the draft AI policy and pledged 'consequence management' for those responsible for drafting and quality assurance, conceding the lapse proves the necessity of human oversight over AI in government workflows.

DE

Department of Communications and Digital Technologies (DCDT)

The government department that drafted the National AI Policy and now must redraft it, facing internal accountability review over its quality-assurance failures and a parliamentary directive to produce a replacement without using ChatGPT.

NE

News24

South African publication whose investigation surfaced the fictitious AI-generated references in the draft policy and triggered the withdrawal within days of publication, having independently checked citations against academic databases and contacted the named journals' editors.

KH

Khusela Sangoni-Diko

Chair of the parliamentary portfolio committee on communications who publicly urged withdrawal and called for the redraft to avoid using ChatGPT, criticizing officials for seeking a 'scape-bot' instead of fixing systemic accountability issues.

CA

Cabinet of South Africa

Approved the flawed draft AI policy on March 25, 2026, with an additional sitting on April 1, before its publication in the Government Gazette, raising broader questions about cross-government quality control on AI-assisted policy documents.

SO

South African Journal of Philosophy, AI & Society, and Journal of Ethics and Social Philosophy

Academic journals whose editors confirmed to News24 that articles credited to their publications in the policy reference list had never been published there, providing the on-record verification that escalated the scandal from suspicion to confirmed fabrication.

Source Articles

Top 5

THE SIGNAL.

Analysts

"Frames the failure as proof that human verification is irreplaceable when generative AI is used in government drafting workflows, calling the lapse 'unacceptable' and committing to consequence management against those who drafted and signed off on the document."

Solly Malatsi
Minister of Communications and Digital Technologies, South Africa

"Argues this kind of blunder will become 'the mainstay' of AI adoption — itself a marker of growing AI use in policymaking — and contends the fix is straightforward: consult academia and the roughly seven African countries that have already adopted national AI policies before redrafting."

Professor Rendani Mbuvha
Wits University School of Statistics and Actuarial Science

"Highlights the recursive irony that a draft policy explicitly designed to regulate AI was itself contaminated by precisely the harmful AI behaviour — hallucinated content presented as fact — that the policy was meant to prevent."

Toby Shapshak
South African technology journalist and analyst

"Pushes back on individual blame, arguing that 'the firing of an individual in the procedural chain will not magically produce a sound governance framework,' and proposes three structural fixes: mandatory disclosure of AI use in policy drafting, citation-verification protocols, and procurement transparency on consultants' AI tooling."

Nkosinathi Ndlovu
TechCentral columnist

"Insists that any redraft must be produced without leaning on ChatGPT, warning against the search for a 'scape-bot' to absorb blame and arguing accountability must rest with the human officials who signed off on a defective document."

Khusela Sangoni-Diko
Chair, Parliamentary Portfolio Committee on Communications
The Crowd

"South Africa has withdrawn its first draft national AI policy after revelations that it contained fictitious sources in its reference list which appeared to have been AI-generated."

@ReutersAfrica

""Rather ironically, some of the citations from South Africa's Draft National Artificial Intelligence Policy appear to be fabricated.""

@RobertFreundLaw

"[ON AIR] Government's plans to regulate artificial intelligence in South Africa are under scrutiny, after an explosive investigation by News24 revealed that parts of the country's Draft National AI Policy may be built on fictitious research. @CommsZA Minister @SollyMalatsi was [interviewed]"

@SAfmRadio

"Malatsi withdraws AI policy after fictitious sources scandal"

u/Beyond_the_one
Broadcast
Government Caught Red Handed Using AI In Their Draft AI Policy

South Africa's Fatal AI Policy Mistake

SA introduces draft AI regulation policy