
Credibility, Legitimacy and South Africa’s AI Policy Moment
This morning I spoke on the radio about South Africa’s decision to withdraw its draft national AI policy after fabricated references were discovered in the text. It is an uncomfortable moment, but it may be a useful one. Ultimately, while the credibility of the policy may be jeopardized, its legitimacy should not be in question.
The AI Policy That Matters
The document in question is the Draft South Africa National Artificial Intelligence (AI) Policy, published for public comment in April 2026 after Cabinet approval in late March.
Importantly, this is not just an AI strategy, but an AI policy. A strategy sets direction and priorities, but a policy defines the rules, allocates authority, and establishes the basis for regulation and enforcement. Across Africa, the majority of AI governance instruments are strategies, not policies. Nigeria, Ghana, Kenya and Zimbabwe, for example, have all recently published AI strategies. Rwanda is formulating a policy. South Africa, until this past weekend, had published an actual draft policy.
For this reason, together with its substantive content and what it aims to address, the Draft South Africa National Artificial Intelligence Policy is critically important: it is intended to set the direction for how South Africa governs AI across the public sector, the economy, and society more broadly.
The draft policy itself is explicit about what is at stake. It warns that, without a coherent national approach, South Africa risks deepening inequality, falling behind economically, and losing the ability to safeguard its own sovereignty in a rapidly evolving technological landscape.
These are not abstract concerns, but live and ongoing issues. As Nathan Ross-Adams points out in an excellent piece, a series of negotiations is under way between the South African government and key players such as AWS and Microsoft, which are looking to make major AI investments in the country. These are precisely the negotiations the policy is meant to govern, on terms that reflect the country’s best interests. As Ross-Adams highlights, “every day without a framework shifts negotiating power to counterparties already transacting without conditions.”
The draft policy sets out a clear national vision of “AI for inclusive economic growth, job creation, cost reduction, and a developing Africa.”
It is not vague about where this will land. It identifies sectors such as education, healthcare, agriculture, and public administration as immediate sites of AI deployment, where responsible AI use could help advance access to basic services across the country.
The draft policy is designed to shape real decisions on how public services are delivered, how data is governed, how South Africa negotiates with large technology companies, and how economic value is distributed.
A Serious and Contextual Policy Effort
It is important to say plainly: much of this draft policy is very good. At the Global Center on AI Governance, we lead the Global Index on Responsible AI and see how often AI policies and strategies pay lip service to the ethical and responsible use of AI, fail to meaningfully address human rights, ignore the implications of AI on inequality, disability and gender empowerment, and leave out the development of mechanisms to provide redress and remedy for those negatively impacted by the technology.
South Africa’s draft policy avoids these failings. It is rooted in South Africa’s constitutional framework and explicitly ties AI governance to the protection and advancement of rights. It recognises the country’s historical inequalities and frames AI as both an opportunity and a risk in that context.
It does something many national AI strategies avoid, which is to name the structural constraints of the digital divide, institutional inertia and uneven access to infrastructure. The draft policy treats these structural issues as central to policy design rather than peripheral concerns.
There is also real institutional ambition here. The policy proposes the establishment of an AI Ethics Board, a National AI Commission, and even an AI Insurance Superfund to ‘provide a safety net in cases where liability is difficult to determine, especially for systems operating in ambiguous decision-making spaces’.
It is not a lightweight document, nor was it developed lightly.
The draft policy emerges from a multi-year process, building on the 2024 framework, public submissions, and extensive consultation across government, academia, and industry. I’ve been part of those consultations for a number of years, and experts from the Global Center on AI Governance have participated in shaping the draft policy.
Given that many governments around the world have rushed to publish AI strategies and then struggled to implement them (see research from the Global Index on Responsible AI, forthcoming July 2026), the development of South Africa’s draft policy has, in many respects, been rigorous and deliberative. This is the kind of process that lays the foundations for a wide range of stakeholders to stand behind the policy and support its implementation.
Thus, what has been lost in some of the early reactions to the fictitious references found in the policy is that this is, in many respects, a strong policy. It reflects South Africa’s history and institutional context. It grapples with inequality, capacity constraints, and the realities of governing technology in a middle-income country with global ambitions. The provisions are substantive. The process that produced them has been lengthy and consultative, involving government, academia, civil society, and international partners. This was not a rushed or superficial effort.
What Went Wrong
On 26 April 2026, the Minister of Communications and Digital Technologies published a statement withdrawing the draft policy from public comment. The withdrawal followed News24’s discovery that some of the references cited in the draft policy could not be verified and were fictitious. In plain terms, the document cites sources that do not exist.
This points to the likely use of generative AI tools in drafting or research support, without sufficient verification. Tools of this kind are known to produce fabricated citations that appear convincing. AI models “hallucinate” - that is, they generate plausible-sounding but false information - particularly when they lack sufficient training data to ground their outputs.
What went wrong was that the department’s internal processes failed to pick up on the fictitious references. The draft policy was tabled before Cabinet and ultimately gazetted for public comment, without sufficient internal checks to ensure the document’s integrity.
This has led a number of experts to call into question the credibility, integrity and legitimacy of the entire draft policy. The fictitious references suggest that the evidentiary basis of the policy document is compromised. How can the government govern AI use across the country when it cannot use AI responsibly itself?
There are a number of points to make in response to this question, set out below.
South Africa’s Broader Position
In the first edition of the Global Index on Responsible AI, published in May 2024, South Africa ranked highest on the African continent. That ranking reflected relatively strong performance across areas such as governance frameworks, data protection and public sector readiness. It is also consistent with a broader pattern of engagement. South Africa has been active in multilateral discussions on AI, and has sought to position itself as a regional leader. The current incident should not negate that position.
Governments are Using AI
It would be naive to think that governments should not be using AI. They should use it responsibly and be held to the highest standards in doing so. Indeed, governments are now increasingly expected to use AI.
In some countries, virtual AI ministers are being appointed. In September 2025, Albania appointed the virtual AI minister Diella, tasked primarily with ensuring zero corruption in public procurement processes. Many countries also have government units tasked with integrating generative tools into policy workflows. The boundary between governing AI and governing with AI is becoming increasingly thin. See here for a policy brief outlining how governments should be using AI.
But without clear protocols - such as verification of sources, disclosure of AI use, and accountability for outputs - these tools can degrade the quality of decision-making. The South African case is simply an early and visible example.
This kind of error can be prevented in future through the development and implementation of departmental policies that set out the appropriate and responsible use of AI within the department (and any government agency), including requirements to fact-check all generative AI outputs.
But it is also important to recognise the structural issues that make using AI riskier in African contexts.
The Heightened Risk of AI Use in Africa
AI systems are more prone to error in contexts where data is sparse or uneven. African contexts are routinely underrepresented in the datasets that underpin these models. The result is not just bias in outputs, but a higher likelihood of hallucinations. In practice, this means that when you ask an AI model for sources on African policy or research, the model is more likely to “fill in the gaps” with plausible-sounding but nonexistent material. Emerging research confirms this point.
This is not widely understood outside technical circles, but it should be. It has direct implications for how these tools are used in government.
Ironically, these are precisely the kinds of issues that the draft policy was attempting to correct by promoting the development of locally-driven AI capabilities that would work to fill these kinds of gaps.
Don’t Go Back to the Drawing Board
The response from the South African government has been notably direct. The decision to withdraw the policy signals a willingness to prioritise credibility over expediency. This is a bold choice, given that - as Ross-Adams has highlighted - this kind of thing has happened before, and other governments have simply corrected the error and proceeded with publication. The South African government's response demonstrates a high degree of institutional accountability, even if it comes with short-term reputational cost.
It is important, however, that this does not trigger a return to the beginning. There is no need to go back to the drawing board. Much of the policy is sound. The task now is to restore confidence in its foundations. This will involve checking and verifying sources, tightening internal processes of quality control, and putting in place clear protocols for the use of AI in drafting and research. Tools for source verification exist. More importantly, so do well-established practices of academic and policy due diligence.
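To make the point about verification tools concrete, here is a minimal sketch of one automated check, assuming that references carry DOIs and that the public Crossref REST API is an acceptable lookup source (both assumptions are mine, not anything specified in the draft policy or the department's processes).

```python
# Minimal sketch: flag cited DOIs that do not resolve in the Crossref registry.
# Assumptions (mine, for illustration): references include DOIs, and the public
# Crossref REST API (https://api.crossref.org/works/{doi}) is an acceptable
# verification source. A resolving DOI does not prove the source supports the
# claim it is cited for; human review is still required.

import requests


def doi_resolves(doi: str) -> bool:
    """Return True if the DOI is registered with Crossref (HTTP 200)."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200


if __name__ == "__main__":
    cited_dois = [
        "10.1038/s41586-020-2649-2",   # a real, resolvable DOI
        "10.9999/made-up-reference",   # hypothetical DOI that should not resolve
    ]
    for doi in cited_dois:
        status = "found" if doi_resolves(doi) else "NOT FOUND - flag for manual review"
        print(f"{doi}: {status}")
```

A check like this can only flag identifiers that do not resolve; confirming that a real source actually supports the claim it is cited for, and verifying references without DOIs, remains a human task.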
There is also the fact that the matters the draft policy seeks to govern are live and ongoing. The review process should not be extended longer than it needs to be, as the policy will be directly and immediately applicable to a number of key ongoing negotiations (again, as Ross-Adams has pointed out).
An Opportunity for Strengthening Public Debate on Responsible AI
It is also worth paying attention to how this debate is landing publicly. Research by the Global Center on AI Governance on public perceptions of AI in Africa shows a mix of optimism and concern. People see potential in these technologies, but they are wary of misuse and exclusion. The research also indicated that one third of South Africans have never heard of AI, and that another third have only a very limited understanding of the concept.
Incidents like this help shape and strengthen public debate on AI. The discussion that has followed is already prompting a more informed conversation about what these systems can and cannot do.
Risk incidents of this kind, where there is no material impact on individuals or communities, can in this way serve a constructive function. They make visible the limitations of these systems, and they force a closer examination of how AI is being used, and where and why human oversight remains essential.
Going Forward
The priority now is to restore confidence in the document without losing momentum. Its credibility has been undermined, but perhaps not its legitimacy. Credibility rests on the accuracy of the policy’s evidence, while legitimacy comes from the authority of its process and mandate. The task is to verify and correct, and to move quickly to republish a revised draft and finalise the policy so that this moment does not stall what is, in many respects, a strong and necessary framework.
