Operationalizing AI Policy in Africa with the NIST AI Risk Management Framework

Artificial intelligence (AI) is increasingly recognized as a technology that can improve government operations in Africa. It enhances policy processes, strengthens decision-making, increases efficiency, and transforms public service delivery. Yet, public sector adoption of AI systems in African countries remains low.

While challenges such as limited infrastructure and skills gaps play a role, a major constraint lies in the policy landscape. As of 2025, 29 African countries had launched or drafted national AI policies. However, many of these policies are not actionable. While they emphasize the need for responsible and ethical AI systems, they fail to explain how to achieve them; they are too high-level and abstract to guide implementation.

Take the African Union’s Continental Artificial Intelligence Strategy, which emphasizes inclusivity, cultural representation, and linguistic diversity in AI systems. Or Kenya’s national AI strategy, which states that “the strategy will be implemented with transparency, equity, explainability, and accountability in AI systems.” What do these terms mean in practice, especially for public officials who are not versed in such technical jargon? How does one ensure and measure fairness and accountability? When a procurement officer evaluates a third-party AI system, what tests prove the system is transparent? What documentation demonstrates accountability?

A major reason these policies remain unactionable is that they are largely written by non-technical experts. Legal and policy professionals understand the principles AI systems should uphold, which is essential, given the socio-technical nature of AI systems. Nevertheless, they often lack the technical expertise to translate those principles into operational requirements.

Consequently, officials must evaluate AI systems without clear operational criteria for assessing whether they meet the high-level principles. This challenge is compounded when systems are developed by third-party vendors who have different standards. The absence of actionable policy and enforceable guardrails creates uncertainty for public agencies. Faced with high-level standards, unclear operational pathways, and uncapped liability, public officials have strong incentives to delay or avoid AI adoption altogether.

Bridging Policy and Practice

What public officials need is an operational layer between policy and practice. The Artificial Intelligence Risk Management Framework, developed by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, provides precisely this.

The Risk Management Framework is a flexible, non-prescriptive framework designed to help organizations manage AI risks and responsibly build and deploy trustworthy AI systems. Unlike high-level strategies that state what should be achieved, the framework provides a structured process for achieving it. It translates broad principles (e.g., fairness, accountability) into concrete processes for designing, evaluating, and deploying AI systems.

The framework organizes AI risk management into four functions:

  • Govern: Establish oversight structures, accountability, and organizational policies.
  • Map: Identify the context, purpose, and potential impacts of AI systems.
  • Measure: Analyze, assess, benchmark, and monitor AI risks.
  • Manage: Prioritize and implement risk treatments and safeguards.

For public officials, these functions provide a clear operational pathway: Govern answers “who is accountable, the agency or the vendor?”; Map answers “what will this system do in our context, and what could go wrong?”; Measure answers “is performance consistent across all demographics?”; and Manage answers “what safeguards must be in place before deployment?”
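To make the Measure question concrete, here is a minimal, hypothetical sketch of the kind of check it implies: comparing a model's accuracy across demographic groups and flagging large gaps. The group names, evaluation records, and the disparity threshold are illustrative assumptions, not part of the NIST framework itself.

```python
def accuracy_by_group(records):
    """records: list of (group, predicted_label, true_label) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def flag_disparity(records, max_gap=0.05):
    """Return per-group accuracy and whether the worst gap exceeds max_gap."""
    acc = accuracy_by_group(records)
    gap = max(acc.values()) - min(acc.values())
    return acc, gap > max_gap

# Illustrative evaluation records: (demographic group, prediction, truth)
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 0, 0),
    ("rural", 1, 0), ("rural", 0, 0), ("rural", 1, 1), ("rural", 0, 1),
]
acc, disparate = flag_disparity(records)
print(acc)        # per-group accuracy: urban 1.0, rural 0.5
print(disparate)  # True: the gap far exceeds the 0.05 threshold
```

A sector-wide profile could fix the threshold and the groups to test, so that a procurement officer only needs to ask vendors for the resulting report rather than design the test themselves.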

The framework is a tool, not an ideology. It does not prescribe what principles AI systems must uphold, whether individualistic Western values like autonomy and privacy, or communitarian African values like Ubuntu and collective welfare. Instead, it provides a process for operationalizing whatever principles policymakers decide. For example, if a government decides to uphold Ubuntu values, the Map function would identify community impacts and collective harms; the Measure function would assess effects on social cohesion; and the Manage function would prioritize safeguards that protect communal interests.

Think of national and continental AI strategies as a country’s constitution, which outlines broad principles and values but does not detail how to achieve them in practice. Just as governments rely on legislation to translate constitutional values into enforceable rules, the risk management framework acts as the regulatory machinery that turns high-level AI policy into operational reality.

How Sector-Wide Profiles Work

Profiles are customized implementations of the Risk Management Framework tailored to specific sectors, use cases, or organizational contexts. They can operate at different levels of governance. Individual agencies can create their own profiles. However, a more efficient approach is for central authorities (e.g., ministries, departments) to create sector-wide profiles that subordinate agencies implement.

In the health sector, for example, the national health authority can create a comprehensive risk profile for the entire sector, rather than every hospital independently determining how to responsibly deploy AI-assisted diagnostic tools. This sector-wide profile would:

  • Specify who within healthcare facilities is accountable for AI decisions (e.g., IT directors or medical officers).
  • Identify common risks across all health facilities (e.g., misdiagnosis, privacy breaches).
  • Set clear testing requirements that AI vendors must meet (e.g., bias audits).
  • Mandate safeguards such as human clinical review for AI-generated diagnoses.

Once the central authority establishes this sector-wide profile, all public officials need to do is reference the profile when procuring or developing AI diagnostic tools.
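One way to picture what "referencing the profile" could look like in practice is a sketch like the following: the sector-wide profile encoded as a structured checklist, with a small routine that reports which requirements a vendor submission misses. All field names, requirements, and the submission itself are hypothetical illustrations, not an official schema.

```python
# A hypothetical health-sector profile organized by the four RMF functions.
HEALTH_SECTOR_PROFILE = {
    "govern": {"accountable_role": "medical_officer"},
    "map": {"documented_risks": ["misdiagnosis", "privacy_breach"]},
    "measure": {"required_tests": ["bias_audit", "accuracy_report"]},
    "manage": {"required_safeguards": ["human_clinical_review"]},
}

def check_submission(profile, submission):
    """Return the profile requirements a vendor submission fails to meet."""
    missing = []
    if submission.get("accountable_role") != profile["govern"]["accountable_role"]:
        missing.append("govern: named accountable role")
    for risk in profile["map"]["documented_risks"]:
        if risk not in submission.get("risk_assessment", []):
            missing.append(f"map: risk assessment for '{risk}'")
    for test in profile["measure"]["required_tests"]:
        if test not in submission.get("tests_provided", []):
            missing.append(f"measure: evidence of '{test}'")
    for safeguard in profile["manage"]["required_safeguards"]:
        if safeguard not in submission.get("safeguards", []):
            missing.append(f"manage: safeguard '{safeguard}'")
    return missing

# An illustrative vendor submission missing a bias audit and a safeguard
submission = {
    "accountable_role": "medical_officer",
    "risk_assessment": ["misdiagnosis", "privacy_breach"],
    "tests_provided": ["accuracy_report"],
    "safeguards": [],
}
print(check_submission(HEALTH_SECTOR_PROFILE, submission))
```

The point of the sketch is the division of labor: the central authority writes the profile once, and every facility and vendor is evaluated against the same machine-checkable requirements.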

Sector-wide profiles provide several advantages. First, AI experts need to be hired once, at the central authority level, rather than by every agency competing for scarce talent in an already constrained market. Second, all agencies follow the same rules, making it easier to ensure compliance. Third, AI vendors know exactly what requirements to meet when developing their tools. Finally, if an AI system causes harm, it is clearer whether the fault lies with the agency’s implementation of the profile, the vendor’s system, or the sector-wide profile itself.

Conclusion

While AI has the potential to transform public administration in Africa, current policy frameworks are creating operational paralysis instead of enabling action. To break this deadlock, central authorities (e.g., ministries, departments) must create sector-wide AI risk profiles, adopting and contextualizing key learnings from the Risk Management Framework. These ready-to-adopt guardrails are what will empower public officials to procure and deploy AI confidently. The first step is for governments to convene sector leads in health, education, and public finance to develop pilot profiles within the next twelve months. The era of debating what responsible AI should look like is over; it is time to build the operational tools that make it a reality.

About the author:

Steven Yirenkyi is a doctoral researcher at the Kwame Nkrumah University of Science and Technology, where his work centers on developing AI systems for data-scarce environments. His research lies at the intersection of artificial intelligence and climate science, with a focus on time-series forecasting of natural hazards. His work in AI governance advances practical approaches for responsible and trustworthy AI development and deployment. This article draws on his engagements with public officials working to initiate the deployment of responsible AI systems.
