Artificial intelligence is reshaping economies, politics, and societies worldwide, and Africa is no exception. While AI promises benefits for health, education, agriculture, and economic growth, this report highlights how its risks are especially acute on the continent given fragile institutions, limited infrastructure, and geopolitical dependencies.

Key risks identified include:

  • Malicious use: deepfakes and AI-powered disinformation threaten elections and social cohesion; AI-enabled surveillance raises concerns for human rights; cybercrime and online gender-based violence are growing.
  • Malfunction: reliability issues from models trained on Western data, bias in AI tools, and risks to critical sectors like healthcare and education.
  • Systemic risks: labour disruptions in sectors such as business process outsourcing, worsening environmental pressures from compute and data centres, and escalating e-waste.

Despite the African Union’s Continental AI Strategy, national-level efforts remain fragmented. Only a handful of states (Kenya, Ghana, Morocco, Rwanda, South Africa) show meaningful activity on AI safety, and no African country currently hosts a dedicated AI Safety Institute.

The paper advances a five-pillar action plan to build a robust African AI safety ecosystem:

  1. Grounding AI governance in human rights to protect the most vulnerable.
  2. Establishing an African AI Safety Institute to lead research, testing, and policy innovation.
  3. Expanding public AI literacy through education and targeted campaigns.
  4. Building early warning systems with multilingual benchmarks for at least 25 African languages.
  5. Convening an annual AU-level AI Safety & Security Forum to align policy, coordinate responses, and amplify Africa’s voice in global governance.

By embedding African perspectives into global AI safety frameworks, the agenda seeks to safeguard communities, assert technological sovereignty, and shape a fairer, more resilient AI future.

This paper was launched during a webinar, AI Safety: African Perspectives, held on 9 September 2025.

Key takeaways from the webinar:
1. AI Safety is a Broad, Holistic, and Contextual Concern for Africa: The discussion emphasized that AI safety goes beyond narrow technical or "superintelligence" fears, encompassing a broad range of issues, from democratic risks and human rights to the socioeconomic and environmental impacts specific to the African continent. It requires an interdisciplinary approach that addresses complex local nuances such as resource scarcity, cultural context, and existing governance challenges, and cannot be confined to a single silo.

2. The Risks of AI Misuse are Immediate and Significant, Undermining Global Assumptions: The work of Mino Health AI Labs highlighted a crucial and often overlooked risk: an AI system trained on biological data could be used to design novel, highly toxic agents, directly challenging the assumption that frontier AI systems are not yet capable of creating biological warfare agents. This demonstrates that developers, even those building AI for good, must proactively address dangerous misuse cases ("red teaming"), because global actors may already be doing so, making purely reactive measures "too late".

3. Continental Coordination and Institutional Capacity are Essential for Safety and Economic Opportunity: Africa needs to build robust local institutional capacity, such as the proposed African AI Safety Institute, and to increase public literacy in order to manage AI risks effectively. AI safety is also tied to economic market access: by failing to establish appropriate safety and governance standards, especially at the regional level, African countries risk being excluded from global markets and missing out on vast economic opportunities, making investment in safety a commercial imperative.
