Context

Following consultations with several experts and institutions working at the intersection of AI and society (broadly defined) on the African continent, the African Observatory on Responsible AI makes the submission below in response to the UN's call for written submissions for the Global Dialogue on AI Governance.

Our position is anchored in the myriad existing policy discussions and scholarship that have surfaced Africa's unique approach to AI development, as well as the distinct opportunities and challenges that AI brings for African people, which differ from those elsewhere. We recognise the importance of AI as a technology with immense potential and impact, but the direction of that impact depends greatly on whether we take actionable and collective steps to steer AI in the public interest. We believe the current geopolitical and global economic climate in which AI technologies are being developed risks steering AI toward benefitting a handful of powerful states, actors, and private interests. Africa faces severe risks of exclusion, both in the design, development, and deployment of the technology and in its governance. We define these risks in four categories, as articulated in the African Union Continental AI Strategy: environmental risks (the rising energy and water demands of AI data centres, alongside the risk of African countries becoming a dumpsite for used AI hardware); system-level risks, such as AI's potential to distort democratic processes or exacerbate inequality; structural risks, such as cybersecurity threats, bias and data privacy risks, intellectual property risks, and job displacement; and risks to African values (such as the displacement of indigenous knowledge resulting from the insufficient inclusion of the corpus of African languages and cultures in frontier AI models).

Guiding Questions

1. Proposed Outcomes

AI development trajectories vary significantly across regions, shaped by diverse socioeconomic and political conditions and uneven distribution of capabilities, resources, and capital. Given this, one meaningful outcome is that the Dialogue institutes processes for region-specific appraisal of evidence on key AI trends, such as public perceptions, progress in AI use, access and capabilities, and safety and testing practices. The UN Scientific Advisory Panel on AI is well placed to lead this evidence-gathering work.

Secondly, rather than pursuing granular, prescriptive rules, the Dialogue should focus on curating an inventory of best practices, flexible policy frameworks, and use cases, while building alignment on high-level principles such as fairness, transparency, and security. This approach supports interoperability and allows African nations to craft governance models suited to their socio-economic realities. Outputs such as jointly developed policy toolkits for responsible AI development would equip African developers and policymakers to build and govern AI at scale.

Third, the Dialogue should pursue clear commitments to red lines grounded in human rights, ideally through a global convention on AI and human and peoples' rights. Although challenging to implement in the current geopolitical climate, such protections are vital for a rapidly expanding global user base.

Fourth, a dedicated global fund for compute development in the Global South, particularly for fragile states, is needed to address inequality and mitigate environmental harm from unchecked compute expansion. A clear AI diffusion strategy is also essential to ensure equitable development and use, including support for locally relevant AI systems.

Fifth, discussions should recognise that AI futures are increasingly shaped not only by states but also by tech companies, international organisations, and community movements. Dialogue outcomes must therefore include commitments from these different stakeholders.

Finally, forum engagements must be geographically inclusive, with sessions hosted in locations genuinely accessible to African (and other Global South) participants.

2. Proposed priorities

Building safe, secure and trustworthy AI in Africa is crucial as AI systems are increasingly used in diverse social, economic and political contexts. For example, preserving electoral integrity and protecting African democracies is urgent given the rise of deepfakes, AI-facilitated gender-based violence, and misinformation.

AI capacity building for the African public, especially underserved communities and the public sector, must also take centre stage: a number of African countries have announced national AI strategies with ambitious goals that cannot be met until the public sector is effectively equipped with the tools and knowledge it needs. Such capacity-building efforts must also include AI literacy programmes for the general public.

The consideration of AI's social, economic, ethical, cultural, linguistic and technical implications is important due to the underrepresentation of African data, perspectives, value systems and languages in frontier AI models. African value systems that promote human dignity and positive relations with the non-human environment should be leveraged to guide development and governance processes.

The protection and promotion of human rights in the AI context is a multilayered agenda. An economically just AI future must recognise the contributions of Africans as innovators and data producers, as well as the continent as a source of critical minerals that AI systems rely on. This means implementing robust governance frameworks for safe working conditions, fair wages and career prospects, and addressing the use of child labour in mining. The inclusion of Africans’ data, knowledge and experiences in AI development should be coupled with a fair share in the consequent economic value.

The Dialogue should advocate reducing friction between different national and regional AI governance approaches by promoting interoperability based on shared principles.

3. Other emerging issues for consideration

Environmental dimension: While somewhat embedded in the "social and economic implications of AI", the current framing misses serious engagement with AI's impact on the environment, such as its growing and exploitative demands on energy, water, and land. Many Global South countries stand at the receiving end of these harms, given their reliance on nature-based livelihoods and the existing disproportionate distribution of climate change impacts.

Systemic and structural inequality and injustice: An explicit theme focusing on structural inequalities of AI is required. This should address issues like market dominance, monopoly and regional inequality in access to AI resources and use.

Monitoring and evaluation of tools, frameworks, and use cases: Post-deployment governance and decommissioning protocols are needed, including procedures to discontinue AI tools that do not deliver rights-based outputs. Robust evidence on AI performance against context-specific benchmarks is also essential; this aligns with the New Delhi Frontier AI Commitments, which aim to expand evidence on frontier AI use, safety, and testing benchmarks in Global South contexts. Strengthening their implementation should be a priority.

Literacy: While currently baked into “AI capacity-building”, expanding AI literacy is an underappreciated agenda. It must go beyond skills development to empowering citizens to address power and inequality, because only citizens with adequate knowledge of AI technologies can assert their rights, hold governments and corporations accountable, and participate meaningfully in governance dialogues.

Safe, secure, and trustworthy AI from an African perspective: This means protecting people from harm by building resilience into both the AI infrastructure and society at large, rather than treating security as individual responsibility. For example, measures to protect vulnerable populations should rely not only on the safety of AI systems but also on upscaling traditional crisis response protocols to prepare for AI-facilitated harms.

4. Governance gaps unique to Africa

Across the African continent, governance gaps in AI are creating both acute challenges and meaningful opportunities. At the moment, the most significant challenge is the absence of a coherent institutional framework for AI governance encompassing enforceable standards, guidelines, and regulation. Although numerous national and regional frameworks exist, their lack of clear, enforceable mandates at the continental level risks regulatory arbitrage and stalls implementation, accountability, and resourcing. The current approach often requires initiatives to start from scratch, build up, run out of funds, and collapse, only for the next effort to repeat the cycle. Relatedly, there is also a "pilot culture", in which promising AI solutions rarely progress beyond proof-of-concept, with many innovators lacking the sandboxes and responsible pathways needed for scaling.

Another challenge is that existing global AI governance frameworks assume similarity in AI governance challenges across regions. In reality, key domains (data, privacy, computation, model development, and deployment) require distinct, case-by-case approaches that reflect regional and national realities. Reflection on how to maintain interoperability practically in the context of national realities is needed.

These gaps also create opportunities for Africa. One is to establish a governmental organ or institution that embeds multistakeholder AI governance continuously, rather than convening only when issues arise. A coordinated structure anchored in existing institutions (e.g. AUDA-NEPAD, Smart Africa), or a purpose-built body, could unify fragmented efforts, pool funding, and provide the institutional home that AI governance currently lacks in Africa. Any such structure must be multistakeholder and interdisciplinary.

There is also an opportunity to build a robust continental roadmap that accounts for digital transformation alongside AI adoption, since the continent is advancing both simultaneously. If designed well, this framework could enable African contributions to global AI governance to be distinct, context-driven, and genuinely influential.

5. Advancing International Cooperation

The UN's role in AI governance should extend beyond convening to actively shaping norms, including mandating minimum standards for AI development and deployment that are globally acceptable and enforceable. The Global Dialogue is well positioned to act as a bridge-builder, ensuring international cooperation accelerates inclusive development that closes the global AI divide.

A few steps that can advance this goal are:

  • Operationalise existing norms: Significant work has already been done on norm development. The Dialogue should now focus on concrete implementation in context, such as drawing on advisory input from the African Commission on Human and Peoples' Rights and partnering with the Network of African National Human Rights Institutions (NANHRI) to translate norms into country-level oversight and accountability.
  • Champion interoperability: The Dialogue should champion interoperability to reduce fragmentation between national and regional AI governance approaches. Promoting regulatory interoperability based on shared principles can prevent a fragmented landscape that would hinder African businesses seeking to scale across borders.
  • International cooperation must catalyse capacity building in Africa: The UN can convene governments, the private sector, and civil society to mobilise investment in digital and data infrastructure, AI skills and talent development, and access to open-source tools, open data, and computing resources.
  • Participation must be genuinely inclusive and multistakeholder: The Dialogue should move beyond symbolic inclusion and place African policymakers, entrepreneurs, academics, and civil society at the centre of shaping the governance agenda, while amplifying African initiatives already in place.
  • Investment should be mobilised to support what is already working on the continent, such as the Smart Africa AI Council.
  • Finally, a global framework for redress and remedy is essential, given that AI harms cross borders. Even bolstering interim, non-governmental measures, such as the AI Risk Mitigation Fund, could meaningfully strengthen protections for affected communities worldwide.

6. Existing initiatives and partnerships relevant to the Global Dialogue

The Dialogue should, broadly, build upon frameworks and the growing body of scholarship centring public interest and emphasising a decolonial approach to AI development and governance in Africa.

In terms of frameworks, the Dialogue should engage governance instruments like the African Union Continental AI Strategy and the Smart Africa framework, which both promote interoperability in African digital development. Subregional frameworks, particularly those from non-English-speaking regions of the continent, deserve engagement at the global level. This includes, for example, the first francophone guidelines for ethical, inclusive, and responsible AI for francophone Africa, developed by six West African countries (Benin, Burkina Faso, Côte d'Ivoire, Senegal, Guinea-Conakry, and Mali) in collaboration with AUDA-NEPAD and research institutions (IPAR and CITADEL).

Other frameworks that should be engaged include South-South collaboration networks that promote equitable AI development by addressing frontier AI market dominance and combining market integration with rights protection, such as the Africa–Asia AI Policymaker Network. The African Observatory on Responsible AI's work on AI governance alignment in the AMET region provides necessary background to this.

Strengthening this approach is an emerging body of scholarship on decolonisation, grounded in real-world evidence and use cases; for example, a recent inquiry into allegations of unfair racial discrimination and procedural unfairness by digital and AI medical schemes in South Africa. The report shows that dominated parties are often incorporated into a system through corporate power, beyond formal state-to-person relations. Hence, a governance regime that focuses mainly on the state risks mis-specifying the governance target, which is to secure commensurate transnational accountability from large corporate entities.

Regarding implementation coordination and institutional support, African research labs and start-ups are part of consortium-based initiatives that aim to foster collaboration, reduce silos, and pool resources amongst a diverse group of innovators working on context-driven AI solutions. One such critical mechanism is the AI4D consortium, which supports more than twenty projects, labs, and research clusters in eighteen African countries to develop responsible AI solutions for critical-sector deployment. Locally led, bottom-up communities such as the Deep Learning Indaba and Masakhane have been key to setting the research and innovation agenda in these contexts.

7. Structure for stakeholder contributions

To ensure a truly inclusive, multi-stakeholder process, the AI Dialogue should take a regional approach, while recognising that even such broad bundling of countries is insufficient to mainstream contextual differences in how AI is developed across countries and cultures.

States should be encouraged to put forward regional positions on AI that reflect their own internal consultation processes, which can then be aggregated into a regional AI development agenda. Even non-binding statements and positions have global effects beyond the regions they originate from.

The current climate makes it increasingly critical for the Global Dialogue to include private actors in AI development (including founders and funders), so that messaging around rights-based AI governance is well distilled to the builders of the technology.

Representatives of grassroots startups, civil society organisations, and community movements must also be well represented in such dialogues to ensure public interest issues are adequately addressed.

8. Representation at the Global Dialogue

At the moment, the voices of local movements and community AI builders from Africa are not well represented in AI governance discussions. Grassroots organisations, particularly those working at the intersection of human rights, tech justice, and community advocacy, are often marginalised in global dialogues. Other underrepresented constituencies include indigenous communities, speakers of low-resource languages, women and gender-diverse technologists from the Global South, workers in the AI data supply chain (such as data labellers and content moderators), small and medium enterprises building AI locally, and young people who will inherit the long-term consequences of today's governance decisions. Civil society representation is also minimal.

The Dialogue should explicitly integrate the perspectives, expertise, guidelines and policy frameworks developed in Africa's diverse linguistic contexts, including Francophone and Lusophone African countries, providing further contextual nuance and local realities within regional conversations.

Adequate inclusion requires deliberate, structural interventions through:

  • Resourced participation: Provide travel funding and logistical support so representatives from under-resourced regions can meaningfully attend and contribute to global forums, rather than relying on self-funded participation that privileges well-established actors.
  • Decentralised convenings: Host consultations and working sessions across African cities and other Global South locations, ensuring participation is truly accessible. Hybrid formats with strong regional anchoring can broaden reach.
  • Partnership with existing networks: Collaborate with established African governance entities such as the AUDA-NEPAD, African Union, Smart Africa, and the African Commission on Human and Peoples' Rights (ACHPR), regional civil society coalitions, and university networks to channel diverse voices upward.
  • Integration of multilingual inputs: Integrate policy frameworks and context-driven inputs developed in non-dominant languages (e.g. in Francophone and Lusophone African countries), such as the Guidelines for Ethical, Inclusive and Responsible AI for Francophone Africa.
  • Accountability mechanisms: Track and publicly report on whose input shaped final decisions, creating pressure for better inclusion mechanisms where gaps persist.

9. Innovative engagement formats

One useful strategy is a consortium-based approach, in which the AI Dialogue is coordinated by a central (UN-based) host entity that sits within a larger network of organisational representatives across regions and sectors. A good example here is the Nigeria AI Collective, which convenes a cluster of organisations with complementary functions, including policy advocacy, research and knowledge sharing, AI innovation and design, international collaboration, and capacity building. This approach includes sub-national engagement platforms and sessions, which are ready-made channels for meaningful contribution. Existing local organisations are well positioned to issue calls for proposals, surface the most pressing issues, and drive strong participation.

A similar model could work effectively for the AI Dialogue. The growing network of AI centres across African countries offers a valuable entry point for substantive, grounded engagement. Further, the work of AUDA-NEPAD and the Smart Africa AI Council, which convenes sessions with key government stakeholders on AI-related issues, including stakeholders who are not always involved in technology conversations, should be replicated. Joining forces with these existing efforts rather than duplicating them would make the Dialogue both more inclusive and more impactful.

Finally, the AI Dialogue should embed within it a consultation format that enables stakeholders to request regional and global dialogues on pressing governance issues (e.g., governance of agentic AI) on an ongoing basis.

10. Examples of policies and practices that promote AI governance

Several African-led approaches, policies, and practices are shaping AI governance in promising ways:

  • The African Union's Continental AI Strategy and the accompanying work of AUDA-NEPAD provide a coordinating framework for harmonising AI policy across member states, while engaging ministries and agencies often overlooked in technology discussions.
  • Smart Africa and its AI Council bring together governments, private sector, and civil society to align on practical priorities, from infrastructure to skills development.
  • The Network of African Data Protection Authorities (NADPA) convenes national data protection authorities to ensure that data protection frameworks across the continent do not stymie the interoperability and regulatory harmony needed to advance continent-wide digital progress and prevent policy arbitrage.
  • The AI4D consortium supports a network of research labs, SMEs and university institutions to conduct fundamental research and develop responsible AI solutions for critical sector deployment.
  • Nigeria's AI Collective offers a strong consortium model, too. Built as part of the country's AI strategy implementation, it organises academia, developers, funders, and civil society around clear pillars, creating a structured pathway for multistakeholder input into governance.
  • Community-driven initiatives such as Masakhane, a grassroots research community advancing natural language processing for African languages, demonstrate how bottom-up, open-source approaches can directly support linguistically diverse and context-driven AI solutions.

Beyond these, there are valuable lessons from the ACHPR and the Network of African National Human Rights Institutions (NANHRI), which can anchor AI governance in regionally rooted human rights frameworks. Existing regulatory and operational sandboxes in sector-specific areas (e.g. fintech) in countries like Kenya, Egypt, and South Africa offer practical models for implementing regulatory sandboxes for AI products.

Overall, rather than creating new regional instruments, we believe an effective inclusion mechanism for AI governance in Africa will likely emerge from strengthening, resourcing, and connecting existing platforms while ensuring their insights shape global norms.

Consultation Attendees

  • Abiodun Modupe (Department of Computer Science, University of Pretoria)
  • Akintunde Babatunde (Centre for Journalism, Innovation and Development - CJID)
  • Alison Tilley (Judges Matter)
  • Ana Florido (Global Center on AI Governance)
  • Anye Nyamnjoh (University of Cape Town)
  • Ayantola Alayande (Global Center on AI Governance)
  • Baker Birikujja (Personal Data Protection Office, Government of Uganda)
  • Chinasa T Okolo (Technecultura)
  • Damien Hanyurwimfura (College of Science and Technology, University of Rwanda)
  • Daphine Nkunda (Global Center on AI Governance)
  • Dirk Brand (University of Stellenbosch School of Public Leadership)
  • Dorcas Tsebee (Meta)
  • Eileen Carter (South African Human Rights Commission - SAHRC)
  • Emma Ruttkamp-Bloem (Department of Philosophy, University of Pretoria)
  • Fiona Chawana (Global Center on AI Governance)
  • Fola Adeleke (Global Center on AI Governance)
  • Gbenga Jayeola (Global Center on AI Governance)
  • Gelan Ayana (The Global South Artificial Intelligence for Pandemic, Epidemic, Preparedness & Response - AI4PEP)
  • George Orjih (mymdoc)
  • Hilda Manzi (TomorrowNow)
  • Jake Effoduh (Lincoln Alexander School of Law, Toronto Metropolitan University)
  • Kebene Wodajo (University of St. Gallen)
  • Laetitia Badolo (NIYEL)
  • Leah Junck (Global Center on AI Governance)
  • Mehdi Taher (Meta)
  • Melanie George (Global Center on AI Governance)
  • Mohamed Gele (Somalia Federal Ministry of Fisheries and Blue Economy)
  • Nasubo Ongoma (Qhala)
  • Olayinka David-West (Lagos Business School, Pan-Atlantic University)
  • Pren-Tsilya Boa (Google)
  • Rachel Adams (Global Center on AI Governance)
  • Ridwan Oloyede (Tech Hive Advisory)
  • Sade Dada (Meta)
  • Samuel Segun (Global Center on AI Governance)
  • Sarah Sakha (Meta)
  • Selamawit Engida Abdella (Global Center on AI Governance)
  • Sharon Essilfie (Global Center on AI Governance)
  • Shyline Muthoni (Global Center on AI Governance)
  • Simisola Akintoye (Northumbria University School of Law)
  • Solomon Chinedu (mymdoc)
  • Terry Moraa (Global Center on AI Governance)
  • Victor Odumuyiwa (Department of Computer Science, University of Lagos)
  • Yohannes Ayalew (Three Generations of Digital Human Rights Project, The Hebrew University of Jerusalem)
  • Yossra Kallali (NIYEL)
