The African Hub on AI Safety, Peace and Security is the first initiative of its kind on the continent focused on addressing the real-world risks, harms, and opportunities of artificial intelligence through an African lens.

Established under the AI for Development (AI4D) programme, funded by the UK Foreign, Commonwealth & Development Office (FCDO) and Canada’s International Development Research Centre (IDRC), the Hub is led by the University of Cape Town (UCT) in partnership with the Global Center on AI Governance.

The Hub will advance research on AI safety and governance, build local capacity through interdisciplinary training and mentorship, and influence policy to ensure African priorities are represented in global AI safety frameworks.

By connecting researchers, policymakers, and innovators, the Hub aims to make AI development in Africa safe, inclusive, and aligned with the continent’s values, strengthening Africa’s role as a key player in shaping the global AI future.

Global South Network for Trustworthy AI

Speakers at the launch of the Global South Network for Trustworthy AI, India AI Impact Summit

The Global Center on AI Governance is a founding partner of the Global South Network for Trustworthy AI, an independent, civil society-led initiative that brings together organisations across the Global South to build locally grounded AI safety infrastructure reflecting the realities of communities most affected by AI deployment. The Network comprises five organisations from across Africa, India, and Latin America: the Global Center on AI Governance (South Africa), CERAI at IIT Madras, Digital Futures Lab, International Innovation Corps (India), and ITS Rio (Brazil). The organisations are united by a shared commitment to AI safety rooted in local realities.

The network was created in response to a persistent gap in the global AI safety ecosystem. While AI systems are being deployed at scale across the Global South, mediating access to welfare, healthcare, employment, and public services, the institutions shaping AI safety standards and governance remain overwhelmingly based in the Global North. The network exists to ensure that the voices, contexts, and priorities of the Global South are embedded in how AI safety is defined, researched, and governed.

Through collaborative research, community-led impact assessments, and knowledge sharing across regions, the network generates contextualised evidence on AI-related harms, tests and adapts safety tools across diverse linguistic and cultural settings, and creates structured feedback loops between global safety institutions and on-the-ground deployment realities.

The network operates on the conviction that safety is the foundation of inclusive AI development. By strengthening independent civil society capacity across the Global South, the network aims to ensure that countries and communities can benefit from AI on their own terms, without bearing a disproportionate share of its risks.
