Research Areas
At the Global Center on AI Governance, our research explores how artificial intelligence can serve people and societies fairly. We study the policies, systems, and values that shape how AI is designed, governed, and used across the world.
Our goal is to generate evidence and insights that help governments, researchers, and communities build responsible, inclusive, and sustainable AI ecosystems. Through practical research and collaboration, we work to ensure AI supports human rights, social justice, and equitable development.
AI and Inclusive & Sustainable Development
AI holds transformative potential for addressing the world’s most pressing challenges, from food security and health to climate resilience and education.
GCG’s work on AI and Inclusive & Sustainable Development investigates how technology can drive equitable growth while avoiding new forms of exclusion or environmental harm. We focus on practical pathways to integrate AI into development agendas in ways that advance the Sustainable Development Goals (SDGs). Our research explores local innovation ecosystems and responsible AI scaling across key domains, including health, education, agriculture, service delivery, and financial inclusion.
By linking evidence to policy, we help governments, development partners, and industry actors design responsible innovation strategies and practices that leave no one behind.
This area embodies the Global Center on AI Governance’s belief that AI must serve people, planet, and prosperity together.
Responsible AI Governance
Effective and ethical governance is the cornerstone of just, equal, and accountable digital futures.
Our work on Responsible AI Governance explores how states, institutions, and multilateral bodies can design adaptive regulatory systems and dynamic policy environments that promote innovation while protecting people and societies. Our work in this area is expansive, covering deep dives on specific regulatory approaches and policy areas, as well as global comparative research on AI governance and country readiness assessments. Through tools like the Global Index on Responsible AI (GIRAI), policy toolkits, and capacity-building initiatives, we assess how governance frameworks translate into real-world accountability and inclusion.
This area also serves as the foundation for GCG’s training and advisory work with governments, regulators, and development agencies, and includes research and advisory on effective South-South cooperation on AI governance. Our goal is to help shape responsive, transparent, and globally equitable models of AI oversight that reflect diverse social values and regional contexts.
AI Safety, Peace and Security
As AI adoption grows and AI technologies become more powerful, their risks, from misinformation and cyber threats to geopolitical instability, demand urgent attention from diverse global communities.
GCG’s research on AI Safety, Peace and Security addresses how societies across Africa can anticipate, prevent, and mitigate AI-driven harms. We study emerging threats such as deepfakes, tech-facilitated gender-based violence, and election interference, while exploring opportunities to harness AI for peacebuilding and humanitarian action. Through the AI4D African Hub on AI Safety, Peace and Security, we are working to connect researchers, policymakers, and technologists to build risk awareness and institutional resilience across the continent.
Our work also contributes to global debates on AI safety, advocating for inclusive approaches that recognise the unique vulnerabilities and capacities of Africa and the Global Majority. We aim to ensure that AI contributes to peace, stability, and shared human security.
AI Ethics
Our work on AI ethics is both exploratory and normative.
GCG’s research on AI ethics advances new frameworks that are globally informed yet locally grounded, drawing on African philosophies such as Ubuntu to enrich global debates on fairness, justice, and responsibility in technology. We examine how ethical principles are embedded (or neglected) in the design and deployment of AI systems, and how diverse moral traditions can inform better governance. Our initiatives include the development of an African AI Ethics Board, academic research on communitarian ethics and decolonial approaches, and curriculum design for universities and policymakers.
This area bridges philosophical reflection with practical guidance, helping institutions operationalise ethics in real-world contexts. Through dialogue, research, and education, we promote an ethics of solidarity and inclusion in the age of AI.
AI and Society
At the heart of just and equitable futures are the people and societies that AI technologies should be designed to serve.
AI is transforming how people live, work, learn, and relate to one another. GCG’s research on AI and Society investigates these profound shifts, from automation and labour disruption to algorithmic bias, social trust, and public participation. We study how AI systems intersect with human rights, societal values, and cultural identity, and how societies can ensure that AI benefits rather than marginalises communities. Our projects include surveys of public perception, studies on AI and inequality in Africa, and initiatives to build AI literacy and civic engagement.
Through partnerships with educators, journalists, and community organisations, we seek to make AI understandable, accessible, and accountable to the people it affects.
Data Governance
Data is the foundation of AI, and its governance determines whether AI serves the public good or deepens inequality.
GCG’s research on Data Governance focuses on the rules, institutions, and values that shape how data is collected, shared, and protected. We analyse issues of data protection, privacy, and digital rights; data sovereignty and localisation; and the balance between innovation and individual autonomy. Through comparative studies and toolkits, we support the design of rights-based and development-oriented data ecosystems that strengthen trust and accountability. Our projects explore how ethical and open data practices can enable innovation while safeguarding people’s rights.
This area provides the normative and institutional backbone of a just digital future.