
Africa and the Big Debates in AI
For some years, the AI community has debated how seriously we should take the risk of AI systems usurping human control. Experts disagree widely about the potential for a superintelligence to go rogue and remove human oversight and control. Yet in recent months this debate has taken on greater urgency with the release of AI models that seemingly demonstrate new capabilities and human-like behaviour.
In response to concerns about the potential capacity of frontier AI technologies to exhibit (or experience) emotions, leading AI company Anthropic has appointed an AI welfare researcher to explore whether current AI capabilities could lead to fully conscious agents. For the AI welfare community, the risk is that we misunderstand the sentience of AIs, either mistreating an agent that is morally meaningful, or ascribing moral status to an agent that has none. For the AI safety community, the concern is more about control - whether human societies can reliably align, oversee, and constrain systems whose power may soon exceed our own. According to Nobel prize-winning AI pioneer Geoffrey Hinton, the chances of this happening - the chances of what is colloquially called a “robot takeover” - are more than 1% and less than 99%. While on first reading this seems to say nothing at all, from a risk perspective his message is clear: fast-improving AI capabilities constitute a non-negligible risk, one that surpasses a precautionary threshold and therefore warrants ongoing monitoring and contingency planning should an incident occur. The likelihood of a “robot takeover”-like event is extremely low, but it is not zero, and its consequences could be hugely significant; some precautionary planning is therefore justified.
Despite these differences, most experts agree that once a certain threshold of artificial intelligence is reached - sometimes described as “superintelligence” (a term whose validity and ethics are much debated), where artificial intelligence significantly surpasses human intelligence, even at a collective level - it will be nearly impossible to regain human control. Mustafa Suleyman, now CEO of Microsoft AI, calls this the containment problem. For Suleyman, Pandora's box has already been opened.
So, why do these debates matter for the rapid expansion of AI in Africa, and for the AI for good and AI for development movements?
AI in Africa is at a critical inflection point. Tech communities are burgeoning in cities across the region, from Dakar to Dar es Salaam. African leaders are taking seriously the need to build domestic AI capacities and ecosystems to drive local innovations, and to set in place guardrails to ensure AI serves the public interest. Data and AI sovereignty are key targets for African countries. And investment in African AI is gathering momentum, with new funding flowing into start-ups, labs, and research centres that are beginning to shape a distinctly African AI landscape.
Yet the relevance of global debates on AI safety and welfare to Africa is far from straightforward. These debates, alongside the movement to build so-called superintelligence, have global impacts and implications. The accelerating deployment of frontier AI systems within key domains such as health, education, and public governance across the African continent raises urgent questions about institutional resilience, policy foresight, and strategic autonomy. It also raises questions about how efforts to ensure AI systems are designed and built to align with human values - a key strategy for mitigating the risk of loss of human control - incorporate multicultural perspectives and heritage, and about how the drive to build superintelligence places ever greater demands on the global AI supply chains that rely on African people, land, and resources.
The global debate on AI safety ultimately turns on questions of human autonomy - whether people will retain meaningful control over the technologies that shape their lives. For Africa, this is not an abstract concern: histories of colonialism, dependency, and technological marginalisation make the preservation of agency in the digital era especially urgent. Ensuring that Africa can steer, adapt, and govern AI on its own terms is central to avoiding a new wave of external control disguised as innovation.
In this series on Africa and the Big Debates on AI, published by the African Observatory on Responsible AI, we set out to explore what debates on frontier AI safety mean for Africa’s development agenda and the Continent’s growing AI ecosystems. Our series confronts a number of key issues:
- What do large-scale investments in frontier AI - investments in the billions of dollars, far outweighing the GDP of most African countries - mean for the Continent as it works to “catch up” and “leapfrog”? Will this create a widening AI divide, and what will its consequences be for global inequality and international cooperation? Can smaller, more frugal AI systems also transform African societies for the better?
- How can we anticipate and mitigate the climate impacts of increasingly energy-hungry AI, impacts that are often felt most acutely across southern regions and that affect already vulnerable communities?
- What governance challenges does the deployment of frontier AI systems across Africa pose, and how can AI sovereignty be maintained in the face of growing dependency on foreign-owned technology for the delivery of critical services?
- What are the risks and challenges - and for whom - of increased pressure on the global AI supply chain, which depends on the extraction of critical minerals from African soil and on the hidden labour of content moderation, the cleaning and labelling of AI training data, and reinforcement learning from human feedback?
- What African priorities and concerns - including critically existential concerns about exclusion and inequality - become obscured in the fear of an AI takeover, which arises from a distinctly Western imaginary?
- What does the ascription of morality, personhood and even citizenship to AI agents mean for the fight for equal recognition and enjoyment of rights for people around the world, particularly those for whom these have been systematically and historically denied?
- How do we ensure focus is not lost on the opportunity to utilise AI to advance scientific research and technological solutions in critical areas like climate adaptation, drug discovery and agriculture?
While conversations in the Global North centre on the spectre of superintelligence and the containment problem, Africa faces a distinct set of urgent challenges which remain critically linked to the aspirations for AI advancement in the US, China and other rich economies. To understand what is truly at stake for Africa, we must situate these global anxieties within the continent’s own realities and aspirations. It is high time that conversations around AI safety were more meaningfully imbued with regional insights and reflections.
