The discourse around the future of artificial intelligence (AI) in the past several years has been dominated by narratives of a singular trajectory toward artificial general intelligence (AGI). AGI is described as an AI that can perform a range of cognitive tasks at a level comparable to expert humans, an aspiration that has shaped the mission of leading industry actors. It has been hypothesised that once AI reaches human level, it will soon be followed by “artificial superintelligence” (ASI): intellect that exceeds the best human capabilities in virtually every field, including scientific innovation, overall wisdom and even social skills.

AGI and ASI are claimed to hold significant potential for supporting innovation that would address critical global challenges. Particularly for Africa, AI is often presented as a promising catalytic technology that can promote the continent’s sustainable development. However, the race toward AGI and ASI has significant environmental implications, including the depletion of fresh water sources for cooling large data centres, excessive energy consumption by large models, and increased operational carbon emissions into the atmosphere.

We must also interrogate the political realities that shape digital innovation, contextualise the sociotechnical environments in which these technologies would operate, and problematise the elevation of particular perceptions of “intelligence”, informed by troubling social hierarchies, into a benchmark to surpass.

This article explores AGI and ASI as speculative continuations of knowledge and technology for development efforts, and considers how the ideals we embrace while envisioning the future of AI today will shape its place in Africa’s socioeconomic, political, and technological futures.

The Purpose of Development [Is What It Does]

Development was conceived as a global project after the Second World War, when the US President at the time, Harry S. Truman, reduced the diverse and complex realities of the Global South to a condition of “underdevelopment”. Truman asserted that greater production was the key to peace and prosperity, which he posited required wider and more vigorous application of modern scientific and technical knowledge. In doing so, he established a myth that idealised a linear path toward Western industrialised modernity, one that has shaped development pathways ever since. The pursuit of development entrenched dependencies that made Africans reliant on the West to fulfil their most basic needs, while continuing to extract resources from the continent, effectively underdeveloping Africa.

Otto Ullrich highlights that this narrative masked how Western prosperity was attained through plunder and the transfer of costs, and created the illusion that restructuring the world into predictable, manipulable systems was a condition for human happiness. He notes that these efforts have not only been futile, but have often been pursued at the expense of the environment, showing that Western technologies are not well suited for the future as far as their long-term ecological sustainability is concerned.

Despite the stated intentions to move away from imperialist exploitation toward global knowledge and resource sharing, the negative consequences of development are intrinsic to the values it was founded upon. As cybernetician Stafford Beer observed, the true purpose of a system is what it does, not what it intends to do.

The depoliticisation of structural injustices and the proposal of technological fixes, a central development doctrine, fail to challenge the power imbalances that benefit wealthy states and development actors. This lens views Africa’s challenges not only as material deficiencies, but as a lack of the ‘appropriate’ knowledge needed to address them. It casts the kinds of knowledge that are valued and organised into empirical information for tackling issues such as poverty, healthcare and infrastructure as inherently virtuous.

ICTs for Development (ICT4D), a key movement in development popularised at the end of the 20th century, was presented as novel but has mirrored these patterns of external, top-down intervention. Clapperton Mavhunga critiques how global innovation narratives have portrayed Africa as a technological void or problem space, erasing African knowledge and positioning the continent as a mere recipient of solutions, expertise and technology from elsewhere. In addition to their mismatched socio-cultural, political, and epistemic priorities, these efforts establish new hierarchies based on ‘superior’ forms of knowledge.

Similarly, AI and its projected iterations can be instrumentalised to reinforce this persistent development paradigm, which frames Africa within a constructed path toward modernity.

Such criticisms, which reveal dominant development paradigms as colonial continuation, are nevertheless confronted with the desirability of development in Africa as a justified aspiration for membership in global society. Sally Matthews suggests that reconciling these tensions demands more than decolonising imaginations: we must disentangle our association of dignified livelihoods with the industrialised West and acknowledge a range of African experiences, including those of people who seek materially comfortable lives.

The excitement around AI for development in Africa is matched by anxiety that it will be yet another technological revolution that bypasses the continent. Yet Africans have always been part of the global history of technological transformation, from forced labour during the industrial revolution to pioneering mobile money and payment systems. The ‘AI era’ is no different, with Congolese children mining minerals essential for digital devices, data labellers moderating content in Kenya, and technologists innovating across the continent. The core issue is not exclusion but the positionality of Africans within these historical processes, which is disproportionately concentrated in parts of the value chain that are tedious, underpaid, and oftentimes harmful.

The pursuit of more advanced AI without radically challenging these norms is predicated on the exploitation of Africans so that others may enjoy its benefits.

Whose Intelligence?

Intelligence has historically been constructed as rational, productivity oriented, and intertwined with eugenics and racial capitalism. Its language was wielded for colonial control, deeming those who did not conform to Western norms of rationality and productive subjectivity unintelligent. Even concepts such as cultural intelligence prioritised Western epistemologies, embedding instrumental understandings of intelligence as a tool for governance and productivity.

African worldviews, on the other hand, perceive development and intelligence holistically. Knowledge is not segmented into distinct disciplines; instead, it is woven into networks of ideas that children learn through their participation in socio-cultural and economic life. Across Sub-Saharan Africa, intelligence is described in social terms, including ideas of wisdom, reliability and responsibility for others. In Zambia, for example, intelligence is understood not only as cognitive aptitude, but also as social responsibility. There is a common emphasis on experience-based practical knowledge, with greater attention to moral and social factors than to the abstract and theoretical aspects emphasised in Western perceptions of intelligence. Comparative studies indicate that Indigenous African assessments of intelligence are often contextual and build on communal values, while Western tests tend to focus on decontextualised reasoning, which may undervalue locally relevant ways of knowing.

AI has inherited the rationalist traditions that privilege disembodied, abstract reasoning as the model of intelligence, reproducing technocratic control and instrumental productivity. Cross-cultural research exploring which humans large language models (LLMs) mimic has revealed that their performance on cognitive and psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries. Beyond its culturally narrow re-presentation of human intelligence, AI is also confronted with Michael Polanyi’s conception of the tacit dimension, that “we know more than we can tell”, as well as personal knowledge, which stresses the critical role of the knower in knowledge-making and perceptions of intelligence.

The anthropomorphisation of intelligence also overlooks broader perspectives such as Indigenous science, which Robin Wall Kimmerer, an Indigenous Potawatomi botanist, describes as “listening and translating the knowledge of other beings”. Recognising the agency of non-human parts of nature and reciprocally learning with them contrasts with Western approaches, which, even in ethical and responsible AI discussions, centre human control without adequately asking which humans’ priorities are being considered and whether they should supersede all other parts of nature, life and being.

The tension is twofold: not only are African and Indigenous perspectives ignored in designing the modes of intelligence AI aims to replicate, but efforts to surpass a benchmark based on Western conceptions risk eroding these other ways of knowing. AI systems that prioritise rationalism and productivity can threaten community, social responsibility and living in harmony with nature by perpetuating isolation, exploiting vulnerable groups and degrading the environment.

If the future of AI is defined solely through exceeding the forms of intelligence that have informed its early foundations, this “super” intelligence will paradoxically be unintelligent when measured against social, experiential and embodied understandings of knowledge.

Speculating Superintelligence

Karen Hao highlights how the lack of a scientifically agreed-upon definition of intelligence creates a malleable benchmark that big-tech companies use to measure their progress and raise funds for the race toward AGI. She notes that the amorphous descriptions of AGI are largely based on aspirations and unfounded dreams, with no consensus on the nature of these technologies or how close we are to achieving them. Nevertheless, the race is creating a mega-industry whose energy and resource requirements overreach planetary limits and exacerbating the wealth inequality between big-tech companies and the rest of the global economy.

Today, AGI and ASI exist primarily in an imaginary and discursive sense. Yet Michel Foucault’s insight that discourse constructs the material world and social realities it describes demands that we study these descriptions as narratives that shape how we direct our monetary and attention economies toward realising the future of AI.

Scholars have warned that claims about being close to achieving AI that parallels human intelligence are based not on rigorous scientific evidence, but on tech leaders’ financial interests. These ideologies have already permeated speculative policymaking, resulting in significant monetary and environmental costs without substantial benefits to the general public.

The investment in a singular, predetermined future is what Ruha Benjamin calls an “imposition of dominant imagination” that presents itself as universally desirable. Her call to embrace collective imagination for plotting a future where everyone can flourish invites alternative speculations.

Expansive Imaginations on the Future of AI

Rebecca F. Kuang reminds us that speculative literature is an experimental space for envisioning the worlds that we want to live in and the worlds we may find ourselves in if we are not careful. Science fiction in particular serves as a “conceptual incubator” for future technological advancements, and has both anticipated and shaped the trajectory of AI development.

Dominant imaginaries of AGI and ASI, including those of the multinational technology companies racing to attain these imagined technologies, are heavily influenced by science fiction. Luke Goode’s examination of popular imaginations of a near future where AI overtakes human cognitive capacities shows that, although often sensationalised and potentially misleading, they can be helpful for interrogating the stories we tell ourselves about the role of AI in our future. For instance, scholars have discussed how films like The Terminator have perpetuated dystopian narratives and influenced national positions on autonomous weapons, and how feminised virtual personal assistants reproduce harmful stereotypes of women as subordinate helpers.

Writings from the margins about non-human intelligences not only engage with a wider scope of inquiry, but also compel deeper reflection on the implications of the very quest to define, organise, and impose “intelligence” as a force shaping the future of society. Eleanor Drage has explored how women of colour have expanded speculative writing by incorporating socio-cultural and spiritual knowledge into technology, and by recalling past violences their ancestors survived to offer visions of hopeful and ecologically sound futures. One of the works discussed is the Kenyan short film Pumzi, directed by Wanuri Kahiu, which explores water scarcity in a futuristic Africa, centring Black women’s prospects for fostering better futures for humanity.

Similarly, Octavia Butler’s Patternmaster, which depicts a human civilisation divided between a ruling class with exceptional cognitive capabilities and those outside its network whose labour it exploits, serves as a critical lens for interrogating how “superior abilities” are used to coerce and establish social hierarchies of power. Her Parable series, on the other hand, can help us reconsider empathy and community building as important capabilities for humanity’s future. Beyond fictional instruments that can help us imagine the future differently, there are speculative theories such as Afropresentism, conceptualised by Neema Githere as drawing on ancestral knowledge and all technologies at one’s disposal. It suggests that those in the periphery may have the privilege to experiment and [re]imagine how to bridge memories and dreams by recognising that the present is the future in motion.

The future of AI in Africa must grapple with the historical conditions, knowledge politics and values of its people. How Africans engage with AI today, whether by embracing, redefining, resisting or reimagining it, is a co-constitutive process that will shape both the role of future technologies on the continent and the role Africans will play in that future.

Our current visions of the future of AI are like seeing a beam of light through a keyhole and imagining that the light within that locked room must outshine the sun outside.

The key to a brighter future of AI in Africa must be to include diverse forms of knowledge that unlock multiple possibilities; to establish ethical technology development frameworks that set precedents for just labour and knowledge relations; and to ensure that these aspirations do not compromise our existing eco-social relations.


--------------------

This article is part of the Africa and The Big Debates on AI analysis series, produced as an output of the African Observatory on Responsible AI, a flagship project of the Global Center on AI Governance, funded by Canada’s International Development Research Centre and the UK government’s Foreign, Commonwealth & Development Office.
