
Responsible AI Beyond the Global North

Artificial Intelligence (AI) has emerged as a transformative force, impacting every aspect of our lives. While the leading edge of AI development is currently concentrated in the Global North, its ethical implications resonate worldwide. This blog post, inspired by the insights gained from the Global Center on AI Governance course “Artificial Intelligence, Ethics and Policy in Africa”, delves into the critical need to decolonize AI and embrace a more inclusive way forward in AI development.
My wake-up call that AI was getting really good, and that it might actually lead somewhere, perhaps even towards approximating human intelligence, was the December 2016 New York Times Magazine article “The Great A.I. Awakening”. It explained the breakthrough in the quality of Google Translate brought about by the combination of neural networks and deep learning. However, this initial realization quickly turned to dismay when I learned of an incident from the year before, in which Google Photos misidentified two Black Americans as gorillas. This time the wake-up call was that AI could actually be unintelligent, and as I learned more about the lack of diversity in AI training data sets, it reinforced the old tech adage “garbage in, garbage out”. I began to pay more attention to news about AI, particularly its flaws, as new incidents emerged reflecting the biases embedded in our society.
Working in international development, and specifically in digital health, I recognized a looming issue: AI would reignite the “digital divide”, a term popularized around the emergence of web browsers and the dot-com boom of the mid-to-late 1990s, referring (as I have understood it) to the growing gap between the Global North and Global South as the Global North advances technologically more rapidly. It motivated me to shift gears and explore how my knowledge and experience could help the Global Majority leverage AI to its benefit while addressing the harms and harmful approaches in AI.
Why I Took the Course
I felt a crucial element was missing in the prevailing discourse and training surrounding AI ethics, policy and governance: a serious discussion of ethics that transcended a purely Western perspective. The question that lingered was, "How can we talk about responsible, trustworthy, ethical AI without examining what it means to be ethical, what it means to be intelligent, and whose ethics we are even considering?" With AI development and governance largely led by the Global North, it was clear that Western ethical frameworks predominated, often overlooking crucial global perspectives. There is an implicit assumption that we all agree on what it means to be ethical, but do we? Those of us who did not grow up in African cultures but have worked in information and communication technology for development (ICT4D) know of Ubuntu through the Linux operating system, so I was aware that Ubuntu is an African concept and philosophy meaning “I am because we are”: to be human is to identify and be in solidarity with others, emphasizing community over individualism. But I had only a superficial knowledge of ethical thought in general and wanted a more solid foundation for my pursuit of responsible AI policy and governance.
Widening the Lens Beyond the US
The AI, Ethics and Policy in Africa course was instrumental in expanding my understanding of responsible AI beyond the confines of the Global North. Through the lectures, assigned reading by African researchers and practitioners, and engaging discussions with classmates, many of whom were African, I gained invaluable insights into African ethical frameworks and contrasted them with Western concepts.
The core point was the pervasive embedding of colonial thought within today's world order and how it permeates and seeps into AI. The course, and the paper written by the course convenor Dr. Rachel Adams, make the case that despite all the ethical AI frameworks, ethical AI will not be achieved without a fundamental shift away from colonial worldviews. The end of European colonial occupation and the independence of Global South countries did not mean the end of colonialism. Think about our economic systems, legal systems, and educational systems: what is taught in universities, what kinds of expertise are respected and funded, what languages are used in official business. Where do these systems come from? From the former colonial powers, the Global North, who dominate the global economy. Newly independent states had to conform to these systems to develop and join the global economy in order to survive. The exorbitant debt Haiti owed to France, its former colonial power, after gaining independence is perhaps the most egregious example, and it remains unresolved today.
Western conceptions continue to dominate the discourse on AI policy and ethics, shaping how knowledge from the Global South is perceived, included, or marginalized. Western ethical frameworks and standards—often developed in the US or Europe—are treated as universally understood and representative, sidelining local philosophies and values from Africa and other regions. This dominance leads to policies, guidelines, and even technical practices that may not account for the lived realities or priorities of Global South communities. It often casts these regions as beneficiaries or test sites rather than equal partners and policymakers. The reliance on Western ideals, such as individualism and particular notions of intelligence, means that alternative knowledge systems and ethical paradigms—like Ubuntu or community-based conceptions of well-being—are either overlooked or forced to adapt to outside standards, ultimately reinforcing old hierarchies under the guise of technological progress and ethical governance.
This persistent colonial presence, implanted in our foundational ways of thinking and operating in the world, requires a fundamental shift in mindset to truly address the root cause of unethical, untrustworthy, irresponsible AI. It became clear that we are only scratching the surface by seeking to diversify datasets. Race, gender, and other social classifications have themselves been defined and used in fraught ways. They are neither neutral nor fixed; they were shaped by historical processes and power dynamics.
I also gained a deeper understanding of tensions in the AI data “supply chain” that underscore a new reality: modern colonial exploitation in the form of data extraction, content moderation, and data labeling. Cases include Worldcoin’s gathering of biometric data (iris scans) in Ghana, Kenya, Sudan, Zimbabwe and other countries while failing to deliver on promised payments and being less than transparent about how the scans would be used, and the experiences of Kenyan data workers labeling heinous content to train AI while being paid low wages and offered little mental health support. The concept of digital sovereignty has gained prominence, as evidenced by the late-stage negotiations on the EU AI Act, in which France and Germany sought to reduce restrictions that would hobble their own emerging AI competitors to the US, and more recently by Microsoft's decision to shut down the email account of the chief prosecutor of the International Criminal Court in The Hague, which raised concerns about US platforms’ power to act as a digital “kill switch” for cloud and other digital services. The extraction of critical minerals in Africa raises its own ethical and environmental concerns: coltan is vital for electronics like smartphones and laptops, and cobalt for the batteries and hardware behind the data centers that drive AI. The significant energy and fresh water consumption, and the broader environmental impact, of the infrastructure required for AI is particularly concerning in Africa, where the basic power and water needs of households and hospitals remain unmet, much less those of data centers.
Technology is Not Neutral
Another pivotal takeaway from the course was the understanding that technology, contrary to popular belief, is not neutral. I had previously read about Robert Moses, and the course provided additional reading on his impact on New York infrastructure (roads, bridges, railroads, and subways), which demonstrated how even physical infrastructure, a form of technology, can appear neutral but is not: it connects some while dividing others, and creates access for some while blocking it for others. It reminded me of a quote I learned recently from Paul Virilio (1932–2018), the French philosopher known for his critique of contemporary technology: “every technology involves its accompanying accident: with the invention of the ship, you get the shipwreck; the plane brings on plane crashes; the automobile, car accidents, and so on. … the technocratic vision is thus one-sided and flawed in that it postulates a perfect technological system”.
Conclusion
The AI, Ethics and Policy in Africa course provided the foundational understanding of ethics with respect to AI that I had been seeking, particularly by contrasting Western and African ethical principles and examining how the legacy of colonialism remains deeply embedded in our contemporary world. The course underscored the critical need to "decolonize" AI, recognizing that ethical guidelines are a starting point, but that a deeper re-examination, and potential upheaval, of the status quo and of how we envision our world is necessary to achieve ethical AI. The work of AI policy, ethics, and governance needs to make space for multiple ways of knowing and being. The current state of the world, and the AI race in particular, make this a difficult endeavor, but as Martin Luther King Jr. once said, “the arc of the moral universe is long, but it bends toward justice”. Through the course and other efforts, I hope we are shortening that arc and bending it closer towards justice.
Author: Alice Liu
NEXT READ: Listening to the Body: Responsible AI and Public Health in Africa
Articles in the “Ideas from the Palaver Tree” collection were co-edited by Selamawit Engida Abdella and Dr. Fola Adeleke