Dr Leah Junck, Senior Researcher: Global Center on AI Governance

What does it mean to build machines into our lives that act like us, when acting “like us” has always been riddled with questions of care and power? As more human-like versions of AI are expected to step into roles of emotional, cognitive and bureaucratic human labour, the quiet politics of who matters risk becoming background noise in the algorithm’s data chase, treated as a mere cost of functionality.

This piece critically examines the rise of “agentic AI” - AI systems that mimic human initiative and decision-making - and argues that treating such systems as human-like partners risks reinforcing harmful assumptions about humanness, agency, and responsibility, especially in contexts shaped by historical inequality and marginalisation.

From Tools to Partners

Frontier AI systems are increasingly claimed to align smoothly with human needs and desires, steadily improving in their ability to replicate human thinking and action. “Agentic AI” has become a key stand-in for the idea of systems that take human-like initiative, whether in the form of trading agents navigating market volatility, robots managing warehouses, autonomous vehicles adapting to road chaos, or systems detecting and pre-empting military threats. These technologies are envisioned not just as tools, but as collaborators - ones that can conveniently make decisions and act on our behalf.

Technologies designed to imitate humanness are also envisioned to become central in highly intimate domains: as personal assistants that fine-tune our calendars, healthcare agents that coordinate patient care - and even as tools that manage emotional labour, filling in for human therapists and simulating dead loved ones. Unlike “traditional AI”, which remains largely reactive and leans more heavily on human interpretation and activation, agentic AI technologies appear to be guided by purpose. Their perceived capacity to plan, learn, and adapt reinforces the idea that these are not just machines but, for the most part, autonomous actors that require only minimal human oversight.

As they creep into domains previously reserved for humans, the use of such tools is legitimised both by their functionality and supposed ability to creatively improvise by imitating human patterns of imagination. Drawing on vast data reservoirs, blending and recontextualising different sources of information, they are imagined to understand us - support us better, enhance our effectiveness, and “free us up” to focus on what we deem to be the more important tasks in life.

Forms of AI considered agentic, then, also fundamentally tug at questions of how we think about human-technology relationships and where epistemological priorities should lie for both humans and machines. It becomes especially important to look at this from African vantage points, given unequal distributions of access, limited insights into how people relate to technologies on the continent, and the potentially amplified costs where AI, fragile infrastructures, and histories of marginalisation meet.

The Trouble with “Human” as a Benchmark

Beneath the idea of agentic AI “filling in” for humans while apparently aligning itself with actual human wants and needs lies a problematic premise: that the category “human” is stable and universal enough to model and replicate. If agentic AI is designed to imitate, collaborate with, or even substitute for humans, it presumes that humanness is a coherent and fixed benchmark. And yet, the meanings imbued in the category of humanness are neither neutral nor universally settled. They very much hinge on the judgement of other humans and are fundamentally informed by contextual developments and power dynamics.

Anthropologist Francis Nyamnjoh addresses these complexities aptly when explaining that human existence, rather than being a firm state, is quintessentially informed by our conviviality. This means that we are not just inherently dependent on one another; our well-being is also contingent on developing a capacity to live different lives among one another in a way that is improvisational, open to ambiguity, and of mutual becoming. This process of interconnecting, as Nyamnjoh insists, is never tidy or quite complete. Instead, it requires a constant challenging of rigid identities and boundaries.

Indeed, being human is a matter of degree. There is a lot we as people share, like the existential risks we are constantly exposed to and our dependence on fellow humans and systems in living with those risks. How we are able to exist, to a profound extent, depends on how people respond to us and whether they recognise our needs as worthy of being attended to. Or, in other words, on the level of humanity ascribed to us in the context we find ourselves in.

Linked to this level of humanness is the extent of agency deemed appropriate to assign to a person. Refugees, for instance, hold a grey-zone status in society - not quite considered to belong but not (yet) expelled. Being in bureaucratic limbo also means being limited in acting on one’s needs - and being more dependent on those needs being legitimated through surrounding power structures. People who commit crimes and whose moral status is downgraded through institutional sanctions illustrate how societal decisions are undergirded by certain understandings of what makes one worthy of acknowledgement, and how the fragility of the status “fully human” serves as an order-maintaining measure. There are even explicit degradation rituals, documented across cultures and around the globe, that mark the lowering of a person’s status by shaming or expelling them from the group (e.g., public uniform-stripping and excommunication). Slavery, both historical and contemporary, operates precisely on the denial of human status, involving practices of renaming, branding, and isolation.

Often, however, gradations of humanness seem more subtle, as they have become rather steadily integrated into rationales of normalcy. Violence against sex workers, for instance, might be judged by some, but does not typically elicit much protest or consequence. This is because the internalised assumption that some people’s humanity weighs less than that of others facilitates a progressive loss of visibility and access to rights-reaffirming measures. The more subtle and normalised these gradations of human treatment become, the more effectively they reinforce prevailing social logics about whose needs count - and whose can be overlooked without creating any consequential social friction.

Humanness, then, is informed by hegemonic assumptions and ideals, maintained through steady practice and ways of paying attention to, or unknowing, certain experiences. These assumptions are also deeply political, as becomes evident when women’s rights are habitually diminished and undermined by institutional knowledge systems that fail to adequately reflect their lived experiences. Even when not explicitly considered “lesser humans”, certain groups such as children and people with mental disabilities are often thought of as having less capacity as agents and are thus, by legal default, largely excluded from the social practices in place that proactively define justice and worth.

All of this matters when we consider AI, especially in its more autonomous forms. Judgments about who gets to act and make important decisions are firmly and conceptually linked to notions of humanness and moral agency. In human systems, action is associated with intention - and intention with accountability. People are held liable because they are believed to understand their actions and to be able to choose otherwise when weighing different consequences. Ambitions to mechanise human processes through AI complicate this logic. When AI produces outputs informed by inappropriate judgements - which usually only becomes apparent when related unjust outcomes are difficult to ignore - this tends to be subsumed under the idea of data bias or algorithmic error: problems typically deemed addressable with minimal human intervention and through technical adjustments. This technification of a profoundly social issue makes it more likely for judgements to be quietly entrenched in AI systems.

The Cost of “Close Enough”

There is a dangerous conflation of epistemic concepts like humanness and choice-making (and the long trail of assumptions they string along) when AI technologies are cast as “close enough” to maintain - or even improve - processes that are inherently social. Machines are often humanised, described as sensing or empathetic, but not held to the same standards of explanation as humans. AI is not asked to trace a thread of rationale for those impacted by its outputs and related decisions. Nor is it expected to reflect nuance or our interdependence, something human-driven systems are compelled to at least feign attentiveness to. As a result, there are few pathways for recognising the complex entanglements and mobilities of being human, nor any accommodation for the fact that our existence depends on the continual reaffirmation of our value within the different social constellations we inhabit. After all, AI is designed to serve and not to owe, with harm compensation remaining a topic that policy-makers approach only cautiously.

Treating machines as able to respond reliably and adequately to human needs disregards the fact that this responsiveness - the capacity to recognise and accept obligations of care - is deeply bound up in hierarchical systems. When AI is treated as capable of neatly filling in for us with only tangential human input, thereby “freeing us up” to focus on what we deem most important, without acknowledging that humanness is constructed through collective, power-laden social processes, we risk further detachment from the difficult everyday labour of ethical relationality.

What we define as “important” against the backdrop of human-machine interactions should begin with a reassessment of how we understand justice and the treatment of others, especially as systems are applied to under-researched and policy-guarded settings. This becomes all the more urgent when AI actions are framed as “close enough” to human intention - a sentiment that makes it all too easy to disregard moral and legal responsibilities and to overlook the deeply embedded layers of dependency and power that inform both human and technological decision-making.

