The vultures have not seen you but the ant has
- isiZulu Proverb

Introduction: The Vulture and the Ant

In African tradition, wisdom is often transmitted through oral storytelling. Observations from the environment, and from the animals and birds that share our fragile ecosystems, provide relatable proverbs for describing the human condition. ‘The vultures have not seen you but the ant has’ carries multiple meanings. First, whilst both eat the same food, the ant is humble and observes more than the proud vultures circling above. Second, the proverb serves as a forewarning of terrible events to come. In this article, it is used to conceptualise the imperative of maintaining human judgment in digital environments where our reasoning is progressively perceived as inefficient and unreliable compared with that of artificial intelligence. The article proceeds philosophically, then examines a use case, and concludes by exploring a potential existential threat.

The imperative raises two questions: (i) what importance do we give to our cognition in a future where machines are ostensibly ‘better’ at all cognitive tasks than we are; and (ii) why, in a report on Artificial Intelligence in the Military Domain, did the Secretary-General of the UN assert that “machines that have the power and discretion to take human lives are politically unacceptable and morally repugnant”? All things considered, if both machine and human reason towards the same objective in the same way, it seems wrong that only the machine is characterised as morally reprehensible while the human remains free to make these decisions.

To the observer interrogating this quandary and the potential dangers it forewarns, one thing seems certain: it would be wise to approach it with the humility of the ant. More precisely, to consider what the ant notices about human judgment that the vulture overlooks.

Human Automatons, Autonomous Machines

The first observation points towards distinguishing between ‘autonomy’ and ‘automation’. Philosophically, a simple description of human autonomy is the experience of having desires or impulses that compel us to do something we know we ought not to, and relying on our own judgment or reasoning to refrain from acting. Conversely, machine autonomy refers to systems acting in a particular environment with limited or no human intervention – here the more appropriate term is ‘automation’. A human with limited autonomy may, similarly, assume the role of an automaton (i.e. someone functioning in a mechanistic and predictable way).

In decision-making, the nuances between human autonomy and machine autonomy matter. For the human, autonomy refers to discerning whether to act in accordance with a particular moral standard. For the machine, it merely means operating free from the clutches of its designers. Agentic artificial intelligences may have complicated this distinction because their processes mimic human deliberative functions – in some instances so convincingly that we even question their sentience. However, human desire and consciousness (human autonomy) differ fundamentally from the rapid prediction and calculation made possible by exponential compute power (machine autonomy). Conflating the two ignores that the former entails intentionally choosing to act in accordance with societal norms, whereas the latter pertains solely to operating consistently with those norms. Just as a ticking clock does not pause to consider why it ticks, an artificial intelligence’s algorithms do not pause to consider why they analyse the datasets presented to them. And, for that matter, neither would the human automaton.

Autonomy And Human Judgment

With this framing, one can discern why continued human judgment remains critical. The concept of free will is philosophically contested, but if it exists it is inherent in our capacity to act contrary to our instincts or desires – what can be described as second-order volitions, the thoughts behind our immediate thoughts. Though not overtly discernible, this capacity grants us intellectual independence, or autonomy, and constitutes a moral freedom (i.e. we remain free only as long as our minds, too, are kept free). In Lectures on Kant’s Political Philosophy, Hannah Arendt stressed that in such a state imaginative capacity and contemplative reflection are essential for impartial judgment.

Thinking, willing, and judging constitute the human cognitions necessary for autonomy and for bearing genuine moral weight in relation to moral standards. In The Life of the Mind, Arendt described this as the ‘two-in-one’, a duality representing the internal dialogue of self with self. As a prerequisite for moral judgment, the mind must therefore remove itself from the appearing world and from involvement in immediate interests to consider the particularities of a situation. In this deliberative state, like the ant, one assumes the role of the observer, not the actor. As artificial intelligences become increasingly anthropomorphised, one may forget that they remain incapable of this internal dialogue – observation is not the same as analysis. Even when functioning consistently with our ethical standards, the absence of second-order volitions means they cannot reason in accordance with these standards and, as a result, they bear no moral responsibility for their actions.

By relying on confusingly similar terms that describe machines as ‘intelligent’, ‘thinking’, or ‘autonomous’, we are drawn into believing that philosophically distinct concepts amount to the same thing. However, a life-or-death decision left to a machine can be called morally repugnant because the machine does not, in fact, act out of concern for morality. Even when its behaviour is consistent with ethical norms, it processes them in blind obedience, and history offers many horrific examples of the consequences of supplanting independent judgment with such obedience. For this reason, philosophically at least, human judgment, even if inefficient, remains a crucial capacity for safeguarding humanity’s moral integrity. For this reason too, we should avoid functioning like automatons at all costs.

Brain Matter Still Matters

How might this philosophical confusion translate into a dangerous practical reality? Where might it lead future societies? A possible answer comes from a recent MIT Media Lab study, which tasked three groups with writing a series of essays. One group used only brain power, another used search engines, and the third relied entirely on LLMs. The essays were then examined by human teachers and by artificial intelligences. The LLM group focused heavily on output, and their essays were statistically the most homogeneous, stemming from a narrow range of ideas and a tendency to copy and paste content rather than incorporate original conceptions. The group also failed to engage deeply or critically with their topics.

Without critical engagement, the accumulation of cognitive debt becomes likely: a cognitive state in which one becomes susceptible to internalising bias and superficial understandings of the world. Participants using LLMs showed significantly diminished mental effort (up to a 55% reduction) and were the most compromised in their ability to think independently, with possible long-term costs including “diminished critical inquiry, increased vulnerability to manipulation, [and] decreased creativity.” Conversely, the brain-only group had the strongest activations in the areas of the brain responsible for semantic integration, creative ideation, and executive self-monitoring. They also experienced higher satisfaction and focused more on interrogating the ‘what’ and ‘why’ behind their writing, whereas the assisted groups primarily focused on the ‘how’.

The findings suggest that digital support tools are restructuring the underlying cognitive architecture of our brains – potentially even affecting our neurocognitive development. The most significant and consistent divergences between the LLM group and the brain-only group appeared in the areas of the brain responsible for episodic memory consolidation, semantic encoding, and deep memory encoding. Relating this back to their essays, it meant participants could not articulate meaning coherently, attached little significance to their writing, were psychologically dissociated from it, and exercised limited cognitive agency.

Although restricted to educational essay writing and not necessarily generalisable across all artificial intelligence domains, the study also suggests that habitual usage over time might compromise long-term decision-making and behavioural competence. It follows that as digital tools get ‘smarter’ and more ‘useful’, autonomous humans become more susceptible to becoming human automatons. This brings the philosophical confusions, as well as the potential political and existential ramifications relating to human judgment, back into the discussion. It is not difficult to imagine a future where an artificial general intelligence, because of its sheer usefulness in every conceivable cognitive task, causes human judgment to collapse altogether for any user who relies on it. If scaled exponentially over time (most likely by the United States, China, or a corporation domiciled in one of the two), this may produce localised vulnerabilities – for instance, subliminal psychological indoctrination – in cognitive-debt-accumulating societies whose memories are atrophying and whose existence has shifted to complete dependence on foreign tools (see Rob Nixon’s Slow Violence and the Environmentalism of the Poor for an analogous illustration of how this potentially unfolds over time). Steve Biko’s ominous warning comes to the fore: “the most potent weapon in the hands of the oppressor is the mind of the oppressed.”

Conclusion: Automatons As Existential Risks

Both C. S. Lewis and Norbert Wiener postulated a future moment in history where human mastery over nature (including human nature) reaches its peak. Beyond this point, each additional technological advancement is met with a corresponding enslavement to nature, as extensive extractive practices leave ever less that remains to be extracted. At this high point, humans unwittingly confer less power on future generations even though those generations may have incredible machines at their disposal, because the parameters within which they may operate were hard-coded into existence long before they arrived on earth. Wiener likens such technological conquests to the Mad Hatter’s tea party: when the treats at one seat are exhausted, we naturally move to the next; and when Alice asks what happens when they come full circle and there are no more chairs, the March Hare quickly changes the subject.

Like Alice, we should ask the technologist what happens to humans in a future where a barrage of powerful recommender algorithms and self-improving artificial intelligences swiftly meets every cognitive demand we could ever experience in a day. It seems possible that we could open a Pandora’s box in which human lives are externally governed by algorithms and internally characterised by sheer thoughtlessness. On this, history reminds us that disenfranchised populations – those subject to extreme social atomisation and individualisation – begin to develop a sense that their lives are expendable and do not matter. In such societies, the conditions for the mass movements that precede authoritarian or totalitarian rule become possible.

In conclusion, yes, a future artificial superintelligence may place human existence on the brink of extinction if it fails to align with human values. But this fear may be less immediate than the need to guard against the more plausible kinds of atrocities humans have previously perpetrated. History shows that simple tools in the hands of many can be vastly more destructive than the weapons of mass destruction we fear. In this comparison, artificial superintelligence may end up being more like a present-day atomic bomb – a single weapon capable of mass destruction that lies dormant – while the more tangible danger comes from many human automatons, each individually capable of causing harms that scale. As the saying goes, death by a thousand cuts.

All that remains to be written is this: when you next rely on an artificial intelligence for cognitive support, remember the ant and its observations.

