By Javier Surasky
Introduction
We continue
to explore myths from different places and cultures around the world. We have
already done so with classical Greek myths in a previous post.
The method
we apply has an exploratory and critical scope, not an explanatory or
predictive one, and follows a hermeneutic–comparative trajectory. We do not
treat myths as symbolic anticipations of artificial intelligence (AI), nor as
historical explanations of contemporary phenomena, but rather as dense
conceptual structures, thereby avoiding any suggestion of historical continuity
or literal equivalence. Instead, we argue for the richness of a structural
analogy between myth and the problems and challenges posed by AI today, since
the analytical value of myth lies not in providing answers but in making
visible the assumptions that contemporary technical discourse tends to naturalize.
An
additional caution is necessary to avoid falling into “interpretive
extractivism”: these narratives are culturally and historically situated and
possess internal meanings that far exceed the use we make of them here. For
this reason, we do not aim to exhaust their significance nor to speak “on
behalf of” the communities that produce and transmit them. More modestly, we approach these narratives as critical devices that put pressure on present-day
categories without detaching them from their origins or reducing them to simple
instrumental metaphors.
This
exercise cannot eliminate the asymmetry between the spaces in which these myths are produced and those in which they now circulate as analytical tools, but it seeks to make that asymmetry visible as a characteristic of contemporary systems
that appropriate knowledge without taking responsibility for the conditions of
its production.
On this
occasion, we turn our attention to African Indigenous peoples. As is always the
case in exercises of this kind, it is impossible to capture the full richness
and diversity of the continent, so we maintain our strategy of selecting three
myths from different peoples and geographical subregions of Africa.
Nommo’s Sacrifice: Ex Post Mitigation
In Dogon
culture, characteristic of the Sahel subregion, disorder is not an accident but
the result of an intervention that alters the very process through which the
world is constituted.
The myth of
Ogo recounts that Amma creates the “world egg,” from which twins—Ogo and
Nommo—are meant to be born. Ogo, however, decides to leave the egg
prematurely, seeking to appropriate the universe in formation. He tears off a
piece of his placenta, from which his sister was to be born, steals seeds
created by Amma, and uses the detached fragment of placenta as a vessel, which
later becomes the Earth. Ogo then “penetrates” the Earth in search of his lost
sister, whom he will never find—an act interpreted by the Dogon as incestuous:
the penetration of his own mother’s placenta.
All of this
disrupts the order of creation, which is only partially restored when Nommo
descends to repair the damage caused by Ogo, whom Amma ultimately transforms
into a pale fox (Bonnefoy [ed.], 1993:154).
This
narrative sequence allows us to think about the problem of mitigation after the
deployment of AI systems, once they are already operating in the world,
generating externalities and accumulating dependencies. The myth of Ogo
provides a framework for conceptualizing failure as a process of escalation: a
design or deployment decision triggers a chain of effects that, once
consolidated, become difficult to reverse.
At this
point, the mythical analogy directly engages with a broad literature on path
dependence and increasing returns in political and technological systems. As
Pierson shows, early decisions tend to produce cumulative effects that
reinforce the chosen path and progressively raise the costs of institutional
change, even when initial outcomes prove suboptimal (Pierson, 2000:251–259). In
a convergent vein, Arthur demonstrates how small contingent events can generate
processes of technological lock-in, in which later corrections cease to be
neutral options and become costly reconfigurations of the system as a whole
(Arthur, 1989:116–121). In the case of AI, such lock-in is rarely confined to
“the model” itself; it tends to be embedded in ecosystems comprising
infrastructure, data, contracts, standards, and organizational practices that
raise the political and institutional costs of reversing early decisions.
We argue
that this myth also introduces the idea of repair as cost. Restoring order
requires a specific and comprehensive mechanism—the sacrifice of Nommo—that
goes beyond piecemeal corrections. In the field of AI, such “fixes” often
involve withdrawing systems, auditing decisions, compensating harms, retraining
models, redesigning human processes, and reconfiguring organizational
incentives. Ex post repair does not restore a prior state; it reorganizes the
world under the imprint of damage already done.
Finally,
the myth points to responsibility as a sustained practice: creation continues,
but disorder remains as a memory that conditions the system’s subsequent
development. In contemporary terms, this resonates with the difficulty of
institutional learning from past failures in complex systems. McGregor observes
that, despite the growing deployment of intelligent systems in safety-critical
domains, the international community lacks shared, binding formal mechanisms to
learn from past failures—hence his emphasis on making incidents visible as a
condition for preventing their recurrence (McGregor, 2020:1–6).
The Sky–Earth Connection and AI’s Overexploitation of Natural Resources
Among the
Lozi, the myth of Kamonu narrates the encounter and eventual rupture between creative power and
human initiative. In the beginning, the creator god lives alongside humans,
among them Kamonu, who stands out for his intelligence. When the creator works
iron and forges tools, Kamonu imitates his actions. Imitation breeds tension,
which turns into conflict when Kamonu makes a spear and kills an antelope. Upon learning of this, the
creator punishes him for breaking the order of coexistence among beings and
expels him from his lands. Kamonu begs to return and is readmitted on the
condition that he devote himself solely to cultivation, yet he again kills
animals when they enter his plantation.
From that
point onward, a series of disastrous events unfolds: objects break,
domesticated animals die, and finally Kamonu loses his child. He goes to the
creator to complain and finds, in the god’s house, the broken objects, the dead
animals, and his own child, all in the possession of the god, who refuses to
return what was lost. Kamonu decides to pursue him, and when the creator ascends to
the sky, he attempts to reach him by climbing a thread woven by a spider. When
this fails, he piles up trees to reach the sky, but the structure collapses,
taking with it the aspiration to reach the god’s world through accumulation
(Parrinder, 1986:40).
This
narrative invites reflection on the extractive use of natural resources and
data that fuels the AI industry. Kamonu embodies a characteristic impulse of
technical modernity: imitation as a desire for equivalence—not to use a tool,
but to appropriate the creator’s role and alter its purpose. The myth thus
shifts resource capture from an instrumental register to a relational one:
killing “brothers” introduces a rupture that produces irreparable losses and
warns against the illusion that the extraction of matter and data can be
sustained without systemic costs.
This
reading converges with contemporary diagnoses of the extractive nature of data
capitalism. Couldry and Mejías describe how the systematic appropriation of
information and human experience constitutes a new form of colonialism, based
not on classical territorial occupation but on massive data capture and the
subordination of entire populations to opaque digital infrastructures (Couldry
& Mejías, 2019:1–27). In a convergent institutional register, UNCTAD’s Digital
Economy Report 2024 describes the persistence of digital and data divides
and highlights dynamics of concentration within the digital ecosystem; in
particular, it notes how the growing centrality of platforms and digital
infrastructure services is associated with forms of concentration that cut
across the data value chain (UNCTAD, 2024:3–5).
The
creator’s withdrawal and the failure of Kamonu’s ascent thus point, in
contemporary terms, to disputes over access to the infrastructures that enable
the benefits of AI. Institutionally, this is reflected in the limits of
algorithmic impact assessment frameworks, which may be weakened when they rely
on technical and organizational knowledge to which, in practice, only system
developers themselves have full access (Selbst, 2021:117–128).
On Chameleons, Lizards, AI Priorities, and Latency
The Zulu
inhabit southern Africa. One of their central myths recounts how Unkulunkulu
sends a simple and decisive message to humanity through a chameleon: “No man
shall die.” Noticing that the chameleon moves slowly and becomes distracted
along the way, Unkulunkulu sends a second messenger—a lizard—with the opposite
message: “Men shall die.”
The lizard
arrives first, and when the chameleon finally reaches the people, they reply
that they have already heard the lizard and that “Through the word of the
lizard, men will die” (Callaway, 1870:6).
This myth
helps illuminate a key feature of contemporary AI: the primacy of time. In
rapidly scaling systems, whoever arrives first—whether to the market or to the
state—sets trajectories of use and normalization before safeguards, audits, or
regulations are in place. The myth does not claim that delay causes death in a
literal sense, but it does suggest that the order of the world can be sealed by
a temporal sequence: once the first word is spoken and accepted, the second no
longer carries the same weight.
This
problem has been widely addressed in the literature on technological governance
as a mismatch between innovation and control. Collingridge formulated this
dilemma early on, noting that in the initial stages of a technology, change is
easy but its effects are not yet visible, whereas once those effects become
evident, the technology is already deeply embedded and difficult to modify
(Collingridge, 1980:11–24). Subsequent work has taken up this intuition under
the notion of the “pacing problem,” emphasizing how the speed of technological
development tends to outstrip regulatory and normative response capacity
(Allenby, Marchant & Herkert, 2011:1–19).
Within this
framework, contemporary diagnoses of competitive framing reinforce the mythical
reading. Cave and Ó hÉigeartaigh describe the rhetoric of an “AI race for
strategic advantage” and warn that such framing incentivizes
corner-cutting in matters of safety and governance (Cave & Ó hÉigeartaigh,
2017:1–8). In the same vein, a report coordinated by Perry World House and RAND
identifies the intensification of these dynamics—particularly between the
United States and China—and warns of the risk of catastrophic accidents,
misuse, and associated violent conflicts (Perry World House & RAND,
2025:v–xii).
The
difficulties in establishing a global model of AI governance thus resemble the
chameleon: the critical question is not only which rules are designed, but also
the pace at which they advance, which remains slower than the pace of change in
digital technologies.
Conclusions
Taken
together, the three myths support a single thesis: artificial intelligence does
not operate as a neutral or self-sufficient entity, but as a conditional,
relational, and materially situated form of power. It is conditional because it
depends on infrastructures and access rules it does not fully control;
relational because it reconfigures relationships, hierarchies, and asymmetries;
and material because it rests on finite resources and deployment temporalities
that distribute costs and benefits unevenly.
Read
through this lens, the promise of objective and efficient AI appears not as an
intrinsic technical property but as a stabilizing narrative—a way to close
debate on responsibility, repair, and limits precisely at the moment when those
debates become most urgent.
Myths do
not offer normative solutions or alternative models of governance, but they do
enable the denaturalization of this premature closure of meaning, reminding us
that every constructed order—including the technological one—entails decisions,
sacrifices, and exclusions.
Thinking
about AI through these narratives is therefore neither an exercise in nostalgia
nor in forced analogy, but a way of resisting the idea that the technical
present lacks conceptual precedents and therefore imaginable alternatives.
Where dominant discourse presents the expansion of AI as inevitable and linear,
myths reintroduce questions of how, for whom, and at what cost—lifting the veil
of naturalization, objectivity, and progress behind which digital technologies
conceal high-impact global political decisions.
References
Allenby,
B., Marchant, G., & Herkert, J. (Eds.). (2011). The growing gap between
emerging technologies and legal-ethical oversight: The pacing problem.
Springer.
Arthur, W.
B. (1989). Competing technologies, increasing returns, and lock-in by
historical events. The Economic Journal, 99(394), 116-131.
Bonnefoy,
Y. (Ed.). (1993). American, African, and Old European mythologies.
University of Chicago Press.
Callaway,
H. (1870). The religious system of the Amazulu. Trübner & Co.
Cave, S.,
& Ó hÉigeartaigh, S. (2017). An AI race for strategic advantage:
Rhetoric and risks. Future of Humanity Institute, University of Oxford, 1-8.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3446708
Collingridge,
D. (1980). The social control of technology. Frances Pinter.
Couldry,
N., & Mejías, U.A. (2019). The costs of connection: How data is
colonizing human life and appropriating it for capitalism. Stanford
University Press.
McGregor,
S. (2020, November 18). When AI systems fail: Introducing the AI Incident
Database. Partnership on AI. https://partnershiponai.org/aiincidentdatabase/
Parrinder,
G. (1986). African mythology (rev. ed.). Peter Bedrick Books.
Perry World
House & RAND Corporation. (2025). The artificial general intelligence
race and international security. University of Pennsylvania / RAND. https://www.rand.org/pubs/perspectives/PEA4155-1.htm
Pierson, P. (2000).
Increasing returns, path dependence, and the study of politics. American
Political Science Review, 94(2), 251-267.
Selbst, A.
D. (2021). An institutional view of algorithmic impact assessments. Harvard
Journal of Law & Technology, 35(1), 117-190. https://jolt.law.harvard.edu/assets/articlePDFs/v35/Selbst-An-Institutional-View-of-Algorithmic-Impact-Assessments.pdf
UNCTAD.
(2024). Digital Economy Report 2024: Shaping an environmentally sustainable
and inclusive digital economy. https://unctad.org/system/files/official-document/der2024_en.pdf
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121-136. https://faculty.cc.gatech.edu/~beki/cs4001/Winner.pdf
A Spanish version (ES) will be available next Friday
