By Javier Surasky
What do an
international megaconference, a university classroom, a corporate board
meeting, and a gathering of friends have in common? Regardless of the reason
for the meeting, it is highly likely that at some point, someone will talk
about Artificial Intelligence (AI).
Widespread
across every corner of the world and immensely powerful as a tool with no
purpose of its own, AI has naturally become a central element of international politics.
Today, we
see algorithms crossing borders faster than diplomacy, influencing economic and
security decisions, and forcing states to rethink their place in a world where
the variables defining power now include the ability to manage and produce data
and computing capacity, alongside traditional forms of influence. We are
witnesses to, and participants in, a race for control of knowledge and
innovation, where every step ahead matters.
What began
as a dream for a few has evolved into a competition among technology companies,
and today it has also become a race among states. The United States, China, and the European Union stand as the leading contenders, backed by national strategies and billion-dollar budgets. This headlong
race, with little concern for those left behind, recalls the 20th-century space
race, but with deeper implications: this time, the state is not the main protagonist,
and whoever reaches the finish line first will set the rules for everyone else.
In 2023, in
recognition of the impact of large language models like ChatGPT, the Bletchley
Declaration, signed by more than 25 countries, marked the first attempt at
a global dialogue on AI governance. It was an explicit recognition that this
technology, its use, and its control had ceased to be an internal or purely
corporate matter: AI had become a critical issue of international security and
active peacebuilding.
With
centuries-old traditions showing cracks and giving way, AI reinforces the idea
that, in the 21st century, power depends less on territorial control than on
control of information. It is hard to know whether today the struggle of
developing countries to establish a New
International Economic Order (NIEO) would attract more attention than
their past efforts to create a New
World Information and Communication Order (NWICO), now almost
forgotten.
Countries
that concentrate digital infrastructure, data centers, and AI companies enjoy
advantages comparable to those once granted by oil, and before that, by steel.
The main difference is that, in those previous cases, the state was the central
actor. Today, that role belongs to a handful of private companies that hold
enormous power. Moreover, neither oil nor steel ever entered our homes the way
AI has, from the first Roomba to the systems that now shape our digital lives.
In our
context, algorithms function as instruments of both soft and hard power: they
shape perceptions, influence elections, serve as weapons of war, and create a
divide that will determine the world order for centuries to come: the digital divide, a new version of the gap that, more than a century ago, separated industrialized from non-industrialized countries. In other words, those left
behind today will be the “poor countries” of tomorrow. A new form of
underdevelopment is emerging before our eyes.
The very
concept of sovereignty, the cornerstone of international relations and
international law, is being transformed. It is increasingly difficult to say where
things happen. Locating certain technologies is almost an exercise in sleight
of hand, where the trick is visible to everyone.
China, for
example, has tied its technological development to the notion of digital
sovereignty, while the United States integrates it into its vision of
global leadership in innovation. The European Union,
for its part, has sought, with limited success so far, to play the role of a
normative arbiter. The least developed countries occupy the lowest levels
of the production chain, labeling data in low-paid, remote jobs. Meanwhile, a group of states caught between the frontrunners and those seemingly resigned to their place is seeking strategies to advance within the middle tier, increasingly distant from both the top three and the large mass lagging far behind.
The result?
A new map of the international order that resembles the human nervous system more than the traditional balance of power: the global order increasingly operates as an information-processing structure, receiving external stimuli and generating responses from pre-processed data in a “political brain” of decision-making centers where AI has taken root. Those responses travel through a digital spinal cord, the Internet, out to the nerves that put decisions into action, where AI reappears in the form of autonomous weapons, digital diplomacy, or investment mechanisms, among others.
The ability
of states to make binding decisions without direct external interference, once
central to the notion of sovereignty, has shifted to algorithmic systems that
assess risks, allocate resources, and determine priorities. Sovereignty is now,
increasingly, the freedom to choose which algorithm to use. The new
counterparts of politicians are the “thinking machines” imagined by Turing.
When an
algorithm determines a person’s level of dangerousness in court, decides access
to social programs, or even participates in economic planning, it takes on
functions once reserved for public institutions. This delegation of authority
raises debates about international responsibility, transparency in diplomacy,
and the legitimacy of governments, especially when the systems in use are
developed by transnational corporations beyond the control and regulatory
capacity of both states and international organizations.
From this
new (im)balance of digital power arises a deeper question: Who controls the
controllers?
The
challenges of a new digital world order lie not in the technology itself but in politics, the key to its democratic control.
The global governance of AI will require new forms of diplomacy and interstate
relations in which private actors, scientists, experts, and AI users must all
play a role. The time of purely state-based diplomacy is over, and institutions
that fail to adapt to this reality will become increasingly irrelevant. We are
witnessing one of the greatest institutional experiments in human history.
For these
reasons, among others, it is absolutely essential that social scientists
engage with AI: they must understand it more deeply, look beyond the
screens of their computers, and grasp the complex processes and mathematical
and statistical formulas that underlie it. The social sciences also face the
risk of growing irrelevance if they fail to speak about this world or focus
only on its most visible surface. How do language models arrive at their answers? How
does a system generate a realistic video from a simple description? How do
networks and algorithms influence voters’ moods during elections?
All these
are questions for social scientists, and answering them requires, as a foundation, an above-average technical understanding. Even the great questions of philosophy, timeless
as they may be, are still seeking new answers. What does it mean to be human in
times of AI? What does it mean to “feel” when a language model tells you it is
disappointed for not having been able to help you?
In the
field of international relations, where I can speak from personal experience,
AI is emerging as a new actor on the global stage, with the potential to
reshape power relations, redefine the concept of sovereignty, and challenge the
ethical and political foundations of the established international order.
If the 20th
century was marked by nuclear energy, the space race, and the birth of the
Internet, this one will be defined by Artificial Intelligence and the race to
master the language that gives it form, legitimacy, and, if possible, limits. The
key question for the world today is not who will develop the next, more precise and powerful algorithm, but how to ensure that these systems align with
values shared internationally.
The answer
we give will determine whether AI becomes an instrument of collective
emancipation or the invisible architecture of a new form of global domination
by the few over the many.
