By Javier Surasky
For some time, artificial intelligence (AI) governance has been a topic of international debate. The search for a global governance model capable of combining safety with the continued development of AI’s potential remains the Gordian knot yet to be untied, especially given the implications any such choice will have for the race to dominate the AI space. The three major players have adopted different models: the United States with a market-driven approach, China with centralized state control, and the European Union seeking a balance between risks, citizens’ rights, and technological progress.
In recent months, despite being weakened by its financial situation and its inability to resolve issues such as Russia’s invasion of Ukraine or the unfolding tragedy in Gaza, the United Nations has launched two key initiatives to advance AI governance. It could almost be said that the UN is paving the way for a fourth model, one that combines elements of the others while diverging from them. The UN is clearly betting on an AI oriented toward sustainable development: one that factors in the “external” impacts of its growth, such as environmental and social dimensions, addresses the technology gap, and is shaped by experts producing evidence for policymaking.
The first initiative is the creation of the United Nations University Institute on AI (UNU-AI) in Bologna, Italy, as a new academic hub on the subject, guided by the values and principles of the Organization.
UNU-AI is set to become a permanent research institute within UNU, supported by the Government of Italy. Its full operationalization should take place before the end of the year, with a primary mission of mobilizing big data and AI to advance the SDGs, with an emphasis on capacity building in the Global South.
The second initiative is the adoption by the UN General Assembly, on 26 August, of resolution A/RES/79/325 on the Terms of Reference and Modalities for the Establishment and Functioning of the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance.
This Panel, which will serve as an advisory body to the General Assembly, will be composed of 40 experts from diverse disciplines, nominated by the Secretary-General and appointed by the General Assembly on the basis of expertise, geographical balance, and gender equity. Unfortunately, there are no references to epistemic diversity, no acknowledgment of the need to integrate traditional knowledge from Indigenous peoples, and no direct consideration of the vulnerable groups facing AI risks.
Each expert will serve in a personal capacity for three years. The Panel’s main objectives will be:
- Producing annual assessments, based on independent scientific data, of AI’s opportunities, risks, and impacts. The Panel’s design is clearly inspired by the Intergovernmental Panel on Climate Change (IPCC) and IPBES (its biodiversity counterpart), both science-policy interfaces delivering authoritative but non-prescriptive evaluations.
- Maintaining an interactive dialogue with the General Assembly twice a year, and presenting conclusions at the Global Dialogue on AI Governance (GD-AI).
The GD-AI, created under the same resolution, will be a multistakeholder forum meeting annually, alternating between Geneva and New York, to:
- Facilitate international cooperation and exchange of good practices.
- Debate ethical, social, cultural, and technical implications of AI.
- Address technological and human capacity gaps in AI.
- Promote open-source AI software, data, and models.
- Reaffirm the primacy of human oversight, transparency, accountability, and human rights in AI development.
The main limitation imposed on the GD-AI is that it shall not address the use of AI for military purposes, leaving this entire field outside its mandate. This is a political compromise necessary to avoid blocking the whole system.
The greatest risk facing the Panel, and to a lesser extent the GD-AI, lies in their funding, which will depend largely on voluntary contributions from states, the private sector, and philanthropy, raising doubts about sustainability and the Panel’s true ability to act independently.
Taken together, the two initiatives are complementary: UNU-AI will focus primarily on the production of expert knowledge, while the Panel will add a dimension of political legitimacy in the multilateral arena, creating opportunities for collaboration between them.
These initiatives signal a global effort to rapidly institutionalize AI governance within the UN system, laying the groundwork for shared norms on AI that take into account North–South inequalities and current and future impacts, while avoiding the normative fragmentation that, given the transnational nature of AI, would lead to ineffective and inefficient results.
Perhaps the most structural challenge facing UNU-AI, the Panel, and the Dialogue alike is their integration into an increasingly crowded landscape of AI forums, such as the OECD’s AI Policy Observatory, the G7’s Hiroshima AI Process, UNESCO’s initiatives, and the ITU’s AI for Good. Ensuring complementarity and avoiding duplication and competition across these processes, both inside and outside the UN framework, will be critical.