By Javier Surasky
In the
current context, where the UN 80 initiative aims to achieve modern and inclusive governance of
the UN, designing an algorithm to support the selection of the Secretary-General could bring together
technological innovation, transparency, and political sensitivity.
It is
crucial to understand that such an algorithm would not aim to decide
the selection but rather serve as a supporting and standardizing tool that
member states could take into account as part of their deliberations. Its main
advantage is that all candidates would be assessed against the same objective criteria, promoting evaluative fairness in the process.
Before
designing any algorithm, however, it is imperative to address a historical
shortcoming: the absence of a clear definition of the Secretary-General’s
mandate in the UN Charter.
Unlike a
modern job description, the Charter provides only an ambiguous and limited
outline of the role, one that the evolving practice of former
Secretaries-General has overtaken. Therefore, establishing a clear definition
of the Secretary-General's role and responsibilities is a necessary preliminary
step. It is worth clarifying that this step does not require a formal reform of
the Charter. The general nature of the role’s description allows the General
Assembly and the Security Council to shape a job profile jointly. For instance,
the renewable five-year term granted to each Secretary-General is not a written
law, but rather a customary practice supported by both bodies.
This
definition would serve as a framework upon which algorithmic candidate
assessments could be built, minimizing risks in the construction of evaluation
criteria. A robust algorithm should be able to assess candidates’ diplomatic
careers, their experience in resolving multilateral international challenges,
their alignment with the principles and purposes of the UN, their understanding
and engagement with each of its three pillars, and their administrative
leadership background. It will be critical to rely on high-quality, verifiable
data in these areas (though lack of data might itself be a meaningful signal).
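To make this concrete, the sketch below shows one way such candidate data could be structured. The criteria names, scores, and the CandidateProfile class are hypothetical illustrations, not a proposed standard; the only point is that gaps in verifiable data are surfaced rather than silently filled in.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical evaluation criteria drawn from the areas listed above.
CRITERIA = [
    "diplomatic_career",
    "multilateral_experience",
    "alignment_with_un_principles",
    "engagement_three_pillars",
    "administrative_leadership",
]

@dataclass
class CandidateProfile:
    name: str
    # Each criterion maps to a score in [0, 1], or None where no verifiable
    # data exists. Missing data is kept visible rather than imputed, since
    # its absence may itself be a meaningful signal.
    scores: dict[str, Optional[float]] = field(default_factory=dict)

    def coverage(self) -> float:
        """Share of criteria backed by verifiable data."""
        available = [c for c in CRITERIA if self.scores.get(c) is not None]
        return len(available) / len(CRITERIA)

    def flagged_gaps(self) -> list[str]:
        """Criteria with no verifiable data, reported to evaluators."""
        return [c for c in CRITERIA if self.scores.get(c) is None]


# Example: a candidate with no documented administrative record.
candidate = CandidateProfile(
    name="Candidate A",
    scores={
        "diplomatic_career": 0.8,
        "multilateral_experience": 0.7,
        "alignment_with_un_principles": 0.9,
        "engagement_three_pillars": 0.6,
        "administrative_leadership": None,  # gap surfaced, not hidden
    },
)
print(candidate.coverage())      # 0.8
print(candidate.flagged_gaps())  # ['administrative_leadership']
```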
Some
existing experiences could be leveraged. At COP26, for example, Climate Analytics used its Climate Action
Tracker to generate predictive analysis and evaluate delegates' positions.
The model incorporated variables such as voting records in UN bodies and public
speeches. By applying machine learning, it projected that the commitments
reached would lead to global warming exceeding the Paris Agreement targets.
The
approach of processing large volumes of diplomatic data could be applied to the
algorithmic evaluation of candidates for the position of Secretary-General.
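By way of illustration only, and without claiming to reproduce the Climate Action Tracker's actual methods, a first pass over such data might look like the toy sketch below, which derives two simple indicators from a hypothetical voting record and a handful of public statements.

```python
# A toy illustration: deriving two simple indicators from a candidate's
# hypothetical voting record and speeches. Identifiers and texts are invented.

from collections import Counter

def voting_alignment(votes: dict[str, str], reference: dict[str, str]) -> float:
    """Share of recorded votes that match a reference set of positions."""
    shared = [r for r in votes if r in reference]
    if not shared:
        return float("nan")  # no overlap: flag it rather than invent a score
    return sum(votes[r] == reference[r] for r in shared) / len(shared)

def theme_mentions(speeches: list[str], themes: list[str]) -> Counter:
    """Naive count of how often each theme is mentioned across speeches."""
    counts = Counter()
    for text in speeches:
        lowered = text.lower()
        for theme in themes:
            counts[theme] += lowered.count(theme.lower())
    return counts

votes = {"resolution_01": "yes", "resolution_02": "abstain"}
reference = {"resolution_01": "yes", "resolution_02": "yes"}
print(voting_alignment(votes, reference))  # 0.5

speeches = ["We must protect human rights and sustain peace.",
            "Sustainable development requires peace and stability."]
print(theme_mentions(speeches, ["human rights", "peace", "development"]))
# Counter({'peace': 2, 'human rights': 1, 'development': 1})
```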
But we
cannot be naïve. While data has the potential to strengthen the
selection process, the practical implementation of a multidimensional
algorithmic evaluation faces significant technical and political challenges.
On the technical
side alone, we must consider:
- The consistency, completeness, and formatting of diplomatic data can vary widely, requiring extensive cleaning and standardization (a minimal sketch of this step follows the list).
- Translating inherently
qualitative skills—such as being a “skilled negotiator” or exercising
intuitive judgment in crises—into quantifiable metrics suitable for
algorithmic processing is highly complex.
- The technological and
cybersecurity infrastructure needed to process such data within the UN is
equally demanding.
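As flagged in the first point, the cleaning and standardization step might look something like the sketch below; the source fields, date formats, and target schema are assumptions for illustration only.

```python
# Minimal sketch: heterogeneous, hypothetical source records are mapped onto
# a single schema before any evaluation can run.

from datetime import datetime

DATE_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%B %Y"]

def parse_date(raw: str) -> str | None:
    """Try known formats; return an ISO date, or None if unparseable."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None  # unparseable values are flagged, not guessed

def standardize(record: dict) -> dict:
    """Map one raw record onto the common schema used downstream."""
    return {
        "candidate": (record.get("name") or record.get("candidate", "")).strip(),
        "post": (record.get("position") or record.get("post") or "").strip().title(),
        "start_date": parse_date(record.get("from", "") or record.get("start", "")),
    }

raw_records = [
    {"name": "Candidate A", "position": "permanent representative", "from": "2015-03-01"},
    {"candidate": " Candidate A ", "post": "Under-Secretary-General", "start": "March 2019"},
]
print([standardize(r) for r in raw_records])
```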
At the political-technical
interface, one delicate issue would be deciding which selection variables
to include in the algorithm. Would it be feasible to incorporate those proposed by the Ad Hoc Working Group on the Revitalization of the Work of the General Assembly regarding the Secretary-General selection process?
Moreover,
as part of a broader effort toward democracy and participation, could the
algorithm optionally include variables suggested by civil society campaigns,
such as 1 for 8 Billion? In any case, incorporating these
inputs would increase the complexity, as it would require standardized
mechanisms for collecting such contributions.
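One way to keep such contributions manageable would be a standardized registry of proposed variables, recording who proposed each one and whether it is optional. The sketch below is purely illustrative: the field names and example entries are assumptions, and admitting a variable would remain a political decision.

```python
# Purely illustrative registry of selection variables.

from dataclasses import dataclass

@dataclass(frozen=True)
class SelectionVariable:
    key: str          # machine-readable identifier
    description: str  # human-readable definition
    source: str       # e.g. "member_states" or "civil_society"
    optional: bool    # optional variables can be excluded without touching the core set

REGISTRY = [
    SelectionVariable("multilateral_experience",
                      "Track record resolving multilateral challenges",
                      source="member_states", optional=False),
    SelectionVariable("gender_balance_commitment",
                      "Publicly stated commitment to gender balance in appointments",
                      source="civil_society", optional=True),
]

core = [v.key for v in REGISTRY if not v.optional]
print(core)  # ['multilateral_experience']
```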
Another
sensitive issue at the political-technical interface would be determining the
weighting of variables. What should carry more weight in selecting the
Secretary-General: experience within the UN, a strong prior commitment to human rights and demonstrated adherence to UN principles, or the ability to resolve complex international situations? These decisions should be made
through political deliberation, not left to programmers, and only then
implemented technically.
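One simple way to respect that separation is to keep the weights in a configuration that only political deliberation can change, with the code doing nothing more than applying them. The sketch below assumes three placeholder criteria and invented weights; none of the numbers are proposals.

```python
# The weights live in a politically agreed configuration; the code only
# applies them. All numbers below are placeholders, not proposals.

AGREED_WEIGHTS = {
    "un_experience": 0.25,
    "human_rights_record": 0.35,
    "crisis_resolution": 0.40,
}

def weighted_score(scores: dict[str, float | None],
                   weights: dict[str, float] = AGREED_WEIGHTS) -> float | None:
    """Apply the agreed weights; refuse to score if any weighted criterion lacks data."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("Weights must sum to 1; renegotiate rather than renormalize silently.")
    if any(scores.get(k) is None for k in weights):
        return None  # incomplete data is surfaced to evaluators, not papered over
    return sum(weights[k] * scores[k] for k in weights)

print(weighted_score({"un_experience": 0.7,
                      "human_rights_record": 0.9,
                      "crisis_resolution": 0.6}))  # ~0.73
```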
Additionally,
the algorithm should be able to model a candidate’s potential impact on the
global balance of power and their ability to initiate and sustain a much-needed
UN reform process, currently embodied in the UN 80 initiative, the
latest in a long line of mostly unsuccessful reform efforts in recent decades.
Two illustrative cases come to mind:
- In 2022, Meta developed Cicero, an AI model that simulated negotiations in the game Diplomacy, achieving near-human levels of alliance-building. A similar system could project how a given candidate might influence negotiation dynamics in the Security Council and General Assembly within a prospective global governance exercise.
- Using the same game, in 2025, Alex Duffy had 18 LLMs compete against each other (including various versions of ChatGPT, Claude, DeepHermes, DeepSeek, Gemini, Grok, Llama, Mistral, and Qwen) with striking results: “The top-performing models learned to lie, deceive, and betray” their peers.
These
examples remind us of the need to ensure impartiality and integrity in any
algorithm informing the Secretary-General selection process. This may require
restricting its use to an independent, multicultural panel of experts, subject
to regular external audits, similar to those
applied in UN peacekeeping operations.
Another
issue of utmost importance, given the political sensitivity of the matter, is
that any algorithm developed for this purpose must be transparent
and explainable by design. We already know the damage caused by the lack of such
qualities. Consider, for example, the case that led to the fall of the
Dutch government after a scandal
involving an algorithm used to detect childcare benefit fraud. For this
reason, whatever its final form, the algorithm must document its criteria, data
sources, and analytical methods, and should be aligned with the UNESCO 2021 Recommendation on the
Ethics of Artificial Intelligence.
Although
the algorithm’s results may remain confidential during the selection process,
transparency requires that at least a summary report be published afterward.
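What such documentation might look like in practice is sketched below: a machine-readable record of the criteria, weights, data sources, and method behind one candidate's assessment, from which a reduced summary could be published after the process. The structure and field names are assumptions, not an existing UN format.

```python
# Illustrative only: a machine-readable assessment record and the reduced,
# publishable summary that could be derived from it.

import json
from datetime import date

assessment_record = {
    "candidate": "Candidate A",
    "generated_on": date.today().isoformat(),
    "criteria": {
        "un_experience":       {"weight": 0.25, "score": 0.7, "sources": ["public CV"]},
        "human_rights_record": {"weight": 0.35, "score": 0.9, "sources": ["public statements", "voting records"]},
        "crisis_resolution":   {"weight": 0.40, "score": 0.6, "sources": ["case studies"]},
    },
    "method": "weighted sum over politically agreed criteria",
    "data_gaps": [],
    "audit": {"last_external_audit": None, "auditor": None},
}

# The full record can stay confidential during the process; a reduced summary
# (criteria, weights, method, gaps, but no raw scores) is publishable afterward.
summary = {
    "criteria_and_weights": {k: v["weight"] for k, v in assessment_record["criteria"].items()},
    "method": assessment_record["method"],
    "data_gaps": assessment_record["data_gaps"],
}
print(json.dumps(summary, indent=2))
```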
Ultimately,
the main difficulty lies not in obtaining data or in the algorithm’s
technical implementation, but in the deeply political nature of the process
itself, which involves international negotiations and national interests, especially
those of the five permanent members of the Security Council. These states might
view the development of such an algorithm, as discussed in this blog, as an
attempt to constrain their decision-making power or, worse, as a threat to
their influence over the selection of the Secretary-General.
Achieving
consensus among Member States on the algorithm’s parameters, the weighting of
variables, and the very definition of the Secretary-General’s mandate is the Gordian
knot that must be untied before moving forward. The legitimacy of the whole exercise will hinge on whether the Security Council's decision garners support from the General Assembly.
As a
result, designing and implementing an algorithm to support the selection of the
UN Secretary-General would be far more of a political achievement than a
technological milestone. It would require political agreements on both the
Secretary-General’s “job description” and the technical-political interface
we’ve described.
One
final, yet fundamental, ethical question remains: Can the UN adopt such
technological tools without compromising its legitimacy and character? AI cannot replace political intuition, the ability to build trust, or interpersonal negotiation, though it can be a powerful tool in supporting them.