AI, Governance, and Sustainable Development

 By Javier Surasky

The Summit of the Future, scheduled for September 2024, aims to strengthen multilateral governance through two main approaches:

  • Enhancing the governance of multilateralism by strengthening the role of the UN as its core: This approach focuses on issues of institutional frameworks, processes, and action tools.
  • Updating the multilateralism framework to address current challenges: This includes discussions on a platform to respond to global crises, incorporating future generations and Artificial Intelligence (AI).

These approaches are interdependent. The new governance must respond to the expanded framework of multilateralism, while new issues will require a more agile and efficient multilateralism to address them.

We are particularly interested in updating the multilateralism framework because, although its emerging issues have clear contours, the exercise must be based on less debated premises. Whether or not the Summit of the Future succeeds, we face a fundamental shift in the two dimensions that have ordered human society since its inception: time and space.

International relations have always been based on the premise that events occur in a defined place and time, even if their duration is indefinite. For example, while we might not know when a war will end, we can precisely identify where it occurs and when it starts, applying norms to territorial actors based on this knowledge.

The advent of AI and the inclusion of future generations in multilateral management challenge this logic: AI is pushing us toward actions without a determinable location, a phenomenon that began with the internet, while future generations disrupt the temporal scale, forcing us to manage a future we do not yet know in the present.

Can we create efficient governance for a multilateralism where time and space have been reconfigured? This is the challenge we face. Given the breadth of the topic, we will focus solely on AI.

Artificial Intelligence: Some Clarifications

There are multiple possible definitions of AI. Here are a few, representative of different approaches:

  • John McCarthy: AI is the "science and engineering of making intelligent machines, especially intelligent computer programs," and is related to using computers "to understand human intelligence."
  • George Luger: AI is "the branch of computer science concerned with the automation of intelligent behavior."
  • Marvin Minsky: AI is "the science of making machines do things that would require intelligence if done by humans." (quoted in Geist & Lohn)

The first definition focuses on the technical element of AI (making intelligent machines and software), the second emphasizes the automation of intelligent processes, and the third centers on machines performing tasks that would require intelligence if done by humans.

"Can machines think?" Turing asked in 1950. This question resurfaced in the 1990s, when advances in computing power overcame previous computational limits and an explosion of data became available to train these machines (the World Wide Web became public in 1991 and connected 10 million terminals within five years). Data is the "flour for the bread" of AI.

Significant events impacting the general public began to occur, from Deep Blue defeating Garry Kasparov in chess in 1997 to the entry of AI into households with the Roomba vacuum cleaner, launched in 2002. Since then, its integration into daily life has not stopped: Google Maps, virtual assistants on mobile phones, Netflix, and many other AI services are part of our daily lives.

With AI's growth, risks have also increased, from mass espionage and the buying and selling of private data to the safety of autonomous vehicles and the use of "intelligent" weapons.

AI is a "tool" without inherent intentionality, which is provided by its creator or user. The user's intention, however, is conditioned by the tool's characteristics: no one would use a hammer to remove a screw.

This last element makes AI regulation essentially a regulation of human behavior.

AI and Sustainable Development

AI and sustainable development share the characteristic of being addressed in the present with direct effects on the future, which must be considered in current decision-making processes.

If coordinated, AI can become a driver of sustainable development, but the current situation is concerning:

  • Digital technologies accelerate the concentration of economic power among an increasingly small group of business elites: "The combined wealth of tech billionaires, $2.1 trillion in 2022, exceeds the annual GDP of more than half of the G20 economies" (A Global Digital Compact, Policy Brief No. 5 by the Secretary-General).
  • There is competition among companies and states to gain the advantages AI provides in political, economic, and military areas.
  • The lack of governance framework for digital technologies results in the absence of "basic protection systems; today, it is harder to bring a stuffed toy to market than an AI chatbot" (A Global Digital Compact, Policy Brief No. 5 by the Secretary-General).

The effort to align AI with sustainable development faces several problems. First, while sustainable development is primarily the responsibility of states, the main actor in AI is the private sector, which is regulated mainly by optional codes of conduct. Moreover, companies, even transnational ones, lack international legal personality: they are beyond the reach of public international law.

On the other hand, the impacts of AI on achieving sustainable development have been extensively studied (Vinuesa et al, 2020; van Wynsberghe, 2021; Sætra, 2021 and 2022). In all cases, it is clear that AI is a significant force in driving or detracting from sustainable development.

AI's power over sustainable development is such that its impacts can accelerate achievement of the goals but also intensify the trade-offs between them: AI could improve the quality of education while increasing inequities among students.

Thus, decisions regarding AI and Sustainable Development will profoundly impact people's lives, especially children, youth, and future generations!

Regulating AI: Initial Steps

In 2015, against the backdrop of international negotiations leading to the adoption of the 2030 Agenda, a group of AI researchers and social scientists founded "AI4Good," aiming for "a world where we can harness the full potential of emerging technologies to create positive social change."

Two years later, AI experts in California adopted the "Asilomar Principles" for AI governance, consisting of 23 principles grouped into three axes: research, ethics and values, and long-term issues.

The list of initiatives in this direction has grown, including the "Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations" proposed by the European "AI4People" initiative in December 2018, the "Ethics Guidelines for Trustworthy AI" adopted by the European Commission's High-Level Expert Group on AI in April 2019, and the "Declaration of Human Rights for a Digital Environment" presented by the University of Deusto in November 2018.

In March 2023, en route to the Summit of the Future, over 1,000 AI experts and tech industry executives published an open letter calling for an immediate pause, for at least six months, in training the most powerful AI systems like GPT-4, citing "profound risks to humanity."

The topic gained momentum within the multilateral framework with its inclusion in the Summit of the Future and the decision to adopt a Global Digital Compact to "establish principles, objectives, and actions to promote an open, free, secure, and human-centered digital future anchored in universal human rights and to achieve the SDGs" (A Global Digital Compact, Policy Brief No. 5 by the Secretary-General).

The Global Digital Compact

Negotiations for the Global Digital Compact are underway under the co-facilitation of Sweden and Zambia, the latter having replaced Rwanda. The initial draft and its first revision, published on May 15, 2024, are structured as a brief preamble followed by four chapters: objectives, principles, commitments and actions, and monitoring and review.

Although still a work in progress, this first revision sets out five objectives:

  1. Closing all digital gaps and accelerating progress on all Sustainable Development Goals (SDGs);
  2. Expanding inclusion and benefits of the digital economy for all;
  3. Fostering an inclusive, open, safe, and secure digital space that respects, protects, and promotes human rights;
  4. Promoting responsible and equitable international data governance;
  5. Strengthening international governance of emerging technologies, including AI, for the benefit of humanity.

Each objective is assigned a series of commitments aligned with the SDGs, aiming for completion by 2030.

Proposals for Designing AI Governance for Sustainable Development

Let's start by assuming some facts on which to base the design of AI governance for sustainable development:

  • We lack experience regulating a field like AI. Any attempt to establish governance must be subject to a continuous cycle of analysis and improvement. As Julia Stoyanovich, director of the Center for Responsible AI at New York University, argues, any regulation will be better than none: "Until we try to create regulation, we won't learn how to do it."
  • Traditional forms and processes of governance cannot keep up with the pace of AI change nor foresee its consequences.
  • Expert knowledge is exceptionally important in decision-making, to a degree comparable to its role in the fight against climate change.
  • The breadth of AI applications and the private sector's leading role demand that any governance scheme be open to multiple actors.

Based on these pillars, we propose an initial list of essential and urgent steps to be taken globally and regionally:

At the global level:

  • The main global forum for debating sustainable development is the United Nations High-Level Political Forum, which is responsible for providing political guidance on the implementation of the SDGs. Although the Forum has not been successful in this task (Cepei, 2023), it remains the only global multilateral space with this mission. Strengthening the Forum's role as a political leader and including a permanent chapter on AI in its discussions will reinforce its connection to sustainable development.
  • The Technology Transfer Mechanism established in the 2030 Agenda should include a chapter on AI, with measurable goals for knowledge, equipment, and capacity transfer.
  • The General Assembly should request the International Law Commission to start work on an international treaty regulating AI's most critical elements. Concurrently, UN system agencies should intensify their work on studying and approving AI guidelines in their specific fields.
  • The Economic and Social Council should make AI a permanent agenda item.

At the regional level:

  • The regional sustainable development forums that support the 2030 Agenda review process should begin including AI and sustainable development on their agendas.
  • The UN regional economic commissions should initiate research and systematization of AI practices and rules in regional contexts, using pre-agreed models for analysis and reporting to avoid duplication and enhance effectiveness.
  • Regional integration processes and treaties should include specific chapters regulating AI design, transfer, and use, making sustainable development a pillar of their construction.