By Javier Surasky
The year 2024 has seen intense activity in developing knowledge and agreements on Artificial Intelligence (AI).
- The General Assembly adopted its first two resolutions on AI: "Enhancing international cooperation on capacity-building of artificial intelligence" (A/RES/78/311, sponsored by 28 Global South countries) and "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development" (A/RES/78/265, sponsored by 55 member states from both North and South, including the United States and China).
- Negotiations for the Global Digital Compact, which will provide a framework for international cooperation on AI and digital technologies, are going through a difficult phase (see this post), but the Compact is expected to be adopted in September at the Summit of the Future.
- Support among states for an international treaty on Autonomous Weapons Systems (AWS) is growing, bolstered by the conference "Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation" held in Vienna last April. Over 30 governments have endorsed the chair's summary, which states: "There is strong convergence that AWS that cannot be used in accordance with international law or that are ethically unacceptable should be explicitly prohibited. All other AWS should be appropriately regulated." The Secretary-General has called for an international treaty on the matter to be concluded by the end of 2026.
- On February 5-6, the Second UNESCO Global Forum on AI Ethics convened in Kranj, Slovenia, where the Global AI Ethics and Governance Observatory was officially launched. The Observatory promotes national assessments based on the AI Readiness Assessment Methodology (RAM), published in 2023, which feed into national AI socio-technological profiles. These profiles are currently available for Brazil, Chile, Gabon, Mexico, Morocco, and Senegal, with profiles for 50 more countries in preparation.
- The final document of the 2024 G7 Summit held in Apulia, Italy, dedicates an entire chapter to AI, including a commitment to promote "safe, secure, and trustworthy AI," following the line opened by the final document of their 2023 Hiroshima meeting, where the G7 declared their determination to "drive international discussions on inclusive governance and interoperability of AI to achieve our shared vision and goal of trustworthy AI."
- In May 2024, the AI Seoul Summit and AI Global Forum convened in Seoul, a meeting "among allies" attended by Germany, Australia, Canada, Korea, the United States, France, Italy, Japan, the United Kingdom, Singapore, and the European Union. The "Seoul Declaration for Safe, Innovative, and Inclusive AI" was adopted, calling for "enhanced international cooperation to advance AI safety, innovation and inclusivity to harness human-centric AI to address the world's greatest challenges, to protect and promote democratic values, the rule of law and human rights, fundamental freedoms and privacy, to bridge AI and digital divides between and within countries, thereby contributing to the advancement of human well-being, and to support practical applications of AI including to advance the UN Sustainable Development Goals" (§5).
- Also in May, the International Telecommunication Union (ITU) held its fifth AI for Good Summit, where governments, experts, civil society, and the private sector gathered to discuss and present some of the most striking AI-enabled innovations in support of sustainable development. In 2024, we saw everything from humanoid robots and bionic pets to brain interfaces for controlling prosthetics and semi-autonomous rescue teams. The meeting made it clear that AI and robotics have a third companion: biomimicry, the imitation of nature in creating efficient designs.
- With little coherence among them, more and more countries are introducing national legislation on AI. Notably, the European AI Act, adopted by the European Parliament, entered into force on August 1, 2024, applying to all European Union member countries. However, the provisions prohibiting AI systems deemed to present an unacceptable risk will apply from February 2, 2025, and those covering general-purpose AI models from August 2 of that year.
- In August, the excellent report "Mind the Gap," prepared by the International Labour Organization and the Office of the Secretary-General's Envoy on Technology, was released, addressing AI gaps and their impact on employment.
- The adoption of the text of the United Nations Convention against Cybercrime on August 8, 2024, though it does not explicitly reference AI, is a significant step forward. The approved text incorporates rules on data use, with special provisions for personal data, deepfakes (calling for their criminalization), and revenge pornography, as well as powers for governments to demand information from computer service providers, among other elements closely related to AI. Despite criticism of the agreement by human rights organizations and a call from tech companies to reject it, the text was adopted by consensus after a series of last-minute amendment attempts.
With so much happening simultaneously, we have the evidence we need to establish strong global AI governance. Although debates will continue, we already have a good understanding of the principles, the vocabulary, the main risks and how to address them, the critical areas, the mapping of stakeholders and their responsibilities, and a more than acceptable body of theoretical work for facing the questions posed by a world in which we will coexist with AI. The foundations for an agreement on its global governance have been laid.
Today it is sobering to recall that to adopt the Universal Declaration of Human Rights we had to endure the Holocaust; that to move toward phasing out fossil fuels we had to approach uncertain environmental limits; and that to create an international organization that would protect future generations from the scourge of war we had to go through two world wars.
If leaders have learned anything from history, it will not be necessary to wait for horror to drive us toward the increasingly urgent task of strong global AI regulation, as has happened so many times in other fields.