International Law and the Quest to Define Artificial Intelligence

By Javier Surasky

To regulate Artificial Intelligence (AI) globally, we first need a shared understanding of what we regulate. This is the critical starting point for building ethical AI governance that supports sustainable development. Time is of the essence—if we don’t act swiftly, the consequences could be unpredictable. The clock is ticking, but we’re still in the game.

AI has emerged as a transformative force in the 21st century, reshaping social and economic structures at an unprecedented pace. Its spread into every corner of life poses profound questions for the international legal system, which was designed to govern territorial entities and now must grapple with autonomous, data-driven, and borderless technologies.

The series Black Mirror often highlights AI’s darker potential. Episodes like “White Bear” (Season 2, Episode 2) depict dystopian worlds where humanity’s sensitivity has eroded, making it a must-watch for anyone interested in law and ethics. Yet the most unsettling stories feel uncomfortably close to our reality: “Common People” (Season 7, Episode 1) shows a world where corporations wield unchecked power over cutting-edge digital technologies, mirroring our current concerns.

This post will explore the first steps toward legally defining AI and why it matters for international law.

International law evolves in response to society’s needs. The changes AI brings (new ways of connecting, producing goods, and communicating) reshape its foundations. While this isn’t the first technological revolution international law has faced (think steam engines, electricity, aviation, or the internet), AI is different. Its development is decentralized, driven by a distributed ecosystem that’s tough for any single state to control.

The first step toward global regulation is agreeing on what AI is. This process has begun, but we’re far from a universal definition.

Two competing approaches are stalling progress:

- Risk-first advocates prioritize controlling AI’s dangers over accelerating its development. This camp, led by the European Union and prominent global experts, calls for guardrails to ensure safety and ethics.

- Market-driven proponents, primarily in the United States, argue for letting innovation run free, believing that state regulation would only delay AI’s benefits.

Caught in this tug-of-war, early international definitions of AI lean heavily on technical perspectives. For example, the 2022 ISO/IEC 22989 standard defines AI as “AI research and development of mechanisms and applications of AI systems” (3.1.4) and an “artificial intelligence system” as an “engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives”. While useful, these definitions lack governmental backing, legal weight, and consideration of the ethical and social dimensions critical for defining rights and responsibilities.

Other attempts, though non-binding, offer more nuance. The OECD’s 2019 Recommendation on Artificial Intelligence (OECD/LEGAL/0449) describes an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.” It highlights machines, autonomy, and specific outputs like predictions or decisions.

UNESCO’s 2021 Recommendation on the Ethics of AI takes a broader view, defining AI systems as “information-processing technologies that integrate models and algorithms to produce capabilities for learning and performing cognitive tasks, leading to outcomes like prediction and decision-making in material and virtual settings.” Here, the focus shifts to information processing, learning, and varying degrees of autonomy.

The European Union broke new ground with the first legally binding definition in its AI Act (Regulation (EU) 2024/1689), adopted on June 13, 2024. It defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Article 3(1)).

This definition emphasizes machines, autonomy, and inference as the core of AI’s output generation.

Nationally, definitions vary widely. The United States, in its National AI Initiative Act of 2020, mirrors the OECD, describing AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to: (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action” (Section 5002(3)).

The UK’s 2023 AI White Paper “defines” AI “by reference to the two characteristics that generate the need for a bespoke regulatory response”, namely “adaptivity” and “autonomy” (Section 3.2.1).

China lacks a unified legal definition, but Shanghai’s 2022 AI Industry Regulation refers to AI as the “theories, methods, technologies, and applications that use computers and computer-controlled machines to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to achieve optimal results” (Article 2).

Brazil has not yet passed an AI law, but its Senate has approved a bill on the use of AI. The bill’s Article 2.1 defines an “artificial intelligence system” as a “computer system, with varying degrees of autonomy, designed to infer and achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, through input data from machines or humans, to produce predictions, recommendations or decisions that can influence the virtual or real environment”.

These definitions share some common threads—autonomy, machine-based systems, and outputs like predictions—but diverge on key details. Some mention specific technologies like machine learning, while others focus on hardware versus software or the scope of autonomy and outputs.

Why does this heterogeneity matter? Because mapping it is the starting point for building a shared definition of AI, which is in turn necessary for tackling pressing issues such as AI safety, impacts on human rights, humanitarian law, sustainable development, and environmental protection.

As we look to the future, one thing is clear: regulating and governing AI is the most urgent challenge facing international law today. A unified definition is our first step toward ensuring AI serves humanity ethically and sustainably.