By Javier Surasky
In a hyperconnected world of decentralized production chains and integrated international trade, establishing internationally shared standards for management and product quality is key. Have you ever encountered plugs or battery chargers for your devices in different shapes, and wondered why on earth manufacturers don't agree on a single model? If so, what you are asking for is standardization, and that is exactly the kind of problem it solves.
The establishment of common, knowledge-backed standards has a strong impact not only on people's daily lives but also on sustainable development, since it shapes how goods and services are produced, traded, and consumed.
The concern with establishing universally accepted standards is far from new: the International Electrotechnical Commission, which still operates today, was created as early as 1906. Condensing a long history greatly, we can jump to 1946: the world's economic reorganization after World War II, which included the creation of the United Nations, the IMF, and the World Bank, also resulted in the establishment of the International Organization for Standardization, better known as "ISO".
Today, ISO is a technical organization led by experts working in thematic committees. It is not a "state" organization as we usually understand the term: its 172 members are national standardization bodies.
ISO's focus and agility allow it to react quickly to changes and challenges whose solutions involve, at least to some extent, standardization. Taking the year 2000 as a reference point, the following are just a few examples:
- In 2005, ISO/IEC 27001 established requirements for information security management systems.
- In 2010, ISO 26000 addressed the issue of social responsibility.
- In 2011, ISO 50001 established requirements for energy management systems to improve energy efficiency and performance.
- In 2016, ISO 37001 became the first international standard on management systems for combating bribery.
- In 2019, ISO 56002 focused on innovation management systems.
- In 2020, ISO/PAS 45005 on workplace safety during the COVID-19 pandemic was approved.
In the field of AI, ISO/IEC 22989, on information technology and artificial intelligence, was introduced at the beginning of 2022, establishing standard concepts and terminology for the field. In December 2023, ISO/IEC 42001 [*] was adopted, setting requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system so as to ensure its responsible development and use.
The standard is structured into 10 chapters:
1. Scope: Establishes the purpose and limits of application of the standard. Its stated purpose is to set requirements and provide guidance for establishing, implementing, maintaining, and continually improving an AI management system in the context of an organization, so that these processes are carried out safely and responsibly, taking into account stakeholders' obligations and expectations.
2. Normative References: Lists the standards and documents that should be consulted when implementing ISO/IEC 42001.
3. Terms and Definitions: Explains how the most relevant terms used in the standard are to be understood. For terms not expressly included in this chapter, there is a general redirection to the definitions in ISO/IEC 22989, adopted in 2022.
4. Context of the Organization: Concerns understanding the organization that will manage the AI system, its external and internal issues, and the roles and responsibilities within the AI management system.
5. Leadership: Focuses on top management, the institution's policies and objectives when deploying an AI system, and the allocation of resources and responsibilities across its life cycle.
6. Planning: Presents the requirements for planning actions to address risks and opportunities associated with AI systems.
7. Support: Covers the resources necessary to implement and maintain the AI management system.
8. Operation: Sets requirements for the effective implementation of processes and controls related to AI management, including data-related issues.
9. Performance Evaluation: Sets requirements for monitoring, measuring, and evaluating the performance of the AI management system.
10. Improvement: Establishes requirements for the continuous improvement of the AI management system, such as corrective, preventive, and improvement actions based on performance evaluations.
Four annexes address mainly operational elements for implementing what the standard defines.
It is especially interesting to look at some of the definitions the standard provides:
A stakeholder is a "person or organization that can affect, be affected by, or perceive itself to be affected by a decision or activity." The place given to perception, beyond actual impact, considerably broadens the frame of reference that must be considered in AI system management processes.
Risk is treated not only as a possible effect but also as a lack of certainty about the effects of implementing the AI system; hence control is the measure that "maintains and/or modifies risk."
Information security is defined as the "preservation of confidentiality, integrity and availability of information," and a note adds that other properties of information, such as authenticity, accountability, non-repudiation, and reliability, may also be involved. Along the same line, data quality is presented as the "characteristic of data that the data meet the organization's data requirements for a specific context," meaning that it necessarily requires a context-specific analysis.
But the most relevant definition, which we must look for in the 2022 standard, is that of artificial intelligence, understood as the "research and development of mechanisms and applications of AI systems." The standard explains that this research and development can take place in various fields, among them computer science, data science, the humanities, mathematics, and the natural sciences. To complete the picture, an "AI system" is defined as an "engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives," clarifying that it can use various AI techniques and approaches to develop a model and to represent data, knowledge, processes, and so on, which can then be used to perform tasks.
The definition of an AI system has some elements we want to highlight:
- It does not include references to systems having to be autonomous, an element that should be understood as a "characteristic of a system that is capable of modifying its intended domain of use or goal without external intervention, control or oversight."
- It also does not refer to the creation of knowledge, an element defined as "abstracted information about objects, events, concepts or rules, their relationships and properties, organized for goal-oriented systematic use." This departs from the everyday understanding of the word, which the Merriam-Webster dictionary defines as "the fact or condition of knowing something with familiarity gained through experience or association." ISO/IEC 22989 addresses the gap with an explanatory note stating that knowledge in the AI domain does not imply a cognitive capacity or a cognitive act of understanding.
- The definition of AI given by ISO is close to that of John McCarthy, a pioneer in the field, who called AI the "science and engineering of making intelligent machines, especially intelligent computer programs," emphasizing its technical element. It moves away from Lasse Rouhiainen's view of AI as "the ability of machines to use algorithms, learn from data and use what they have learned in decision-making as a human would" (2018:17), which highlights the automatic mechanization of processes, and even from Marvin Minsky, another father of current AI, who defined it as "the science of making machines do things that would require intelligence if done by humans" (quoted in Geist and Lohn, 2018:9), where the focus is on achieving "humanly intelligent" results.
These definitions must be taken very seriously. Craig Murphy and Joanne Yates (2009:2) tell us that:
ISO has, in fact, taken on some of the tasks that have proven too difficult for the League of Nations or the UN. These include environmental regulation, where the voluntary ISO standard, ISO 14000, may have had more impact than any of the UN-sponsored agreements of the 1990s, and questions of corporate responsibility for human rights (including core labor rights), where the new ISO 26000 could prove more successful than the UN-sponsored Global Compact.
The mere suggestion by experts that this may be so (I can neither confirm nor deny it, though I am inclined to believe it is true) means that we must fully take ISO's work into account when thinking about AI governance: not as an imposition on states, but as an element of weight in a field where the private sector holds leadership and where states' capacity to impose decisions on large technology companies is limited, like it or not, by the very dynamics states themselves have imprinted on the current world order.
Note
[*] The "ISO/IEC" designation (which we also saw in the 27001 standard on information security, from 2005) means that the standard is a joint development of ISO and the International Electrotechnical Commission.
References
Murphy, C. and Yates, J. (2009). The International Organization for Standardization: Global Governance through Voluntary Consensus. Routledge.
Rouhiainen, L. (2018). Inteligencia artificial: 101 cosas que debes saber hoy sobre nuestro futuro. Planeta.
Geist, E. and Lohn, A. (2018). How Might Artificial Intelligence Affect the Risk of Nuclear War? RAND Corporation.