The Unforgivable Silence of the Committee on Economic, Social and Cultural Rights on AI: General Comment No. 27
By Javier Surasky
On 6 November 2025, the Committee on Economic, Social and Cultural Rights (CESCR) adopted its General Comment No. 27 (2025) on "economic, social and cultural rights and the environmental dimension of sustainable development." It is a highly relevant document, but one with an unacceptable gap.
The Comment can be seen as a normative step forward, as it addresses the right to a clean, healthy, and sustainable environment in an interconnected manner within the framework of human rights. But that progress is accompanied by a structural deficiency: the document completely omits the role of digital technologies, particularly artificial intelligence (AI).
When governance of AI for Good or AI for Sustainable Development (AI for SD) is discussed, the centrality of human rights norms in the design of regulation and institutions for these new technologies is consistently emphasized. How can we move in that direction if the human rights field itself fails to engage with those technologies?
Digital technologies hold a central position in environmental governance and the global economy, and they pose new human rights challenges ranging from privacy protection to questions of identity. Yet their absence from the Comment is almost total. This omission creates an unjustifiable conceptual vacuum that undermines the drafters' effort to provide a holistic view of environmental crises.
AI consumes massive amounts of energy and water, drives demand that fuels highly polluting mining chains, contributes to climate governance decisions, and has its models and algorithms applied to the study of climate phenomena that would otherwise be difficult to analyze. AI also affects States' capacity to plan for, mitigate, and adapt to the effects of climate change. Meanwhile, the electronic waste (e-waste) generated by digital technologies is one of the fastest-growing sources of pollution worldwide.
The General Comment discusses unsustainable economic models, the impacts of private companies on human rights and the environment, and structural inequalities. Faced with all this, we must ask: how can we explain the CESCR's silence on digital technologies and AI?
While it criticizes unsustainable production and consumption models, the Comment overlooks issues such as the manufacture of digital hardware, which depends on extractivism that has sparked political and armed conflicts over natural resources, and the dangerous geopolitical competition between the United States and China for access to lithium, cobalt, and rare earth elements.
Nor does the Comment address algorithmic risks that threaten the enjoyment of economic, social, and cultural rights. Automated decision-making already influences the distribution of climate assistance, the assessment of environmental hazards, the surveillance of territories inhabited by Indigenous Peoples and environmental defenders, and the prioritization of mitigation measures.
These algorithms, like all others, are subject to biases that erase knowledge and entire communities—often those who have historically been "guardians of the Earth"—and thus reinforce epistemologies of environmental inequality.
The General Comment outlines no State obligations regarding any of these elements: it does not require digital or algorithmic impact assessments, nor does it recognize that contemporary climate governance relies on a digital infrastructure lacking global regulation—one that leaves powerful transnational corporations free to "drive" the development of AI under logics that rarely prioritize human rights.
The Comment's silence is even more striking given that AI and big data are already integral to climate scenario modelling, forest management, biodiversity monitoring, disaster prediction, and many other matters closely related to those addressed in the document. By failing to acknowledge the structural role of digital technologies in these processes, the CESCR ends up viewing environmental governance through the lens of the past, disregarding the accelerated digitalization that frames all current ecological and human rights issues. Paradoxically, the CESCR—whose role is to interpret the International Covenant on Economic, Social and Cultural Rights in light of contemporary challenges—turns a blind eye to what is arguably the most transformative vector of sustainable development and the most pressing form of environmental inequality today.
To make matters worse, this omission is out of step with recent trends across various UN bodies and processes that have begun to explicitly integrate the technological dimension: the Pact for the Future, the UN Global Initiative on AI Governance, and reports of the Special Rapporteur on Human Rights and the Environment.
The consequences of this gap are far-reaching: it creates a landscape in which emerging risks to economic, social, and cultural rights remain unaddressed; leaves States without guidance on regulating the behavior of technologically powerful actors in human rights and environmental matters; and introduces inconsistency within the UN system itself. Equally serious, it weakens those advocating for AI development grounded in respect for human rights. Why should AI actors pay attention to human rights if the system's leading bodies do not pay attention to AI?
In the wake of General Comment No. 27, it has become imperative for the CESCR to produce guidance, whether an interpretative note or a new General Comment, that integrates the environmental footprint of digitalization, the responsibility of technology companies, environmental algorithmic justice, and limits on the digital surveillance of environmental defenders in vulnerable territories, while also ensuring that algorithms involved in environmental decision-making are auditable, transparent, and subject to public oversight.
General Comment No. 27 is, in many ways, a solid document capable of driving new environmental progress. But on digital technologies, it falls short of standards already introduced at the regional level.
The Inter-American human rights system, for example, has addressed the issue in cases such as Lhaka Honhat v. Argentina (2020) and Berta Cáceres (2021), and in 2025 the Inter-American Court issued its Advisory Opinion OC-32/25 on "Climate Emergency and Human Rights."
The European system has the most precedents. A notable example is the recent Verein KlimaSeniorinnen Schweiz and Others v. Switzerland judgment of 9 April 2024, in which the Grand Chamber of the European Court of Human Rights held that Switzerland's failure to act on climate change violated the rights to private and family life and to access to justice. A few years earlier, in 2020, the Council of Europe had already adopted its Recommendation on AI and Human Rights, which establishes that AI used in environmental contexts must be subject to human rights impact assessments.
There can be no just, rights-based environmental transition if the role of AI in producing, reproducing, and deepening environmental inequalities is ignored. Failing to consider these elements is a strategic error that weakens international human rights law's capacity to confront an increasingly digitalized environmental landscape.
The CESCR must quickly understand that, without firm safeguards, AI could turn the economic, social, and cultural rights of all into algorithmic privileges for a few, becoming at once a catalyst for human rights violations and for planetary devastation.
