The legal fact as a "non-place": AI challenges International Law

By Javier Surasky


Public International Law in the Face of the Blurring of Facts

Public International Law (PIL) was built upon a fundamental premise: facts occur somewhere and can be attributed to a subject. As a consequence of this localization in space and time, they can generate normative consequences. Territory, jurisdiction, sovereignty, and responsibility form part of the same conceptual framework that links the possibility of anchoring a fact to a space, an action to an author, and the allocation of consequences to a causal chain.

The expansion of artificial intelligence (AI) pushes this framework to its limits, not because it introduces a new type of agent—we have already discussed in previous blog posts that AI lacks the capacity to decide or act in a legal sense (see, for example, "AI in Three Classical Myths")—but because it reconfigures the material and organizational conditions under which a fact is produced.

Distributed training, transnational infrastructures, algorithmic supply chains, automated decisions, and cross-border effects compound the decentralization of the cloud, forcing PIL to rethink its answers to basic questions: Where did the fact occur? Who is its author? Who has jurisdiction? To whom can responsibility be attributed?

Seeking answers to these questions confronts us with a reality that lies at the heart of the encounter between AI and PIL: "AI systems highlight the shortcomings of existing legal frameworks and call for both innovation and discipline in their development as they progressively extend into fields which have historically been subject to human decision-making" (Çela et al., 2026:167).

We agree with Çela et al., but we believe that AI exposes a more profound crisis than the one identified there, one related to PIL's inability to operate juridically on the "fact" when AI intervenes. The legal problem does not lie, as some believe, in defining whether AI is an imputable subject, but rather in the delocalization of the fact it produces and the consequent blurring of that fact.

The Fact Itself: Localization, Attribution, and the Assignment of Meaning

PIL has not paid sufficient attention to the fact except in relation to its legal consequences, although it is possible to distinguish the fact from the act, the conduct, and the factual circumstances.

The fact—which is the focus of this blog post—is not a simple empirical event, but rather a "legal construct," insofar as it must produce consequences that allow PIL to consider it "legally relevant"; that is, the international legal order itself must have granted it the capacity to produce legal effects, such as the creation, modification, or extinction of rights and obligations.

Alongside it appears the "act," which differs from the fact insofar as it is a manifestation of will intended to produce legal effects. At the same time, conduct refers to the practice of subjects with the capacity to create legal norms. Both may take the form of actions or omissions.

Finally, factual circumstances are those elements of reality that surround the fact, the act, or the conduct and contribute to determining the "meaning and scope of a given provision and, generally, its applicability to specific factual elements" (Casanovas & La Rosa, 2018:328).

The internationally relevant fact was historically articulated around three axes: localization, attribution, and normative qualification. Even in complex contexts such as cross-border operations, environmental harm, activities on the high seas, or in outer space, PIL maintained these reference points through legal fictions, presumptions, or special regimes. Territory, for example, may be functionalized; attribution may be indirect; and causality may be made more flexible—but none of this invalidates the requirement that the fact must still occur in a place.

The legal fact was never interpreted as a punctual event isolated from its context, but rather as legally relevant conduct that may manifest itself as an action or omission and take simple, continuous, or composite forms. As Shaw (2017) notes, the international law of responsibility has operated in the face of complex conduct, provided that such conduct could be legally delimited and attributed. In similar terms, Sánchez Legido et al. (2022) emphasize that the identification of the fact constitutes the logical prerequisite for attribution and jurisdiction, even when that fact cannot be reduced to a single, isolated act.

However, contemporary technological systems destabilize the assumption that facts can be localized in discrete actors or bounded spaces, replacing them with distributed and relational forms of agency (Arvidsson & Jones, 2023). AI thus introduces a qualitatively different difficulty, because it does not displace the fact but fragments it: training may take place in one State, deployment in another, infrastructure may belong to transnational private actors, and effects may manifest simultaneously across multiple jurisdictions. As a result, the "fact" ceases to be produced by a single agent at a given moment and instead becomes a distributed chain of operations.

International Law in the Face of Algorithmic Delocalization

The contemporary crisis of jurisdiction is a direct consequence of the way PIL treats the fact. The classical criteria of territoriality, nationality, protection, and even universality, when applicable, presuppose localization; when such localization becomes impossible or arbitrary, the jurisdictional anchoring of PIL breaks down.

This is precisely what occurs with complex AI systems, where the territory does not coincide with the place of deployment, the location of infrastructure does not coincide with the location of effects, and control does not always coincide with benefit. The result is not a legal vacuum, but rather a potential overlap of combined jurisdictions, with gray zones of non-control: "Global computer-based communications cut across territorial borders, creating a new realm of human activity and undermining the feasibility—and legitimacy—of applying laws based on geographic boundaries" (Tzimas, 2021:230), within a framework of asymmetries between technology, governance, institutions, and law that "create gaps of unregulated areas which can be proven critical when we talk about technologies that in very limited time can exponentially accelerate and cause unexpected disruptions" (Tzimas, 2021:106).

Yet the problem is not exclusively one of imputation, because it also involves the difficulty of legally localizing the fact itself. In other words, the problem is not only assigning the fact to an entity; the problem is isolating the fact as such. It is a question of the fact's spatiality.

We are not facing an incapacity of PIL to operate through legal fictions—an area in which it has considerable expertise. What is new is that, when attempting to identify a fact in AI systems—a prerequisite for the attribution of potential international legal responsibility—we find that its individuality has been broken, because the fact becomes the result of a distributed and indivisible algorithmic process, pushing the operational capacity of PIL into a dead end.

The "Non-Place" of the Fact: From Anthropology to Public International Law

The difficulty of localizing the fact produced by AI systems brings their international regulation closer to the concept of the non-place developed by Marc Augé in anthropology. Augé notes that the anthropological place is defined by identity, relations, and history, whereas the non-place is characterized by their absence. In his own words, "if a place can be defined as relational, historical, and concerned with identity, then a space which cannot be defined as relational, or historical, or concerned with identity will be a non-place" (Augé, 1993:82).

This understanding of the non-place becomes more precise—and more useful for moving from spatial place to conceptual place—if attention is paid to how space itself is produced. As Lefebvre tells us, space is not a neutral receptacle in which social actions unfold, but rather "a social product" and, as such, the result of historically determined practices, relations, and forms of organization (Lefebvre, 1974:86).

To this, we can add what Smuha (2025:6) points out: "beyond procedures and outcomes – sufficient attention must be paid to the social processes, structures, and relationships that inform and are co-shaped by the functioning of such systems."

In other words, space does not merely "contain" action; it actively intervenes in its production insofar as it shapes particular relations of power, encounter, and disencounter. From this perspective, the delocalization of AI systems does not imply the disappearance of space, but rather the production of a specific type of spatiality that cannot be understood in classical territorial terms.

The AI "non-place" is not, physically, the cloud, the data center, the infrastructure, the system as software, nor its distributed action, but rather the process through which all of these elements combine to produce a decision with consequences in the real world. The algorithmic decision does not occur at a single point, nor at a single moment, and therefore cannot be isolated: it emerges over the course of an indivisible, distributed process and, in many cases, remains hidden behind a "black box" of algorithmic reasoning. The process as a whole lacks its own legal identity and could hardly have one, although this is not impossible if distributed systems are met with legal responses based on equally distributed schemes of responsibility, determined by the production of facts, assuming that these facts are produced in a non-place within time and space as currently captured by the legal system.

This legal non-place does not describe the absence of facts—easily verifiable ones—but rather the specific manner of their production in time and space, which is incompatible with existing legal categories. PIL is thus pushed to create a spatiality that it has not yet integrated into its corpus.

In constructing an understanding of the fact generated by AI systems, we must begin by returning to Lefebvre (1974) and by acknowledging, and making transparent in international legal terms, that the localization of the fact ceases to be a pre-given datum and becomes instead a condition produced by law itself.

As Coicaud (2002:32–33) reminds us, politics has historically been "an enterprise of definition and delimitation of rights, duties, and responsibilities by territorialization (the nation–State); however, it now has to deal to a greater extent with the processes of deterritorialization and immaterialization brought about by the new technologies. A significant concern is how to regulate these practices through law (…) and satisfy the demands of legitimacy. Artificial intelligence makes this concern all the more pressing." This reminds us that when PIL fails to identify facts, attribute conduct, or exercise jurisdiction consistently, it loses authority and legitimacy.

The proposal to move toward models of objective and collective responsibility, based on risk allocation (Finocchiaro, 2025), can be read as an attempt to restore law's operability amid the tensions posed by AI.

Open Conclusion

What AI poses to PIL is not a technical-normative problem, but something much deeper, insofar as it exposes a transformation at its conceptual roots: the legally relevant fact is no longer produced in an identifiable place, but in a processual, distributed, and transnational non-place. PIL may continue to operate, as it always has, through fictions, presumptions, and functional displacements—but at the cost of losing operability and legitimacy. The key question is how PIL will confront this new reality of the non-place of fact production without sacrificing effectiveness on the altar of power.

In these terms, PIL faces a structural crisis resulting from the transformations introduced by artificial intelligence, a crisis that reaches its very foundations. If these foundations are not revised and reconfigured, the international legal-normative order that has grown from them is destined to progressively lose its capacity to sustain and organize the new forms of global power.

 

References

Arvidsson, A. & Jones, B. (2023). International law and posthuman theory. Routledge. https://doi.org/10.4324/9781003257413

Augé, M. (1993). Los no lugares. Espacios del anonimato. Gedisa.

Çela, E., Rao Vajjhala, N., & Aslani, B. (2026). Artificial intelligence in legal systems: Bridging law and technology through AI. Springer.

Coicaud, J-M. (2002). The legitimacy of international organizations. United Nations University Press.

Finocchiaro, G. (2025). El nuevo derecho de la inteligencia artificial. Editorial Tirant lo Blanch.

Lefebvre, H. (1974). La production de l'espace. Anthropos.

Sánchez Legido, A.T., Fernández Tomás, A., Ortega Terol, J.M., Forcada Barona, I., Martínez Carmena, M., & Ballesteros Moya, V. (2022). Curso de Derecho Internacional Público (2.ª ed.). Tirant lo Blanch.

Shaw, M.N. (2017). International law (8th ed.). Cambridge University Press.

Smuha, N.A. (Ed.). (2025). The Cambridge handbook of the law, ethics and policy of artificial intelligence. Cambridge University Press. https://doi.org/10.1017/9781108970607

Tzimas, T. (2021). Legal and ethical challenges of artificial intelligence. Springer. https://doi.org/10.1007/978-3-030-78584-3

 

This is the original version of the article.

Spanish version (ES) will be published soon.