Agentic AI and Geopolitics of Decision-Making

By
Javier Surasky


In the current debate on AI, public attention has focused on generative models and the consequences of their capacity to produce text, images, video, or code. Behind that focus, however, a form of AI is emerging that not only responds to human requests but can act autonomously, carrying out tasks and making chained decisions in complex environments: agentic AI (WEF, 2025; IBM, n.d.).

Currently in advanced experimentation and early deployment in areas such as organizational management, digital logistics, cybersecurity, and the provision of public and private services (BCG, 2025; McKinsey & Company, 2025; WEF, 2025), agentic AI entails both an increase in technical capabilities and a qualitative shift in how decision-making is delegated, with direct implications for politics, power asymmetries, and the possible frameworks for global AI governance (HLAB-AI, 2024; OECD, 2023).

But what is agentic AI? From a conceptual standpoint, these are action-oriented AI systems, or systems with agency (hence the name), that depart from traditional models, which generate outputs in response to discrete instructions, to become actors in the real world. “An AI agent is a system that uses AI and tools to carry out actions in order to achieve a given goal autonomously” (Bornet et al., 2025:15). As such, it can define goals, plan sequences of actions, interact with digital tools, and evaluate outcomes in order to adjust its behavior, all without continuous human interaction or guidance (Russell & Norvig, 2004; Sapkota et al., 2025).
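
To fix ideas, the loop below is a minimal sketch, in Python, of the cycle just described: a goal is decomposed into steps, tools are invoked, and outcomes are evaluated before the agent adjusts or stops. Every name in it (Tool, plan, evaluate, run_agent) is illustrative rather than drawn from any real agent framework, and the “planner” is a stub standing in for a reasoning model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]  # a tool maps an input string to an observation

def plan(goal: str) -> list[tuple[str, str]]:
    """Stub planner: decompose the goal into (tool_name, tool_input) steps.
    A real agent would delegate this step to a reasoning model."""
    return [("search", goal), ("summarize", goal)]

def evaluate(observations: list[str]) -> bool:
    """Stub success check: treat any non-empty observation as progress."""
    return any(observations)

def run_agent(goal: str, tools: dict[str, Tool], max_rounds: int = 3) -> list[str]:
    """The loop, not any single call, is what makes the system an agent."""
    observations: list[str] = []
    for _ in range(max_rounds):
        for tool_name, tool_input in plan(goal):
            observations.append(tools[tool_name].run(tool_input))
        if evaluate(observations):  # adjust: stop once the goal looks met
            break
    return observations

tools = {
    "search": Tool("search", lambda q: f"results for '{q}'"),
    "summarize": Tool("summarize", lambda q: f"summary of '{q}'"),
}
print(run_agent("monitor inventory levels", tools))
```

What distinguishes this from a single model call is that the system iterates: it acts, observes, and decides whether to continue, which is precisely the chained decision-making described above.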

The reference to the agent, defined as that which has the capacity to act, is far from an anthropomorphic metaphor. Instead, it describes the system’s distinctive functional property in relation to its predecessors. Its theoretical foundations lie in the convergence of research on agents and multi-agent systems, foundation models, and large language models (McShane et al., 2024; Labaschin, 2023). These systems operate within layered architectures that integrate components for reasoning, organization, and action execution, connected through standardized protocols to data sources or other agents (WEF, 2025; OpenAI, n.d.), resulting in a distributed decision-making system capable of operating continuously within changing digital environments.
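
The layered design can be made concrete with a deliberately simplified sketch: a reasoning component proposes an action, an action component executes it, and an orchestration component passes standardized messages between them. The JSON message format and all function names here are assumptions made for illustration, not taken from any published protocol.

```python
import json

def reason(state: dict) -> dict:
    """Reasoning layer: decide the next action from the current state."""
    if state["stock"] < state["threshold"]:
        return {"action": "reorder", "qty": state["threshold"] - state["stock"]}
    return {"action": "wait"}

def execute(message: dict) -> dict:
    """Action layer: carry out the requested action."""
    if message["action"] == "reorder":
        return {"status": "ordered", "qty": message["qty"]}
    return {"status": "idle"}

def orchestrate(state: dict) -> str:
    """Orchestration layer: route standardized messages between layers."""
    decision = reason(state)
    result = execute(decision)
    return json.dumps({"decision": decision, "result": result})

print(orchestrate({"stock": 3, "threshold": 10}))
```

The point of the layering is that each component can be replaced or connected to others (including other agents) as long as the message format is respected, which is what makes the resulting decision-making system distributed.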

Agentic AI thus represents both a rupture and a continuity. Expert systems, algorithmic trading platforms, or automated industrial infrastructures have long incorporated some degree of decision delegation, but agentic AI generalizes, integrates across sectors, and scales this capability.

For these reasons, rather than an absolute historical novelty, agentic AI should be understood as a systemic advance that produces a qualitative shift in AI models while also redefining their very nature.

What we observe is a deepening of the politics of automation, oriented toward goals such as efficiency, effectiveness, speed of response, and control over deployed actions, all within a framework of increasing systemic density in decision-making processes and their implementation.

At this point, and before examining the potential impact of advanced agentic AI, it is necessary to pause and reflect on the central idea of “delegation of decisions.”

In general terms, one can distinguish three levels of delegation: operational delegation, understood as the automated execution of tasks; tactical delegation, which involves optimization, coordination, and the selection of courses of action within predefined objectives; and strategic delegation, in which what is transferred is the capacity to define objectives and priorities themselves. Agentic AI currently operates mainly within tactical delegation, where systems acquire the capacity to coordinate complex processes and make chained decisions, without this necessarily implying a transfer of strategic control. In other words, these systems enjoy functional autonomy but not political or normative autonomy, as they operate under external organizational incentives.
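
A toy sketch can make the distinction tangible: below, the agent freely selects among courses of action (tactical delegation), but the objective function it optimizes is fixed from outside and is not the agent’s to rewrite (strategic control is retained). The logistics scenario and all names are invented for this example.

```python
def objective(option: dict) -> float:
    """Strategic layer: set externally; the agent cannot redefine it."""
    return option["speed"] - option["cost"]

def tactical_agent(options: list[dict]) -> dict:
    """Tactical layer: selects a course of action under the given objective."""
    return max(options, key=objective)

options = [
    {"route": "air", "speed": 9, "cost": 6},
    {"route": "sea", "speed": 3, "cost": 1},
]
print(tactical_agent(options))  # the agent chooses; the criterion was imposed
```

Strategic delegation would mean handing the agent the power to rewrite objective() itself, which is precisely what current deployments withhold.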

Even with these limitations, the advent of agentic AI introduces inevitable “ruptures,” or discontinuities, that warrant attention. We move from working with singular models to operating within ecosystems of agents, where the ability to coordinate and work jointly is as critical as individual performance. Indeed, collaborative capacity becomes part of individual performance evaluation: a system that produces good results in isolation but not when interoperating with others will be deficient from the standpoint of agentic AI (WEF, 2025; Schick et al., 2023).
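
As an illustration, the fragment below contrasts an agent that returns a well-formed message in a shared protocol with one that returns a perfectly good answer in the wrong shape; only the first can be judged fit for an ecosystem. The message schema is an assumption made for this sketch, not a real standard.

```python
def compliant_agent(task: str) -> dict:
    """Returns a routable message in the (invented) shared schema."""
    return {"sender": "agent_a", "task": task, "result": f"done: {task}"}

def noncompliant_agent(task: str) -> str:
    """Good output in isolation, but the wrong format for the ecosystem."""
    return f"done: {task}"

def interoperates(agent, task: str) -> bool:
    """Ecosystem-level check: is the reply a well-formed protocol message?"""
    reply = agent(task)
    return isinstance(reply, dict) and {"sender", "task", "result"} <= reply.keys()

for agent in (compliant_agent, noncompliant_agent):
    print(agent.__name__, "interoperates:", interoperates(agent, "ship order"))
```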

The emphasis shifts from content production toward direct participation in processes, ranging from resource allocation and workflow management to team-based task execution and integration with pre-existing systems (AWS, n.d.; IBM, n.d.). This shift moves humans from the role of action executors to that of supervisors, auditors, and controllers of AI agents, requiring a redesign of organizational processes and the development of new regulatory frameworks (OECD, 2023; HLAB-AI, 2024).

To make this more concrete, consider an example: according to recent reports published by companies, consultancies, and research centers (WEF, 2025; McKinsey & Company, 2025; OpenAI, n.d.), agentic AI is already coordinating specialized agents in activities such as inventory management, logistics, customer service, or the maintenance of digital infrastructures. These systems respond to real-time needs, directly executing actions such as reallocating resources or reshaping workflows in response to changes in external conditions. This deployment of tactical response capabilities at ever-increasing speed allows organizations that adopt agentic AI to achieve greater efficiency in their regular operations.

The combination of relative (tactical and functional) autonomy, increased execution speed, and the inherent scalability of agentic AI is already reshaping decision-making logic, especially in sectors where rapid and coordinated responses are critical.

However, this shift generates political and security consequences that go beyond the purely technical (Mitre & Predd, 2025). Here, the political significance of agentic AI becomes clearer: by expanding and strengthening capabilities already considered strategic in the digital economy, security, or public policy management, among other sectors, the new agent role assumed by AI produces competitive advantages in highly complex environments, at the cost of reinforcing dependence on digital infrastructures and timely access to data (UNCTAD, 2024; ITU, 2025).

That is not a process articulated exclusively through states, but one that depends on public–private ecosystems, often transnational in nature and heavily weighted toward the corporate sector, which enjoys the “competitive advantage” of operating without the constraints that borders impose on states. As a result, the power associated with agentic AI exceeds the state’s capacity for action and rests with those who exercise effective control over the decision architectures that shape its operation. Put more simply, power lies with those who design how the system makes decisions, not with those who merely benefit from them.

Once this element is made explicit, it becomes evident that the development of agentic AI reinforces pre-existing structural asymmetries (UNCTAD, 2024). For many technologically lagging countries—or even those with relatively high levels of development but operating outside the technological frontier—the adoption of agentic AI may entail increasing dependence on externally defined decision architectures, with minimal margins for adaptation, appropriation, and sovereign control (Srivastava & Bullock, 2024; Colomina Saló & Galceran-Vercher, 2024). The “coloniality of being” thus becomes embodied.

It is worth pausing on this idea, which once again highlights the need to think about AI through the lens of the social sciences. The concept of the coloniality of being is one of the pillars of decolonial thought. It is defined as “the radical betrayal of the trans-ontological through the formation of a world in which the non-ethics of war are naturalized through the idea of race” (Maldonado-Torres, 2007:267).

The trans-ontological names a primordial ethical relation that coloniality betrays: the gift bestowed by the colonizing Self upon the colonized Other, a relation of imposed superiority of the “Self” built on the degradation of the “Other.” In the coloniality of being, this foundational betrayal manifests through the colonized subject’s acceptance of that logic as natural, resulting in an order in which “the non-ethics of war” (murder, rape, and so forth) are justified by the concept of “race,” with the colonizer’s race deemed superior and the colonized’s inferior. It should be specified, however, that race in this quotation serves as a starting point for the inclusion of other variables, such as gender. What we argue, then, is that agentic AI could give physical form to a relationship that, until now, had been visible only in its consequences, yet remained incorporeal.

This idea can be reinforced by considering the adoption of standardized agentic architectures in cultural contexts marked by different worldviews, values, and unequal institutional capacities. The integration of agentic AI systems into critical digital infrastructures, such as energy supply control platforms or economic coordination systems, leaves states that lack the capacity to design their own decision architectures at the mercy of those established by dominant actors, reproducing the “Self–Other” relationship in digital terms. Under ostensibly well-founded pretexts of interoperability, such systems end up generating subordination to externally imposed decision schemes: domestic tactical decisions become conditioned on design logics, optimization criteria, and normative frameworks embedded in agentic systems, dealing a new blow to digital justice and sovereignty (UNCTAD, 2024; ITU, 2025). In this sense, the risk associated with agentic AI is structural and probabilistic rather than deterministic, and depends on specific social, cultural, political, and institutional configurations.

As a result, it is possible to speak of a “geopolitics of the delegation of decisions” as an analytical category, though not as a theory or a deterministic prediction. Accordingly, we do not claim that agentic AI has already transformed the international order; rather, we limit ourselves to noting that it introduces a plausible trajectory for the reconfiguration of power associated with the delegation of tactical decisions. That is reflected in a shift in how power is exercised in contexts where such decisions are particularly consequential, and in the definition of decision architectures as a resource of international projection with genuinely strategic value.

The changes generated by this geopolitics of decision delegation appear at multiple levels. Economically, it grants a competitive advantage to those who can automate and coordinate complex processes more efficiently (McKinsey & Company, 2025; BCG, 2025). In the security domain, it creates opportunities for automating sensitive functions. More broadly, it gives rise to risks of systemic error, especially where there is intense interaction among actors with conflicting objectives in contexts of high uncertainty (Mitre & Predd, 2025).

Nevertheless, not everything associated with agentic AI entails risks and pressures in international geopolitics. With appropriate management, governance, and expert support, it is possible to advance global democratic processes of technical standardization, define equitable interoperability frameworks, and even reduce barriers to access to cutting-edge technologies. None of this depends on the technology itself, but rather on political decisions and on the capacity of actors with differing interests to participate in and influence deliberative processes.

Here we return to an issue we will merely mention, as it has been addressed in previous blog posts: the early deployment of agentic systems allows first movers to set de facto standards that subsequently constrain others, producing the legal phenomenon of “regulatory capture.” This concept has been known for decades and is defined by Carpenter and Moss (2014:13) as “the result or process by which regulation, in law or application, is consistently or repeatedly directed away from the public interest and toward the interests of the regulated industry, by the intent and action of the industry itself.”

However, if progress in AI governance more broadly is already slow and challenging due to conflicts among major powers—creating a significant gap between global normative-institutional development and technological advance—this problem is exacerbated in the specific domain of agentic AI. Most existing regulatory instruments focus on individual models or applications and are ill-suited to distributed, adaptive systems that interact continuously with other systems and agents (HLAB-AI, 2024; OECD, 2023).

That is compounded by the fact that “current regulations are unable to cope with the AI revolution due to the pacing problem and the Collingridge dilemma” (Tehrani, 2022:21). This dilemma expresses an inherent tension in the field of technology: in the early phases of a new technology’s development, it is relatively easy to modify or regulate it, but difficult to foresee its social, economic, or political impacts; in its mature phase—when those impacts become visible—the technology is already embedded in global social and economic life, making regulation extremely difficult or politically unviable (Collingridge, 1980:17–18).

For these reasons, agentic AI is not merely “another advance” in the field of AI, but a qualitative change that enables new forms of exercising power through digital systems. As we have noted, it is no longer about controlling others’ decisions, but about taking control over how decisions are made.

Those who fail to understand and participate in the governance of agentic AI at the international level will have to accept the risk of formulating their policies through someone else’s lenses—without even realizing it.


References

AWS (Amazon Web Services). (n.d.). What is Agentic AI? https://aws.amazon.com/what-is/agentic-ai/

BCG (Boston Consulting Group). (2025). AI Agents. https://www.bcg.com/capabilities/artificial-intelligence/ai-agents

Bornet, P., Wirtz, J., Davenport, T. H., De Cremer, D., Evergreen, B., Fersht, P., Gohel, R., & Khiyara, S. (2025). Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work, and Life. World Scientific Publishing.

Carpenter, D., & Moss, D. A. (Eds.). (2014). Preventing Regulatory Capture: Special Interest Influence and How to Limit It. Cambridge University Press.

Collingridge, D. (1980). The Social Control of Technology. Frances Pinter.

Colomina Saló, C., & Galceran-Vercher, M. (2024). The other geopolitics of artificial intelligence. CIDOB Journal of International Affairs, (138), 27–50.

HLAB-AI (UN High-Level Advisory Body on Artificial Intelligence). (2024). Governing AI for Humanity. United Nations. https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf

IBM. (n.d.). What is Agentic AI? https://www.ibm.com/think/topics/agentic-ai

ITU (International Telecommunication Union). (2025). AI Standards for Global Impact: From Governance to Action. https://www.aigl.blog/ai-standards-for-global-impact-itu-2025/

Labaschin, B. (2023). What Are AI Agents? When and How to Use LLM Agents. O’Reilly Media.

McKinsey & Company. (2025). Seizing the Agentic AI Advantage. https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/seizing%20the%20agentic%20ai%20advantage/seizing-the-agentic-ai-advantage.pdf

McShane, M., Nirenburg, S., & English, J. (2024). Agents in the Long Game of AI: Computational Cognitive Modeling for Trustworthy, Hybrid AI. MIT Press.

Mitre, J., & Predd, J. B. (2025, February 10). Artificial General Intelligence’s Five Hard National Security Problems. RAND Corporation. https://www.rand.org/pubs/perspectives/PEA3691-4.html

OECD (Organisation for Economic Co-operation and Development). (2023). Advancing Accountability in Artificial Intelligence. OECD Digital Economy Papers, (349). https://www.oecd.org/content/dam/oecd/en/publications/reports/2023/02/advancing-accountability-in-ai_753bf8c8/2448f04b-en.pdf

OpenAI. (n.d.). A Practical Guide to Building Agents. https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf

Russell, S. J., & Norvig, P. (2004). Artificial Intelligence: A Modern Approach (2nd ed.). Pearson.

Sapkota, R., Roumeliotis, K., & Karkee, M. (2025). AI agents vs. agentic AI: A conceptual taxonomy, applications, and challenges. Information Fusion, 126.

Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Hambro, E., Zettlemoyer, L., Cancedda, N., & Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS). https://proceedings.neurips.cc/paper_files/paper/2023/file/d842425e4bf79ba039352da0f658a906-Paper-Conference.pdf

Srivastava, S., & Bullock, J. (2024). AI, Global Governance, and Digital Sovereignty. https://arxiv.org/pdf/2410.17481

Tehrani, P. M. (Ed.). (2022). Regulatory Aspects of Artificial Intelligence on Blockchain. IGI Global.

UNCTAD (United Nations Conference on Trade and Development). (2024). Digital Economy Report 2024. https://unctad.org/system/files/official-document/der2024_en.pdf

WEF (World Economic Forum). (2025). AI Agents in Action: Foundations for Evaluation and Governance. https://reports.weforum.org/docs/WEF_AI_Agents_in_Action_Foundations_for_Evaluation_and_Governance_2025.pdf