AI from the Inside: Basic Concepts for Non-Experts

By Javier Surasky


To understand the challenges posed by AI, as well as some of its opportunities, we tend to think in terms of the user or of the society affected by its impact. This is correct, but it leaves out a key player: AI itself.

Understanding, at least at a basic level, what an AI does is a window of opportunity to deepen our grasp of its advantages and dangers, and also to identify possible entry points where each person and profession can contribute.

Let us then carry out a first "dissection" of how any AI operates, to identify its main elements.

The first step is to understand that any AI interacts with non-virtual reality in a reciprocal exchange that, in principle, modifies both: the "real reality" (let's call it RR) provides technology and data to the AI's "virtual reality" (VR), which in turn returns information (or performs actions) that can transform the RR.

This gives us a first, somewhat obvious step worth mentioning: AI operates within and acts upon a larger ecosystem that is RR, becoming what we call an "agent" to reflect that it has the capacity for action. Both ChatGPT and a self-driving car are agents. Google defines "agent" as "software systems that use AI to achieve goals and complete tasks on behalf of users. They display reasoning, planning, and memory, and have a level of autonomy to make decisions, learn, and adapt." This definition is crucial as it provides several of the elements we will examine below.

Before that, we must add something the definition leaves out: perception, the agent's ability to receive context, that is, to receive inputs at any moment. From these inputs it builds its own experience history, which in turn informs its decisions. Just as with a person, immediate perceptions, combined with accumulated experience, guide the AI system's actions.

The bridge between what the agent perceives and what it does is its function: the mapping from perceptions to actions that it has been trained to perform (play chess, drive a car, translate texts, etc.). The function, of course, takes shape from the program that creates the agent.
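To make this concrete, here is a minimal sketch in Python of an agent whose function maps its current percept, plus its stored percept history, to an action. Everything in it, from the class name to the toy "dirty"/"clean" percepts, is an illustrative assumption, not any real product's API:

```python
# A minimal sketch, assuming a toy vacuum-style world: an agent whose
# function maps the current percept (plus its stored history) to an action.

class SimpleAgent:
    """Chooses actions based on the current percept and its percept history."""

    def __init__(self):
        self.history = []  # the agent's accumulated experience

    def act(self, percept):
        self.history.append(percept)         # perception builds the history
        return self.agent_function(percept)  # the function: percepts -> action

    def agent_function(self, percept):
        # What the agent was built to do; here, a toy cleaning rule.
        return "suck" if percept == "dirty" else "move"

agent = SimpleAgent()
print(agent.act("dirty"))  # -> suck
print(agent.act("clean"))  # -> move
```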

It is to be expected that any AI will orient its actions toward what it considers good and move away from what it considers bad. But what is good and what is bad? For an AI, good is what brings it closer to its assigned goals, and bad is what moves it away from them. If an AI is given the mission to scam people, then "good" for it will be to scam as many people as possible, and "bad" will be failing to do so. We have already discussed in a previous post that AI is a means, not an end in itself. The AI's intention always comes from the humans who create or use it. Achieving its goals is the measure of success for any AI, which requires, as a precondition, that those goals be measurable. To measure success, the most common approach is to design metrics based on the desired results in RR, rather than on how the agent should behave.
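A hedged example of that last point: in the sketch below, success is scored by the resulting state of the world (how many squares end up clean), not by how busy the agent was. The dictionary standing in for the environment is an assumption invented for the example:

```python
# A sketch of a success measure defined over results in RR (how many
# squares end up clean) rather than over the agent's behavior (how
# often it ran the vacuum). The environment dict is an assumption.

def performance_measure(environment):
    """Score the state of the world, not the agent's activity."""
    return sum(1 for state in environment.values() if state == "clean")

world = {"A": "clean", "B": "dirty", "C": "clean"}
print(performance_measure(world))  # -> 2
```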

Combining what we have just seen, any AI should be understood as a rational agent whose rationality is oriented toward achieving the goals for which it was created, acting on its perceptions and perception history, and seeking to maximize its success measure.

Given the importance of perceptions for successful results, an AI may take actions aimed at "refining" its future perceptions; to do so, it seeks out and gathers information. The usefulness of the information it holds will therefore depend on the rationality with which the agent was endowed. Here lies one of the most important elements in AI development: the agent's ability to learn from new perceptions combined with its history, transforming raw data into information useful for rational decision-making. This learning ability gives AI its autonomous character: it does not depend exclusively on the information provided by the programmer. It also partly explains the existence of "black boxes" that prevent us from knowing exactly how an AI reaches a decision.

As a result, any rational AI agent includes four fundamental components (sketched in code after this list):

  • The learning element: improves the agent over time.
  • The performance element: selects the actions the AI will take.
  • The critic: reviews the agent's performance and, when necessary, drives changes to the performance element to improve the success measure.
  • The problem generator: suggests actions that lead the AI to explore new experiences, preventing it from settling into those it has already mastered. It keeps the AI from resting on its current expertise and pushes it to constantly explore new paths.
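Here is an illustrative skeleton of how those four components might fit together, loosely inspired by the classic "learning agent" design from the AI literature. Every class, method, and rule in it is an assumption made for this sketch:

```python
# Illustrative skeleton of the four components of a learning agent.
# All names and rules are assumptions invented for this sketch.
import random

class LearningAgent:
    def __init__(self):
        self.history = []                                  # experience history
        self.policy = {"dirty": "move", "clean": "move"}   # starts with a flawed rule

    def performance_element(self, percept):
        """Selects the action the agent will take for the current percept."""
        return self.policy.get(percept, "wait")

    def critic(self, percept, action):
        """Reviews performance: 1 if the action served the goal, else 0."""
        return 1 if (percept == "dirty" and action == "suck") else 0

    def learning_element(self, percept, action, score):
        """Improves the agent: corrects the policy when the critic flags a failure."""
        if score == 0 and percept == "dirty":
            self.policy["dirty"] = "suck"

    def problem_generator(self):
        """Suggests an exploratory action outside the current routine."""
        return random.choice(["move", "suck", "wait"])

    def step(self, percept):
        self.history.append(percept)
        if random.random() < 0.1:                  # occasionally explore
            action = self.problem_generator()
        else:
            action = self.performance_element(percept)
        self.learning_element(percept, action, self.critic(percept, action))
        return action

agent = LearningAgent()
for percept in ["dirty", "dirty", "clean"]:
    print(percept, "->", agent.step(percept))      # the flawed rule gets corrected
```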

And this is where the idea of an "algorithm" plays a central role. Algorithms are simply organizers of processes, sequential or iterative. A cooking recipe is an algorithm that guides the agent (the cook) through a step-by-step process toward a desired result, with the success measure being the diners' satisfaction with the food. Note, again, that success is best measured by changes in the environment rather than by the activities performed. We have all dealt with algorithms in our lives. Did you study how to calculate the greatest common divisor in primary school? Euclid's method for it is one of the earliest recorded algorithms in history.
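That schoolroom procedure fits in a few lines of Python and shows exactly what an algorithm is: a finite, step-by-step process that turns inputs into a guaranteed result:

```python
# Euclid's algorithm for the greatest common divisor.

def gcd(a: int, b: int) -> int:
    """Repeatedly replace (a, b) with (b, a mod b) until b reaches zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```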

The algorithm is the foundation (the soul?) of AI: the manager of the processes that link its perceptions, history, goals, success measure, and rationality to achieve the best rational result it can obtain. That result may not be perfect or the absolute best, but it is the best the AI can deliver given the elements described. Search and planning are the AI subfields focused on finding sequences of actions that allow agents to achieve their assigned objectives.
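As a small taste of what "search" means here, the sketch below uses breadth-first search to find a sequence of steps from a start state to a goal. The toy map of rooms is an assumption invented for the example, not a real planning domain:

```python
# Breadth-first search over a toy map of rooms: finding a sequence of
# steps (a plan) from a start state to a goal state.

from collections import deque

def bfs_plan(graph, start, goal):
    """Return the shortest path (sequence of states) from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no plan exists

rooms = {"hall": ["kitchen", "office"], "kitchen": ["pantry"], "office": []}
print(bfs_plan(rooms, "hall", "pantry"))  # -> ['hall', 'kitchen', 'pantry']
```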

But agents are not made only of data and programming. They also have a physical support (chips, cables, boards), which we call the AI's architecture. AI development work can thus be described as the path toward programs that, within the limits of current architectures, exhibit rational behavior while minimizing code and expanding the ability to perceive, build experience histories, and, from there, maximize success based on their rationality.

Of course, we are leaving out aspects such as the types of environments in which AI operates, its ability to perceive them fully or only partially, and whether their conditions remain stable or change, among many others. Nor have we analyzed how these elements change, or may change, when we speak of weak AI, strong AI, generative AI, or agentic AI. But we now have a better idea of what AI does, and that is no small thing.