AI and the Human Brain: Algorithms and Savvy

By Javier Surasky

(Estimated reading time: about 14 minutes)

Since artificial intelligence began to integrate into everyday life, an almost automatic comparison has emerged: Does AI resemble the human brain?

The question arises intuitively because both "learn" and "generate responses," but it also leads to confusion if left unexamined. Understanding where the two are similar, where they diverge, and what this relationship implies for ethics and governance is essential, especially from the perspectives of international relations and the social sciences.

The human brain is a biological organ—the product of millions of years of evolution. It is estimated to contain approximately 86 billion neurons, interconnected by chemical and electrical synapses, forming a hierarchical, modular architecture of enormous complexity. As Suzana Herculano-Houzel (2009:2) notes, "the human brain is not exceptional in its cellular composition", which places its organization in continuity with that of other mammals, though with an exceptionally high number of neurons.

There are multiple types of neurons in the brain, each with specific electrophysiological properties that are influenced by neurotransmitters such as dopamine, serotonin, norepinephrine, and acetylcholine. Brain functioning is also immersed in a chemical–hormonal environment that regulates its activity, modulates its plasticity, and shapes its learning capacity.

Modern AI, by contrast, runs on specialized digital hardware such as GPUs, TPUs, and other accelerators, and is based on mathematical models that optimize specific functions. A central component of this AI is the so-called "artificial neuron," a concept that dates back to the 1940s and has gained relevance with the rise of machine learning and deep neural networks.

Each artificial neuron in a neural network computes "a weighted sum of [its] inputs" and applies "a nonlinear function to this sum" (Goodfellow, Bengio & Courville, 2016:167) to produce an output.
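To make this concrete, here is a minimal sketch in Python (the input values, weights, bias, and the choice of ReLU as the nonlinear function are arbitrary assumptions for illustration, not taken from the authors cited):

    # A single artificial neuron: a weighted sum of its inputs plus a bias,
    # passed through a nonlinear activation function.
    def relu(z):
        return max(0.0, z)  # ReLU, one common choice of nonlinearity

    def artificial_neuron(inputs, weights, bias):
        weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
        return relu(weighted_sum)

    # Arbitrary example: three inputs, three weights, one bias.
    print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.4], bias=0.1))
    # The weighted sum is -0.5, so ReLU outputs 0.0

That is the entire mechanism: arithmetic plus a threshold-like function, repeated across many units and many layers.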

Their similarity to a biological neuron is, however, overstated, and the distance between the two types of neurons is vast.

To begin with, consider energy efficiency. The Human Brain Project (2023) estimates that "a human brain uses roughly 20 Watts to work", less than a household light bulb, while training large-scale AI models requires vastly greater energy and infrastructure resources.

In the domain of learning, the brain relies on synaptic plasticity—that is, the ability to strengthen or weaken connections between neurons depending on joint activation patterns, modifying synaptic efficacy and the shape of dendritic spines. As Citri and Malenka (2008:18) summarize, "Synaptic plasticity is the major mechanism by which experience modifies brain function." It is a continuous, multimodal, and deeply contextual form of learning that shapes the configuration of our brains in response to our experiences and needs.

AI, by contrast, learns through statistical optimization: loss functions, gradients, and rules for adjusting the weights of its artificial neurons, along with other statistical–mathematical techniques. Its goal is to extract patterns from large volumes of data. In the words of LeCun, Bengio, and Hinton (2015:43), "deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction."
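As a hedged illustration of what "loss functions, gradients, and rules for adjusting weights" mean in practice, the following toy Python loop minimizes an invented quadratic loss by gradient descent; the loss function, starting weight, and learning rate are all arbitrary assumptions:

    # Toy statistical optimization: adjust a single weight w to
    # minimize the loss L(w) = (w - 3)^2 by following its gradient.
    def gradient(w):
        return 2.0 * (w - 3.0)  # dL/dw, computed analytically here

    w = 0.0              # arbitrary initial weight
    learning_rate = 0.1
    for step in range(50):
        w -= learning_rate * gradient(w)  # step against the gradient

    print(round(w, 4))  # approaches 3.0, the weight that minimizes the loss

Real systems do the same thing with millions or billions of weights at once, but the adjustment rule is of this kind.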

Although both systems modify internal connections, AI requires enormous amounts of data, learns in discrete stages, depends on predefined error functions, and lacks the kind of innate inductive biases that guide human learning.

Another superficial similarity is distributed representation: neither the brain nor AI stores concepts in a single unit. Both generate complex activation patterns, which is why AI models can serve as working hypotheses for studying vision, language, and semantic categorization. However, this functional convergence does not imply cognitive equivalence. Humans incorporate social context, ethical intuitions, autobiographical memory, and embodied experience into their cognitive processes. AI, in contrast, generates outputs based on statistical correlations, with no semantic understanding or subjective experience. As Bender, Gebru, McMillan-Major, and Mitchell (2021:1) observed, "an LM is a system for haphazardly stitching together sequences of linguistic forms (…) without any reference to meaning."

The deepest difference arises when we consider consciousness. In the film Transcendence (2014), the AI researcher played by Morgan Freeman poses a recurring question to the advanced AI systems he encounters: "Can you prove you're self-aware?" This is the critical boundary: the human brain not only processes information but also generates emotions, intentionality, and subjective experience. No current AI model is capable of that. As Dehaene (2014:105) puts it, "Conscious access allows us to extract meaning and reason about it."

If biological neurons are so superior to artificial ones, can a person run an AI algorithm on their neurons? The answer is categorical: no. There are structural, biological, and computational reasons behind this conclusion.

An artificial algorithm operates by applying explicit mathematical operations. The brain has neither a mathematical working memory nor an operation register that would allow this, which is why it can perform mathematical operations, but with far less precision than AI. Our biological development has not given us "tensors" (a structured way of organizing numbers into multiple dimensions so that AI models can process them efficiently) or "matrices" that can be manipulated with exactness, and without them the computation carried out by AI models is impossible.
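For readers unfamiliar with these terms, the following sketch (using NumPy, a widely used numerical library; the shapes and values are arbitrary) shows what a matrix and a tensor look like as explicit, exactly manipulable objects:

    import numpy as np

    # A matrix: numbers organized along two dimensions (rows x columns).
    matrix = np.array([[1.0, 2.0],
                       [3.0, 4.0]])

    # A tensor: the same idea extended to more dimensions,
    # here a 2 x 2 x 3 block of numbers.
    tensor = np.zeros((2, 2, 3))

    # Operations on these objects are exact and repeatable, which is
    # precisely what AI computation depends on.
    print(matrix @ matrix)  # matrix multiplication: [[ 7. 10.], [15. 22.]]
    print(tensor.shape)     # (2, 2, 3)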

If this were not enough, algorithms operate through ordered steps (basically forward pass, loss computation, backward pass, and parameter adjustment), whereas brain dynamics are neither sequential nor deterministic but massively parallel and continuous, since "the brain is never silent; neuronal activity is continuous even in the absence of external input" (Buzsáki, 2006:15). Brain activity cannot be paused or arranged into stages. It is not mathematics but local biochemical rules, modulated by neurotransmitters, that give order to neural processing. The brain is also a noisy system (we will return to noise shortly), since "noise is present at all stages of neural processing and fundamentally shapes neural function" (Faisal, Selen & Wolpert, 2008:292). In part, this noise arises from the fact that the human brain "lacks a single central controlling structure; control is distributed across many interacting regions" (Sporns, 2011:89), making it a nonlinear system with no defined computational flow, fixed architecture, or externally imposed execution order, all of which are present in AI models.
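To make the contrast concrete, here is a minimal sketch of those four ordered steps for a one-weight linear model; the data points and learning rate are invented for illustration:

    import numpy as np

    # Invented data: inputs x and targets y roughly following y = 2x.
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.1, 3.9, 6.2])

    w, lr = 0.0, 0.05  # initial weight and learning rate
    for step in range(200):
        y_pred = w * x                        # 1. forward pass
        loss = np.mean((y_pred - y) ** 2)     # 2. loss computation
        grad = np.mean(2 * (y_pred - y) * x)  # 3. backward pass (gradient)
        w -= lr * grad                        # 4. parameter adjustment

    print(round(w, 3))  # close to 2.0

Each iteration is a discrete, ordered, repeatable stage: exactly the kind of externally imposed execution order the brain lacks.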

More reasons? AI models store information in vectors and matrices, whereas the brain encodes information in distributed neuronal ensembles, using representations that include affective behaviors and biologically predetermined responses (e.g., fear) whose "weights" cannot be explicitly adjusted to achieve predefined goals.

Even so, the brain can learn to simulate cognitive strategies—such as learning rules, identifying patterns, reinforcing behaviors through trial and error, or generalizing from examples—which allows it to apply heuristics and behave "as if" it were running an algorithm.

Another fundamental element is that the brain's basic signal is the action potential, which depends on sodium, potassium, calcium, neurotransmitters, temperature, fatigue, prior history, and multiple interacting variables. In the words of Kandel, Koester, Mack, and Siegelbaum (2021:14), "the action potential is the fundamental signal that carries information through the nervous system." Everything there is, in essence, imprecise, noisy, and contextual. In contrast, the fundamental unit of any AI system is an exact matrix operation, free of biological noise and controlled with floating-point precision. In short, AI works through precision, whereas the brain works through the biological noise it generates.

For this reason, human memory—our archive—is semantic: "knowledge about the world that is not tied to specific experiences" (Tulving, 1983:386). It "saves" meanings, personal histories, social context, and emotions, together with logical rules. It is a distributed (not centralized) memory, integrated into historical and bodily experience. AI stores statistical correlations encoded as numerical weights—millions or even billions of them—but nothing more. This enables the brain to retrieve why things happen, while AI retains only how to produce outputs. AI "remembers" as vectors; the brain holds multimodal memories (auditory, visual, emotional, motor) because those have been its channels for sensory experience.

All this is entirely logical, since the brain learns what a person needs to live, while AI learns whatever it is given in order to optimize a process. The brain understands; AI predicts. Understanding leads to wisdom; prediction, to accuracy. The brain is part of each individual’s subjectivity, whereas AI is an intangible object—or, if preferred, a process. The brain has a “self,” and therefore its own goals; AI has neither.

In general terms, we can summarize our negative answer by claiming that the brain is a self-organized system developed by genetic evolution. In contrast, AI is a system specified by design. This establishes fundamentally different bases for their organization, functioning, and processing modes.

If we combine this with the “astonishment” generated by conversations with an LLM that does not understand what it says, we reach a crucial point: the brain and AI may converge in apparent behaviors, but their architectures and dynamics are irreducibly different.

These differences should not be understood as a call to neglect the study of the relationship between AI and the human brain. Both fields nourish each other: AI algorithms have been inspired by biological principles such as plasticity, and neuroscience relies on AI models to explore hypotheses and analyze large volumes of neural data.

Moreover, in these differences, we find the origin of ethical and governance issues that must be urgently addressed.

The temptation to anthropomorphize AI can shift responsibility from people toward technology, diverting attention toward science-fiction scenarios—utopian or dystopian—and obscuring real risks such as data bias, lack of transparency, concentration of power, and the deepening of inequalities. As the Chinese government states in its Global AI Governance Initiative (2023), “we must adhere to the principle of developing AI for good, respect the relevant international laws, and align AI development with humanity's common values.”

Understanding that AI is not an artificial brain but a statistical artifact with significant consequences for social life compels the formulation of policies that address current risks: audits, traceability, transparency, capacity-building, and strengthening digital infrastructure in lagging countries, in line with the call of the UN General Assembly in resolution A/RES/78/265 (2024) to “close the digital divides and to promote equitable access to the benefits of safe, secure and trustworthy artificial intelligence systems.” Institutions created to mitigate systemic risks must be understood as contemporary mechanisms for protecting human rights.

In that direction, UNESCO’s Recommendation on the Ethics of Artificial Intelligence affirms that “the protection of human rights and dignity is the cornerstone of the Recommendation” (UNESCO, 2021: para. 23). Similarly, the European AI Act states its commitment to “improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of public interests such as health, safety and fundamental rights.”

For its part, the United States government has pursued, with particular emphasis since early 2025, an AI policy that prioritizes innovation, digital infrastructure development, and investment over citizen rights. The Executive Order Removing Barriers to American Leadership in Artificial Intelligence, of January 23 of that year, declares that the U.S. “must develop AI systems that are free from ideological bias or engineered social agendas” and establishes as national policy “to sustain and enhance America’s global AI dominance.”

In summary, the human brain and AI share certain minimal, very abstract principles, but differ profoundly in structure, purpose, and capabilities. AI is not—and cannot be—a “digital super-mind,” but a set of algorithms designed for specific tasks. Understanding this distinction is essential for a serious debate on regulation and governance that allows society to benefit from AI without undermining fundamental rights or eroding autonomy and democracy.

References

Bender, E. M., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

Buzsáki, G. (2006). Rhythms of the brain. Oxford University Press.

Citri, A., & Malenka, R. C. (2008). Synaptic plasticity: Multiple forms, functions, and mechanisms. Neuropsychopharmacology, 33(1), 18–41. https://doi.org/10.1038/sj.npp.1301559

Dehaene, S. (2014). Consciousness and the brain: Deciphering how the brain codes our thoughts. Viking.

European Parliament & Council of the European Union. (2024). Artificial Intelligence Act. EUR-Lex. https://eur-lex.europa.eu/

Faisal, A. A., Selen, L. P. J., & Wolpert, D. M. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9(4), 292–303. https://doi.org/10.1038/nrn2258

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press. https://www.deeplearningbook.org

Herculano-Houzel, S. (2009). The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3, Article 31. https://doi.org/10.3389/neuro.09.031.2009

Human Brain Project. (2023, September 4). Learning from the brain to make AI more energy-efficient. https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/04/learning-brain-make-ai-more-energy-efficient/

Kandel, E. R., Koester, J. D., Mack, S. H., & Siegelbaum, S. A. (2021). Principles of neural science (6th ed.). McGraw-Hill. https://accessmedicine.mhmedical.com/book.aspx?bookid=3249

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539

People’s Republic of China. Cyberspace Administration of China. (2023). Global AI Governance Initiative. Ministry of Foreign Affairs of the People’s Republic of China. https://www.mfa.gov.cn/eng/wjdt_665385/2649_665393/202310/t20231018_11159238.html

Sporns, O. (2011). Networks of the brain. MIT Press. https://mitpress.mit.edu/9780262528986/networks-of-the-brain/

Tulving, E. (1983). Elements of episodic memory. Oxford University Press.

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137

United Nations General Assembly. (2024). Resolution A/RES/78/265. Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development. https://documents-dds-ny.un.org/doc/UNDOC/GEN/N24/060/68/PDF/N2406068.pdf