AI Today: A “Written Short Film” in 10 Scenes

By Javier Surasky


 

Almost without realizing it, one afternoon we found ourselves not searching Google for what we wanted to know, but talking with an AI that not only answered us—it reasoned, translated, summarized, and even proposed new approaches. It wasn’t magic, as it might have seemed to those sitting for the first time in front of one of the new language models; it was the result of more than seventy years of technical, scientific, and political decisions that together tell a fascinating story, full of breakthroughs and contradictions.

Like a written short film, this story can be told through ten short scenes—with two surprising cameos. A story to read in one sitting to understand precisely how we got here.


Scene 1. Turing Changes the Question

1950

Context: post–World War II. The United Nations had just been created, but the Cold War divided the world.

Involved in British intelligence more out of an obsession with solving riddles than patriotism, Alan Turing proposed shifting the debate: instead of asking whether a machine “thinks,” we should ask whether, in a conversation, one can tell the difference between a human and a machine. The “imitation game” was born, and with it, the first measurable framework for talking about artificial intelligence. The field had no name yet, but it already had a horizon.


First cameo: in 1956, the “Dartmouth Workshop” gathered the founding community of AI researchers and gave the field its name. If Turing laid the foundation stone, Dartmouth brought the builders who began the work.


Scene 2. Learning from Mistakes

1986

Context: Reagan and Thatcher are in power, Europe signs the Schengen Agreement, and Latin America experiences democratic transitions.

Backpropagation makes network training practical: the machine makes a mistake, measures how much, corrects it, and tries again. Rumelhart, Hinton, and Williams gave machines a feedback loop that allowed them to “look back to move forward,” enabling the efficient training of deep networks.
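For readers who enjoy seeing the gears turn, here is a minimal sketch of that loop, assuming only Python and NumPy: a tiny network predicts, measures its mistake, pushes the error backward, adjusts its weights, and tries again. It illustrates the idea, not the exact setup of the 1986 paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: learn XOR with one hidden layer, trained by backpropagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

for step in range(10000):
    # Forward pass: make a prediction.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Measure the mistake and push it backward through the network.
    grad_p = (p - y) * p * (1 - p)            # error signal at the output layer
    grad_h = (grad_p @ W2.T) * h * (1 - h)    # "look back" to the hidden layer

    # Correct the weights, then try again on the next pass.
    W2 -= lr * h.T @ grad_p; b2 -= lr * grad_p.sum(axis=0)
    W1 -= lr * X.T @ grad_h; b1 -= lr * grad_h.sum(axis=0)

print(p.round(2))  # typically close to [0, 1, 1, 0]
```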


Scene 3. Memory and Context

1997

Context: the Berlin Wall has fallen; Europe signs the Maastricht Treaty creating the EU; Mandela becomes South Africa’s president; Rwanda suffers genocide; the Zapatistas rise in Mexico; Israel and the PLO sign the Oslo Accords; and Asia faces a financial crisis.

LSTM (Long Short-Term Memory) networks give AI a notebook filled with gates and memory cells that lets it retain what matters across long sequences of text, speech, or time series, taming the vanishing-gradient problem. The whisper-down-the-line game that plagued backpropagation through long sequences was finally over.
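As a rough illustration (not production code), here is what a single LSTM step looks like in Python with NumPy: the gates decide what the memory cell forgets, what it writes down, and what it reveals at each step.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W has shape (4*H, D+H); b has shape (4*H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0*H:1*H])        # forget gate: what to erase from the notebook
    i = sigmoid(z[1*H:2*H])        # input gate: what to write down
    g = np.tanh(z[2*H:3*H])        # candidate content to write
    o = sigmoid(z[3*H:4*H])        # output gate: what to reveal right now
    c = f * c_prev + i * g         # memory cell updated additively, not overwritten
    h = o * np.tanh(c)             # what the network reports at this step
    return h, c

# Run a toy sequence of 10 random inputs through the cell.
D, H = 3, 5
rng = np.random.default_rng(0)
W, b = rng.normal(scale=0.1, size=(4 * H, D + H)), np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):
    h, c = lstm_step(x, h, c, W, b)
print(h.round(3))
```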


Scene 4. Seeing to Understand

2012

Context: the world still feels the 2008 financial crisis; the Lisbon Treaty enters into force; WikiLeaks shakes global politics; and the Rio+20 Conference is held.

With AlexNet, the combination of vast image datasets, GPUs, ReLU nonlinearities, and data augmentation breaks performance records in computer vision. Error rates plummet, and machines begin to recognize what they see with startling accuracy.


Scene 5. Meaning Speaks the Language of Geometry and Algebra

2013–2014

Context: a new Pope (Francis); China launches the Belt and Road Initiative; Russia annexes Crimea; ISIS rises; and Malaysia Airlines flight MH370 disappears.

Embeddings represent words and concepts as numerical vectors, mapping meaning geometrically: king is to man as queen is to woman, and astronaut sits roughly equidistant from man and woman. This distributed representation makes mapping meaning and measuring distances the heart of how AI understands language.
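A toy sketch makes the geometry concrete. The vectors below are invented by hand purely for illustration; real embeddings are learned by models such as word2vec or GloVe and live in hundreds of dimensions. Still, the arithmetic is the same: subtracting man from king and adding woman lands closest to queen.

```python
import numpy as np

# Hand-made 3-dimensional vectors, purely illustrative.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    """Similarity as the angle between two meaning-vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman should land near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max((w for w in vectors if w != "king"),
           key=lambda w: cosine(target, vectors[w]))
print(best)  # with these toy vectors: "queen"
```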


Scene 6. Big Data!

2015

Context: the world agrees on the 2030 Agenda for Sustainable Development, the Paris Agreement, and the Addis Ababa Action Agenda on financing for development.

The UN report A World That Counts establishes data as the new infrastructure of development, bringing Big Data and the “data revolution” from the tech world into multilateral policymaking.


Second cameo: in 2016, Federated Learning emerges. The model travels to the data instead of the data traveling to the model, so machines learn collaboratively without extracting anyone's data, strengthening privacy and sparking debates about data sovereignty and digital cooperation.
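A minimal sketch of the idea, in the spirit of federated averaging and assuming only NumPy: each client trains on its own private data, and the server aggregates only the resulting model weights. The clients and their data here are synthetic, invented for illustration.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=50):
    """Gradient descent on one client's private data (mean squared error)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                        # three clients, three private datasets
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(5):                   # a few communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # the server averages weights, never raw data

print(global_w.round(2))                  # should approach [ 2., -1.]
```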


Scene 7. Attention Is All You Need

2017

Context: Brexit unfolds, Trump takes office, the Colombian peace agreement is signed, and the Panama Papers scandal explodes.

A paper titled Attention Is All You Need introduces a new neural network architecture, the Transformer, which parallelizes training, captures long-range dependencies, and trains more stably. Its attention mechanism rests on a simple idea: not everything in a sequence is equally important, so the model learns how much weight to give each piece of context. The Transformer becomes the architectural hinge of modern AI.
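Here is the heart of that mechanism, scaled dot-product attention, sketched in a few lines of Python with NumPy. It is a simplified single-head version without the learned projections a real Transformer applies to produce queries, keys, and values.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every position looks at every other position and decides how much it matters."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # relevance of each token to each other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax: a weighting of context
    return weights @ V                                        # a context-aware mix of the values

# Toy example: 4 tokens, 8-dimensional representations.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
# In a real Transformer, Q, K, and V come from learned projections of the tokens;
# here we reuse the tokens directly to keep the sketch minimal.
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): each token re-expressed in terms of all the others
```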


Scene 8. AI Gets Conversational

2018–2023

Context: the U.S.-China trade war, Brexit’s conclusion, France’s “yellow vests,” Bolsonaro’s presidency, the COVID-19 pandemic, and Russia’s invasion of Ukraine.

In 2018, Google presented BERT, a model that “understands” context by reading both before and after each word. This leap brings nuance, irony, and richer meaning to machines and, together with the GPT family, opens the era of LLMs (Large Language Models). AI now speaks our language, driving mass adoption.
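You can watch the “reading both before and after” at work with a short example. It assumes the Hugging Face transformers library is installed and that the public bert-base-uncased checkpoint can be downloaded; the model fills in a blanked-out word using the words on both sides of the gap.

```python
# Requires: pip install transformers (plus a backend such as PyTorch).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# BERT uses the context before AND after [MASK] to guess the missing word.
for pred in fill("The capital of France is [MASK].")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```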


Scene 9. Hard Science, Real Impact, and a Nobel Prize

2020–2021

AlphaFold2 predicts protein structures with near-laboratory precision, releasing a global atlas of predicted structures. AI proves it can accelerate science, not just classify photos. The creators of AlphaFold later shared the 2024 Nobel Prize in Chemistry for the breakthroughs their work enabled.


Scene 10. The First Comprehensive AI Law

2024–2025

Context: Trump’s second term; war in Ukraine; conflict in Gaza; expansion of BRICS; and the UN Summit of the Future adopts the Global Digital Compact.

Amid competing governance models, with the U.S. betting on the free market and China on a state-centered approach, the EU takes the middle path, balancing innovation and rights, and passes the world’s first comprehensive AI law: the EU AI Act. Its core is risk-based regulation: the higher the risk an AI system poses, the stricter the obligations, up to the outright prohibition of unacceptable uses.


Final Credits

Far from a historical museum of AI achievements, this “written short film” is a living map of forces still in motion. Turing started it; backpropagation and LSTM taught us how to teach; AlexNet improved perception; embeddings deepened understanding; the world exploded in data; Transformers optimized learning; LLMs democratized AI; AlphaFold proved its scientific worth; and the AI Act drew the first comprehensive rules.

In AI, not everything is new, nothing has changed completely, and no one knows what comes next; that is precisely what makes it so fascinating, innovative, and challenging. Before fearing or defending it, we must understand it.

As we have said before, intelligence is not only what lies inside a head (or a machine) but what emerges in the space between them.