Trailblazing Women in AI (Part 2)

By Javier Surasky

This is the original version of the blog entry.
A Spanish version (ES) will be available next Thursday.

Eleven trailblazing women in AI after Dartmouth: vision, robotics, ethics, and data

A few introductory words

We pick up where the previous post left off—bringing back into view the women whose ideas, methods, and leadership have shaped AI’s evolution. This time, the spotlight turns to 11 figures born after the 1956 Dartmouth Conference, the landmark moment often cited as AI’s formal starting point, and whose work helped push the field from founding ambitions into real-world breakthroughs.

 

Outstanding Women in AI (Part 2: Post–Dartmouth Conference)

1. Leslie Pack Kaelbling.

Kaelbling was born in August 1961 in the United States, where she has carried out most of her scientific work at SRI International, Brown University, and later MIT, becoming a leading figure in reinforcement learning applied to machines operating in uncertain environments, something especially relevant for today's robotics.

Her work helped turn theoretical ideas into methods that allow systems to learn strategies and behaviors, and her article Reinforcement Learning: A Survey, co-authored with Littman and Moore (1996), is a reference text in reinforcement learning studies.

2. Rosalind W. Picard.

Picard was born in May 1962 in the United States and has developed her work mainly at MIT, while also founding and supporting spin-offs such as Affectiva and Empatica.

Her book Affective Computing (1997) is the founding text of the field of affective computing, an area that explores how systems can recognize and respond to emotional signals and human states (stress, attention, wellbeing), and it has influenced sensor-based technologies and today's wearables, as well as applications for health and learning. There she states: “Being a woman in a field containing mostly men has provided [her] extra incentive to cast off the stereotype of emotional female in favor of the logical behavior of a scholar” (Picard, 1997:ix).

3. Cordelia Schmid.

Schmid was born in September 1967 in Mainz, Germany, but has developed most of her career in France, becoming a world reference in computer vision.

She contributed to the creation of methods to recognize patterns in images and video, applying the idea that it was possible to teach machines to interpret the visual world by combining mathematics, data, and large-scale experimentation. Her contributions lie behind today’s architectures for video analysis applications and systems that “understand” complex scenes, a field in which her work Action Recognition with Improved Trajectories, co-authored with Heng Wang, is among the most influential texts (Wang & Schmid, 2013).

4. Cynthia Breazeal.

Breazeal was born in November 1967 in the United States and has based the central part of her work at the MIT Media Lab, from where she has worked to bring AI into the real world through robots designed to interact with people, learn from them, and build more natural bonds, most notably through the construction of Kismet, a project for which she served as chief designer. Her book Designing Sociable Robots (Breazeal, 2002) consolidated the vision of social robotics and helped place human–robot interaction on the main AI agenda. Breazeal is also recognized as a major driver of AI literacy and public education initiatives.

5. Daphne Koller.

Koller was born in August 1968 in Israel and developed her career mainly in the United States, with a key academic period at Stanford University and later large-scale projects in education and biomedicine. She combined AI with two fields of high social impact, education and health, through probabilistic models that allow reasoning under uncertainty; later in her career she turned to biomedicine, aiming for AI to help accelerate drug discovery. She is, however, best known for her push for large-scale online education, which led her to co-found Coursera in 2012.

6. Catherine D’Ignazio.

D’Ignazio was born in 1975 in the United States, where her career has developed and where she now works at MIT (Department of Urban Studies and Planning), directing the Data + Feminism Lab. Her work focuses on the intersection of data, power, and social justice, with an emphasis on data literacy, feminist technology, and social practices. She criticizes the treatment of data as “neutral” and analyzes the harms of automated decisions.

Her most influential contribution is the book co-written with Lauren Klein, Data Feminism, in which the authors systematize the “data feminism” approach, offering a framework to work with ideas of power and representation in data production and management practices: “Our claim, once again, is that data feminism is for everyone. It’s for people of all genders. It’s by people of all genders. And most importantly: it’s about much more than gender. Data feminism is about power, about who has it and who doesn’t, and about how those differentials of power can be challenged and changed using data” (D’Ignazio & Klein, 2020:19).

7. Fei-Fei Li.

Known as “the godmother of AI,” Fei-Fei Li was born on July 3, 1976, in China, and developed her career mainly in the United States, both in academia and in the private sector, working, for example, for Google Cloud. She is the co-founder of World Labs, a company developing generative AI systems that perceive, generate, reason, and interact with the world in three dimensions, and she is currently co-director of Stanford’s Human-Centered Artificial Intelligence (HAI) Institute.

Her key field is computer vision, where she has produced both theoretical advances, illustrated, for example, by her participation in the team that wrote the paper ImageNet: A Large-Scale Hierarchical Image Database (Deng et al., 2009), which was key in triggering the era of massive datasets in computer vision, and practical developments. In parallel, she has promoted a “people-centered” perspective concerned with responsible applications and AI’s social benefits.

8. Yejin Choi.

Choi was born in 1977 in South Korea but has developed her career in the United States, holding academic positions at universities such as the University of Washington and, currently, Stanford University and Stanford HAI.

Her influence appears in the field known as commonsense knowledge & reasoning in natural language. She focuses on the problem of machines’ “common sense,” seeking to develop models that have at least basic notions about the world in order to avoid absurd or dangerous answers in the real-world contexts in which they are deployed. Along the way, she has shown that LLMs often fail because they do not operate with the basic inferences about intentions, consequences, and social norms that people take for granted.

9. Timnit Gebru.

Gebru was born in Ethiopia in 1983. Her professional work has unfolded between academia and industry in the United States, where she arrived as a refugee, including positions at Stanford, Microsoft Research, and Google, and later in independent research through the creation of DAIR (the Distributed AI Research Institute).

She is a central figure in current debates on ethics and responsibility in AI, with long-standing work on bias in systems, the concealment of social costs, and the concentration of power in the absence of safeguards for transparency and control. She participated in the team that wrote the article On the Dangers of Stochastic Parrots (Bender et al., 2021), indispensable reading for anyone interested in discussions about the risks, costs, and governance of large-scale language models.

10. Joy Buolamwini.

Buolamwini was born on January 23, 1990, in Canada and has developed her work mainly in the United States, more specifically at the MIT Media Lab and the Algorithmic Justice League, combining research, auditing, and outreach around AI with activism demanding standards, evaluations, and accountability.

She turned a technical problem into a topic of political and social debate by showing, in her study Gender Shades, co-authored with Gebru, disparities in accuracy by gender and skin type in commercial AI-assisted classification. Even before that, she had shown that facial recognition systems failed more often with women and with Black people.

11. Rediet Abebe.

Abebe is the only person on the list born in the 1990s, more precisely in 1991, in Ethiopia. She moved to the United States to study at Harvard, continued at the University of Cambridge, and finally earned her PhD in computer science at Cornell University, with a dissertation titled Designing Algorithms for Social Good (Abebe, 2019).

She was a co-founder of Black in AI and of Mechanism Design for Social Good, dedicating her academic and field work to promoting equity through algorithms and to embedding equity within them. She has designed algorithmic methods and frameworks to understand and mitigate inequities and to support interventions that create opportunities for marginalized or vulnerable populations.

Final thoughts

Beyond showing that women have played an important role in AI from its very origins, the list also reveals other inequalities that compound gender inequality within the AI space, making intersectionality clearly visible.

When analyzing their countries of birth, we find nine Americans, two British women, two German women, two Ethiopian women, and one person from each of the following countries: Czechoslovakia (today Slovakia), China, South Korea, Ghana, and Israel. Five of them are African or African-American (Gladys Brown West, Margaret Hamilton, Timnit Gebru, Rediet Abebe, and Joy Buolamwini), and two have direct Asian ancestry (Fei-Fei Li and Yejin Choi). There is no Latin American woman or any woman from the Arab world on the list.

Of the 11 women born outside the United States, the two British women and one German woman (Katharina Morik) pursued their professional careers in their countries of origin, and the other German woman (Cordelia Schmid) migrated to France. The rest moved to build their professional careers in the United States: two arrived there as students (Ruzena Bajcsy and Rediet Abebe) and one as a refugee (Timnit Gebru). This means that of the 20 women listed, 8 were migrants or refugees.

The few reports that currently offer official data on diversity in AI are relatively recent, and their metrics are not always a good reflection of what they seek to measure (the UNESCO index mentioned in the introduction, for example, relies on LinkedIn data, which cannot give a comprehensive picture of what occurs in the field). Despite these shortcomings, everything indicates that the marginalization women experience in science is replicated, and even worsened, in the field of AI, and that it includes intersectional dimensions, especially ethnic ones.

Returning to the work of pioneers and leaders in the field is not only a way to recognize their contributions to contemporary AI, but also a reminder of their historical marginalization and the efforts they have made to achieve recognition.

The list of 20 women we worked with is only the tip of an iceberg that includes many other marginalized women we will never know about, precisely because the doors to the field were closed to them or they were not allowed to fully assert their capacities. Moreover, it is a list missing many other names that we had to exclude for reasons of space and format, but who deserve the same recognition, such as Martha Pollack (1958, AI for cognitive assistance [intelligent cognitive orthotics]); Claudia Eckert (1959, cybersecurity); Daniela Rus (robotics and AI, distributed robotics); Rineke Verbrugge (1965, logics for multi-agent systems and computational models of social cognition/theory of mind); Maarja Kruusmaa (autonomous and bio-inspired robotics); Kate Crawford (1972, social, political, and ethical implications of AI); Nicola Dell (human-centered AI, safer and more equitable technology, especially for underserved or at-risk communities, and technology for survivors of intimate partner violence); Kate Devlin (social robotics, intimacy and sexuality); Kira Radinsky (1986, predictive AI and applied machine learning); or Deborah Raji (1995, algorithmic auditing and accountability in AI), among others.

The first woman on our chronological list, Ada Lovelace, could not be admitted to the Royal Society’s library because of her sex, which prevented her from directly accessing the scientific literature of her time. The last, Rediet Abebe, speaking about being the first Black woman professor in the computer science department at the University of California, Berkeley, said: “I’m going to come into a space that was not built for me.”

Technology changes fast; prejudice within the scientific sphere does not.


References

Abebe, R. (2019). Designing algorithms for social good (Doctoral dissertation, Cornell University). https://ecommons.cornell.edu/server/api/core/bitstreams/0154e72e-ec86-4622-bf4e-401e9c9a5eda/content

Bender, E.; Gebru, T.; McMillan-Major, A. & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://dl.acm.org/doi/epdf/10.1145/3442188.3445922

Breazeal, C. L. (2002). Designing sociable robots. The MIT Press.

Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K. & Li, F.-F. (2009). ImageNet: A Large-Scale Hierarchical Image Database. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

D’Ignazio, C. & Klein, L. (2020). Data Feminism. The MIT Press.

Kaelbling, L. P.; Littman, M. L. & Moore, A. W. (1996). Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4, 237–285. https://www.jair.org/index.php/jair/article/view/10166/24110

Picard, R. (1997). Affective computing. The MIT Press.

Wang, H. & Schmid, C. (2013). Action Recognition with Improved Trajectories. Proceedings of the IEEE International Conference on Computer Vision (ICCV), IEEE, 3551–3558. https://openaccess.thecvf.com/content_iccv_2013/papers/Wang_Action_Recognition_with_2013_ICCV_paper.pdf

 
