AIhub.org
 

One Hundred Year Study on Artificial Intelligence (AI100): 2021 report released


by Lucy Smith
16 September 2021




Image from AI100 report. Reproduced under a CC BY-ND 4.0 licence.

Today, the One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report has been released.

The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society and how it shapes different aspects of our lives. The first report was published in 2016, and, like that inaugural document, the 2021 edition has been written by a team of AI experts, all with extensive experience in the field.

The report aims to address four audiences: the general public, industry, government, and AI researchers. It is structured as a collection of responses by the 2021 Study Panel to 12 standing questions and two workshop questions posed by the AI100 Standing Committee. You can see these questions below.

The standing questions:

  1. What are some examples of pictures that reflect important progress in AI and its influences?
  2. What are the most important advances in AI?
    • Underlying technologies; language processing; computer vision and image processing; games; robotics; mobility; health; finance; recommender systems.
  3. What are the most inspiring open grand challenge problems?
    • Turing test; RoboCup; International Math Olympiad; the AI scientist; broader challenges.
  4. How much have we progressed in understanding the key mysteries of human intelligence?
    • Collective intelligence; cognitive neuroscience; computational modelling; the state of the art.
  5. What are the prospects for more general artificial intelligence?
    • Self-supervised learning with the transformer architecture; making deep reinforcement learning more general; common sense.
  6. How has public sentiment towards AI evolved, and how should we inform/educate the public?
    • Primary drivers of public understanding and sentiment; improving and widening public understanding of AI: where do we go from here?
  7. How should governments act to ensure AI is developed and used responsibly?
    • Law, policy, and regulation; AI research & development as a policy priority; cooperation and coordination on international policy; case study: lethal autonomous weapons; from principles to practice; dynamic regulation, experimentation, and testing.
  8. What should the roles of academia and industry be, respectively, in the development and deployment of AI technologies and the study of the impacts of AI?
    • Research and innovation; research into societal and ethical issues; development and deployment; education and training; societal impact: monitoring and oversight.
  9. What are the most promising opportunities for AI?
    • AI for augmentation; AI agents on their own.
  10. What are the most pressing dangers of AI?
    • Techno-solutionism; dangers of adopting a statistical perspective on justice; disinformation and threat to democracy; discrimination and risk in the medical setting.
  11. How has AI impacted socioeconomic relationships?
    • The story so far; AI and inequality; localized impact; how the pie is sliced; market power; the future.
  12. Does it appear “building in how we think” works as an engineering strategy in the long run?

The workshop questions:

  1. How are AI-driven predictions made in high-stakes public contexts, and what social, organizational, and practical considerations must policymakers consider in their implementation and governance?
    • Problem formalization; integration, not deployment; diverse governance practices.
  2. What are the most pressing challenges and significant opportunities in the use of artificial intelligence to provide physical and emotional care to people in need?
    • Autonomous systems are enhancing human-to-human care; autonomous systems should not replace human-care relationships; autonomous care technologies produce new challenges; caring AI should be led by social values, not the market.

If you are interested in hearing from some of the researchers who contributed to the report, you can join this seminar on 28 September 2021. This fully virtual event will be led by Russ Altman, Peter Stone, and Michael Littman, and will offer two broadcasts (9:00-10:00 PDT and 17:00-18:00 PDT).

How to cite the report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.

Useful links

One Hundred Year Study on Artificial Intelligence (AI100) homepage
Read the report here.
Download the report (pdf) here.
The 2016 report can be found here.




Lucy Smith is Senior Managing Editor for AIhub.
