
One Hundred Year Study on Artificial Intelligence (AI100) – a panel discussion at #IJCAI-PRICAI 2020


By Lucy Smith
21 January 2021




One of the panel discussions at IJCAI-PRICAI 2020 focussed on the One Hundred Year Study on Artificial Intelligence (AI100). The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society, and how it shapes different aspects of our lives. This IJCAI session brought together some of the people involved in the AI100 initiative to discuss their efforts and the direction of the project.

Taking part in the panel discussion were:

  • Mary L Gray (Microsoft Research & Harvard University)
  • Peter Stone (University of Texas at Austin)
  • David Robinson (Cornell University)
  • Johannes Himmelreich (Syracuse University)
  • Thomas Arnold (Tufts University)
  • Russ Altman (Stanford University)

The goals of the AI100 are “to support a longitudinal study of AI advances on people and society, centering on periodic studies of developments, trends, futures, and potential disruptions associated with the developments in machine intelligence, and formulating assessments, recommendations and guidance on proactive efforts”.

Working on the AI100 project are a standing committee and a study panel. The first study panel report, released in 2016, can be read in full here. This report provides insights from people who work closely in the field, in part to counter the external perceptions and hype which surround AI, and to accurately portray what is going on in the field. The intended audience for the report is broad, ranging from AI researchers to the general public, and from industry to policy makers.

The second study panel report, expected in late 2021, is now underway. It will be based, in part, on two study-workshops commissioned by the AI100 standing committee, one entitled “Coding Caring” and the other “Prediction in Practice”.

In the first part of the session, the panellists discussed their experiences from the two study workshops. These study groups brought together a wide range of stakeholders, including academics (from different disciplines), start-ups, care-givers, and other practitioners. They also brought in people who had created high-stakes AI applications, to find out what it was like to maintain and integrate AI applications as part of a larger system. Their aim was to address conceptual, ethical and political issues via a multidisciplinary approach. Balancing the needs of system users, customers, start-ups and the public sector is a fiendishly difficult challenge, but one that must be addressed.

In the second part of the session, we heard views on the value of the AI100 initiative. AI100 allows a periodic, longitudinal view of how AI is viewed by society, and aims to report realistic hopes and concerns. AI has progressed to the point where we need to be having conversations with practitioners about the implications of deploying AI systems in settings such as care and law. AI researchers should be aware of how their work impacts society.

Find out more about the next report here.



Lucy Smith is Senior Managing Editor for AIhub.

