One Hundred Year Study on Artificial Intelligence (AI100) – a panel discussion at #IJCAI-PRICAI 2020


by Lucy Smith
21 January 2021




One of the panel discussions at IJCAI-PRICAI 2020 focussed on the One Hundred Year Study on Artificial Intelligence (AI100). The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society, and how it shapes different aspects of our lives. This IJCAI session brought together some of the people involved in the AI100 initiative to discuss their efforts and the direction of the project.

Taking part in the panel discussion were:

  • Mary L. Gray (Microsoft Research & Harvard University)
  • Peter Stone (University of Texas at Austin)
  • David Robinson (Cornell University)
  • Johannes Himmelreich (Syracuse University)
  • Thomas Arnold (Tufts University)
  • Russ Altman (Stanford University)

The goals of the AI100 are “to support a longitudinal study of AI advances on people and society, centering on periodic studies of developments, trends, futures, and potential disruptions associated with the developments in machine intelligence, and formulating assessments, recommendations and guidance on proactive efforts”.

Working on the AI100 project are a standing committee and a study panel. The first study panel report, released in 2016, can be read in full here. This report provides insights from people who work closely in the field, in part to counter the external perceptions and hype that surround AI, and to portray accurately what is going on in the field. The intended audience for the report is broad, ranging from AI researchers to the general public, from industry to policy makers.

The second study panel report, expected in late 2021, is now underway. It will be based, in part, on two study-workshops commissioned by the AI100 standing committee, one entitled “Coding Caring” and the other “Prediction in Practice”.

In the first part of the session, the panellists discussed their experiences from the two study workshops. These workshops brought together a wide range of stakeholders, including academics (from different disciplines), start-ups, care-givers and other practitioners. They also brought in people who had created high-stakes AI applications, to learn what it is like to maintain and integrate AI applications as part of a larger system. Their aim was to address conceptual, ethical and political issues via a multidisciplinary approach. Balancing the needs of system users, customers, start-ups and the public sector is a fiendishly difficult challenge, but one that must be addressed.

In the second part of the session, we heard views on the value of the AI100 initiative. AI100 provides a periodic, longitudinal view of how AI is perceived by society, and aims to report realistic hopes and concerns. AI has progressed to the point where we need to be having conversations with practitioners about the implications of deploying AI systems in settings such as care and law. AI researchers should be aware of how their work impacts society.

Find out more about the next report here.



Lucy Smith is Senior Managing Editor for AIhub.