 

One Hundred Year Study on Artificial Intelligence (AI100) – a panel discussion at #IJCAI-PRICAI 2020

by Lucy Smith
21 January 2021




One of the panel discussions at IJCAI-PRICAI 2020 focussed on the One Hundred Year Study on Artificial Intelligence (AI100). The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society, and how it shapes different aspects of our lives. This IJCAI session brought together some of the people involved in the AI100 initiative to discuss their efforts and the direction of the project.

Taking part in the panel discussion were:

  • Mary L. Gray (Microsoft Research & Harvard University)
  • Peter Stone (University of Texas at Austin)
  • David Robinson (Cornell University)
  • Johannes Himmelreich (Syracuse University)
  • Thomas Arnold (Tufts University)
  • Russ Altman (Stanford University)

The goals of the AI100 are “to support a longitudinal study of AI advances on people and society, centering on periodic studies of developments, trends, futures, and potential disruptions associated with the developments in machine intelligence, and formulating assessments, recommendations and guidance on proactive efforts”.

Working on the AI100 project are a standing committee and a study panel. The first study panel report, released in 2016, can be read in full here. The report provides insights from people who work closely in the field, in part to counter the external perceptions and hype that surround AI, and to accurately portray what is going on in the field. The intended audience is broad, ranging from AI researchers to the general public, from industry to policy makers.

The second study panel report, expected in late 2021, is now underway. It will be based, in part, on two study-workshops commissioned by the AI100 standing committee, one entitled “Coding Caring” and the other “Prediction in Practice”.

In the first part of the session, the panellists discussed their experiences from the two study workshops. These workshops brought together a wide range of stakeholders, including academics (from different disciplines), start-ups, care-givers and other practitioners. They also brought in people who had created high-stakes AI applications, to find out what it is like to maintain and integrate AI applications as part of a larger system. Their aim was to address conceptual, ethical and political issues via a multidisciplinary approach. Balancing the needs of system users, customers, start-ups and the public sector is a fiendishly difficult challenge, but one that must be addressed.

In the second part of the session, we heard views on the value of the AI100 initiative. AI100 allows a periodic, longitudinal view of how AI is viewed by society, and aims to report realistic hopes and concerns. AI has progressed to the point where we need to be having conversations with practitioners about the implications of deploying AI systems in settings such as care and law. AI researchers should be aware of how their work impacts society.

Find out more about the next report here.





Lucy Smith, Managing Editor for AIhub.



