One Hundred Year Study on Artificial Intelligence (AI100) – a panel discussion at #IJCAI-PRICAI 2020

by Lucy Smith
21 January 2021



One of the panel discussions at IJCAI-PRICAI 2020 focussed on the One Hundred Year Study on Artificial Intelligence (AI100). The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society, and how it shapes different aspects of our lives. This IJCAI session brought together some of the people involved in the AI100 initiative to discuss their efforts and the direction of the project.

Taking part in the panel discussion were:

  • Mary L Gray (Microsoft Research & Harvard University)
  • Peter Stone (University of Texas at Austin)
  • David Robinson (Cornell University)
  • Johannes Himmelreich (Syracuse University)
  • Thomas Arnold (Tufts University)
  • Russ Altman (Stanford University)

The goals of the AI100 are “to support a longitudinal study of AI advances on people and society, centering on periodic studies of developments, trends, futures, and potential disruptions associated with the developments in machine intelligence, and formulating assessments, recommendations and guidance on proactive efforts”.

Working on the AI100 project are a standing committee and a study panel. The first study panel report, released in 2016, can be read in full here. This report provides insights from people who work closely in the field, in part to counter the external perceptions and hype that surround AI, and to accurately portray what is going on in the field. The intended audience for the report is broad, ranging from AI researchers to the general public, and from industry to policy makers.

The second study panel report, expected in late 2021, is now underway. It will be based, in part, on two study-workshops commissioned by the AI100 standing committee, one entitled “Coding Caring” and the other “Prediction in Practice”.

In the first part of the session, the panellists discussed their experiences from the two study workshops. These workshops brought together a wide range of stakeholders, including academics (from different disciplines), start-ups, care-givers, and other practitioners. They also brought in people who had created high-stakes AI applications, to learn what it is like to maintain and integrate such applications as part of a larger system. Their aim was to address conceptual, ethical and political issues via a multidisciplinary approach. Balancing the needs of system users, customers, start-ups and the public sector is a fiendishly difficult challenge, but one that it is necessary to address.

In the second part of the session, we heard views on the value of the AI100 initiative. AI100 provides a periodic, longitudinal record of how AI is perceived by society, and aims to report realistic hopes and concerns. AI has progressed to the point where we need to be having conversations with practitioners about the implications of deploying AI systems in settings such as care and law. AI researchers should be aware of how their work impacts society.

Find out more about the next report here.



Lucy Smith, Managing Editor for AIhub.



