AIhub.org
 

One Hundred Year Study on Artificial Intelligence (AI100) – a panel discussion at #IJCAI-PRICAI 2020


by Lucy Smith
21 January 2021




One of the panel discussions at IJCAI-PRICAI 2020 focussed on the One Hundred Year Study on Artificial Intelligence (AI100). The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society, and how it shapes different aspects of our lives. This IJCAI session brought together some of the people involved in the AI100 initiative to discuss their efforts and the direction of the project.

Taking part in the panel discussion were:

  • Mary L Gray (Microsoft Research & Harvard University)
  • Peter Stone (University of Texas at Austin)
  • David Robinson (Cornell University)
  • Johannes Himmelreich (Syracuse University)
  • Thomas Arnold (Tufts University)
  • Russ Altman (Stanford University)

The goals of the AI100 are “to support a longitudinal study of AI advances on people and society, centering on periodic studies of developments, trends, futures, and potential disruptions associated with the developments in machine intelligence, and formulating assessments, recommendations and guidance on proactive efforts”.

Working on the AI100 project are a standing committee and a study panel. The first study panel report, released in 2016, can be read in full here. These reports provide insights from people who work closely in the field, in part to counter the external perceptions and hype which surround AI, and to accurately portray what is going on in the field. The intended audience is broad, ranging from AI researchers to the general public, from industry to policy makers.

The second study panel report, expected in late 2021, is now underway. It will be based, in part, on two study-workshops commissioned by the AI100 standing committee, one entitled “Coding Caring” and the other “Prediction in Practice”.

In the first part of the session, the panellists discussed their experiences from the two study workshops. These study groups brought together a whole range of stakeholders, including academics (from different disciplines), start-ups, care-givers, and other practitioners. They also brought in people who had created high-stakes AI applications, and learned what it was like to maintain and integrate AI applications as part of a larger system. Their aim was to address conceptual, ethical and political issues via a multidisciplinary approach. Balancing the needs of system users, customers, start-ups and the public sector is a fiendishly difficult challenge, but one that must be addressed.

In the second part of the session, we heard views on the value of the AI100 initiative. AI100 allows a periodic, longitudinal view of how AI is viewed by society, and aims to report realistic hopes and concerns. AI has progressed to the point where we need to be having conversations with practitioners about the implications of deploying AI systems in settings such as care and law. AI researchers should be aware of how their work impacts society.

Find out more about the next report here.



Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:











©2024 - Association for the Understanding of Artificial Intelligence


 











