As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 different workshops were held, covering a wide range of different AI topics. We hear from the organisers of two of the workshops, who tell us their key takeaways from their respective events.
AI for Credible Elections: A Call To Action with Trusted AI
Organisers: Biplav Srivastava, Anita Nikolich, Andrea Hickerson, Tarmo Koppel, Sachindra Joshi, Chris Dawes.
This workshop examined the challenges of credible elections globally in an academic setting, with apolitical discussion of significant issues. The key takeaways from the workshop were as follows:
- How a technology is designed, and how issues around its usage are handled, can affect voters’ trust in it. Electronic voting has a transparency problem, which a paper trail helps mitigate somewhat.
- The different jurisdictions in the US have considerable freedom in organizing elections, but this freedom also produces inconsistency. It is an open question whether this is desirable for a credible election.
- AI can specifically help elections by disseminating official information (e.g., about candidates and the electoral process) at scale, personalized to a voter’s cognitive needs, in their language and format. When AI helps prevent mis- and dis-information, the impact can be far reaching.
You can read a longer summary of the workshop here.
R2HCAI: Representation Learning for Responsible Human-Centric AI
Organisers: Ahmad Beirami, Ali Etemad, Asma Ghandeharioun, Luyang Liu, Ninareh Mehrabi, Pritam Sarkar.
- The AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric AI (R2HCAI) brought together researchers who are broadly interested in representation learning for responsible human-centric AI. The goal of the workshop was to facilitate the development and adoption of AI systems that can enhance, augment, and improve the quality of human life.
- We had six inspiring invited talks from renowned researchers, covering a wide range of research in the field of responsible human-centric AI. Marzyeh Ghassemi gave a talk on designing machine learning processes for equitable health systems, while Daniel Rückert shared his recent work on human-centered AI for medical imaging. Kathy Meier-Hellstern shared a framework for responsible AI for large models, and Jacob Andreas presented his research towards natural language supervision. Hima Lakkaraju gave a talk on “Bringing Order to Chaos: Probing the Disagreement Problem in XAI”. Finally, Deepak Pathak shared his research on how robots can learn from human videos by watching, practicing, and improving.
- Ali Etemad and Ninareh Mehrabi moderated an insightful panel discussion with Jacob Andreas, Kathy Meier-Hellstern, Deepak Pathak, and Daniel Rückert on representation learning for responsible human-centric AI, including a discussion of the responsible AI aspects of powerful generative models.
- We congratulate Marwa Abdulhai, Clément Crepy, Dasha Valter, John Canny, and Natasha Jaques on winning the best paper award for their paper “Moral Foundations of Large Language Models”. This paper uses Moral Foundations Theory to analyze whether popular LLMs have acquired a bias towards a particular set of moral values.
- We received 46 submissions from 219 authors, of which 30 papers were accepted. 69 program committee members were nominated by the workshop chairs, by authors, and through self-nomination. There were also 32 area chairs nominated by the workshop chairs. To maximize alignment between paper topics and reviewer expertise, we gathered paper bids from the program committee and area chairs and assigned papers to reviewers accordingly.
You can see the full list of workshops that took place at AAAI 2023 here.
tags:
AAAI,
AAAI2023
AIhub
is dedicated to free high-quality information about AI.