AIhub.org
 

#AAAI2023 workshops round-up 1: AI for credible elections, and responsible human-centric AI


by
01 March 2023




As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 workshops were held, covering a wide range of AI topics. We hear from the organisers of two of the workshops, who tell us the key takeaways from their respective events.


AI for Credible Elections: A Call To Action with Trusted AI

Organisers: Biplav Srivastava, Anita Nikolich, Andrea Hickerson, Tarmo Koppel, Sachindra Joshi, Chris Dawes.

This workshop examined the challenges of credible elections globally in an academic setting, with apolitical discussion of significant issues. The key takeaways from the workshop were as follows:

  • How a technology is designed, and how issues around its use are handled, can affect voters’ trust in it. Electronic voting has a transparency problem, which a paper trail helps to mitigate somewhat.
  • The different jurisdictions in the US have considerable freedom in how they organize elections, but this freedom also produces inconsistency. Whether this is desirable for a credible election remains an open question.
  • AI can specifically help elections by disseminating official information (e.g., about candidates and the electoral process) at scale, personalized to a voter’s cognitive needs, in their own language and format. When AI helps to prevent mis- and disinformation, the impact can be far-reaching.

You can read a longer summary of the workshop here.


R2HCAI: Representation Learning for Responsible Human-Centric AI

Organisers: Ahmad Beirami, Ali Etemad, Asma Ghandeharioun, Luyang Liu, Ninareh Mehrabi, Pritam Sarkar.

  • The AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric AI (R2HCAI) brought together researchers who are broadly interested in representation learning for responsible human-centric AI. The goal of the workshop was to facilitate the development and adoption of AI systems that can enhance, augment, and improve the quality of human life.
  • We had six inspiring invited talks from renowned researchers, covering a wide range of research in the field of responsible human-centric AI. Marzyeh Ghassemi gave a talk on designing machine learning processes for equitable health systems, while Daniel Rückert shared their recent work on human-centered AI for medical imaging. Kathy Meier-Hellstern shared a framework for responsible AI for large models, and Jacob Andreas presented their research towards natural language supervision. Hima Lakkaraju gave a talk on “Bringing Order to Chaos: Probing the Disagreement Problem in XAI”. Finally, Deepak Pathak shared their research on how robots can learn from human videos by watching, practicing, and improving.
  • Moderated by Ali Etemad and Ninareh Mehrabi, panelists Jacob Andreas, Kathy Meier-Hellstern, Deepak Pathak, and Daniel Rückert held an insightful discussion on representation learning for responsible human-centric AI, including the responsible AI aspects of powerful generative models.
  • We congratulate Marwa Abdulhai, Clément Crepy, Dasha Valter, John Canny, and Natasha Jaques on winning the best paper award for their paper “Moral Foundations of Large Language Models”. This paper uses Moral Foundations Theory to analyze whether popular LLMs have acquired a bias towards a particular set of moral values.
  • We received 46 submissions from 219 authors, of which 30 papers were accepted. 69 program committee members were nominated by the workshop chairs, by authors, and through self-nominations. A further 32 area chairs were nominated by the workshop chairs. To maximize alignment between paper topics and reviewer expertise, we gathered paper bids from the program committee and area chairs and assigned papers to reviewers accordingly.

You can see the full list of workshops that took place at AAAI 2023 here.




AIhub is dedicated to free high-quality information about AI.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack






©2026.02 - Association for the Understanding of Artificial Intelligence