AIhub.org

#AAAI2023 workshops round-up 1: AI for credible elections, and responsible human-centric AI


01 March 2023




As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 different workshops were held, covering a wide range of different AI topics. We hear from the organisers of two of the workshops, who tell us their key takeaways from their respective events.


AI for Credible Elections: A Call To Action with Trusted AI

Organisers: Biplav Srivastava, Anita Nikolich, Andrea Hickerson, Tarmo Koppel, Sachindra Joshi, Chris Dawes.

This workshop examined the challenges of credible elections globally in an academic setting, with apolitical discussion of significant issues. The key takeaways from the workshop were as follows:

  • How a technology is designed, and how issues around its usage are handled, can affect voters’ trust in it. Electronic voting suffers from a lack of transparency, which a paper trail helps to mitigate somewhat.
  • The different jurisdictions in the US have a lot of freedom in organising elections, but this freedom also brings a degree of chaos. It is an open question whether this is desirable for a credible election.
  • AI can specifically help elections by disseminating official information (e.g., about candidates and the electoral process) at scale, personalised to each voter’s cognitive needs, language, and format. When AI helps prevent mis- and disinformation, the impact can be far-reaching.

You can read a longer summary of the workshop here.


R2HCAI: Representation Learning for Responsible Human-Centric AI

Organisers: Ahmad Beirami, Ali Etemad, Asma Ghandeharioun, Luyang Liu, Ninareh Mehrabi, Pritam Sarkar.

  • The AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric AI (R2HCAI) brought together researchers who are broadly interested in representation learning for responsible human-centric AI. The goal of the workshop was to facilitate the development and adoption of AI systems that can enhance, augment, and improve the quality of human life.
  • We had six inspiring invited talks from renowned researchers, covering a wide range of research in the field of responsible human-centric AI. Marzyeh Ghassemi gave a talk on designing machine learning processes for equitable health systems, while Daniel Rückert shared their recent work on human-centred AI for medical imaging. Kathy Meier-Hellstern shared a framework for responsible AI for large models, and Jacob Andreas presented their research towards natural language supervision. Hima Lakkaraju gave a talk on “Bringing Order to Chaos: Probing the Disagreement Problem in XAI”. Finally, Deepak Pathak shared their research on how robots can learn from human videos by watching, practising, and improving.
  • In a panel moderated by Ali Etemad and Ninareh Mehrabi, panellists Jacob Andreas, Kathy Meier-Hellstern, Deepak Pathak, and Daniel Rückert held an insightful discussion on representation learning for responsible human-centric AI, including the responsible AI aspects of powerful generative models.
  • We congratulate Marwa Abdulhai, Clément Crepy, Dasha Valter, John Canny, and Natasha Jaques on winning the best paper award for their paper “Moral Foundations of Large Language Models”. This paper uses Moral Foundations Theory to analyze whether popular LLMs have acquired a bias towards a particular set of moral values.
  • We received 46 submissions from 219 authors, of which 30 papers were accepted. 69 program committee members were nominated by the workshop chairs, by authors, and through self-nomination, and a further 32 area chairs were nominated by the workshop chairs. To maximise alignment between paper topics and reviewer expertise, we gathered paper bids from the program committee and area chairs and assigned papers to reviewers accordingly.

You can see the full list of workshops that took place at AAAI 2023 here.





AIhub is dedicated to free high-quality information about AI.

AIhub is supported by:



Subscribe to AIhub newsletter on substack



©2026 - Association for the Understanding of Artificial Intelligence