AIhub.org

#AAAI2023 workshops round-up 1: AI for credible elections, and responsible human-centric AI


01 March 2023




As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 workshops were held, covering a wide range of AI topics. We hear from the organisers of two of the workshops, who tell us their key takeaways from their respective events.


AI for Credible Elections: A Call To Action with Trusted AI

Organisers: Biplav Srivastava, Anita Nikolich, Andrea Hickerson, Tarmo Koppel, Sachindra Joshi, Chris Dawes.

This workshop examined the challenges of credible elections globally in an academic setting, with apolitical discussion of significant issues. The key takeaways from the workshop were as follows:

  • How a technology is designed, and how issues around its usage are handled, can affect voters’ trust in it. Electronic voting has a transparency problem, which a paper trail helps to mitigate somewhat.
  • The different jurisdictions in the US have considerable freedom in organizing elections, but this also results in a degree of chaos. It is an open question whether this is desirable for a credible election.
  • AI can specifically help elections by disseminating official information (e.g., about candidates and the electoral process) personalized to a voter’s cognitive needs at scale, in their own language and format. When AI helps prevent mis- and disinformation, the impact can be far-reaching.

You can read a longer summary of the workshop here.


R2HCAI: Representation Learning for Responsible Human-Centric AI

Organisers: Ahmad Beirami, Ali Etemad, Asma Ghandeharioun, Luyang Liu, Ninareh Mehrabi, Pritam Sarkar.

  • The AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric AI (R2HCAI) brought together researchers who are broadly interested in representation learning for responsible human-centric AI. The goal of the workshop was to facilitate the development and adoption of AI systems that can enhance, augment, and improve the quality of human life.
  • We had six inspiring invited talks from renowned researchers that covered a wide range of research in the field of responsible human-centric AI. Marzyeh Ghassemi gave a talk on designing machine learning processes for equitable health systems, while Daniel Rückert shared their recent work on human-centered AI for medical imaging. Kathy Meier-Hellstern shared a framework for responsible AI for large models, and Jacob Andreas presented their research on natural language supervision. Hima Lakkaraju gave a talk on “Bringing Order to Chaos: Probing the Disagreement Problem in XAI”. Finally, Deepak Pathak shared their research on how robots can learn from human videos by watching, practicing, and improving.
  • Moderated by Ali Etemad and Ninareh Mehrabi, a panel comprising Jacob Andreas, Kathy Meier-Hellstern, Deepak Pathak, and Daniel Rückert held an insightful discussion on representation learning for responsible human-centric AI, including a discussion of the responsible AI aspects of powerful generative models.
  • We congratulate Marwa Abdulhai, Clément Crepy, Dasha Valter, John Canny, and Natasha Jaques on winning the best paper award for their paper “Moral Foundations of Large Language Models”. This paper uses Moral Foundation Theory to analyze whether popular LLMs have acquired a bias towards a particular set of moral values.
  • We received 46 submissions from 219 authors, of which 30 papers were accepted. 69 program committee members were nominated by the workshop chairs, by authors, and through self-nomination, along with 32 area chairs nominated by the workshop chairs. To maximize alignment between paper topics and reviewer expertise, we gathered paper bids from the program committee and area chairs and assigned papers to reviewers accordingly.

You can see the full list of workshops that took place at AAAI 2023 here.





AIhub is dedicated to free high-quality information about AI.






