
#AAAI2024 workshops round-up 4: eXplainable AI approaches for deep reinforcement learning, and responsible language models


12 April 2024




Above: The organisers of the eXplainable AI approaches for deep reinforcement learning workshop. Image credit: Biagio La Rosa. Below: The responsible language models workshop.

In this fourth round-up article of the workshops at AAAI 2024, the organisers of the following two events introduce their workshop and present their key takeaways:

  • eXplainable AI approaches for deep reinforcement learning
  • Responsible language models

eXplainable AI approaches for deep reinforcement learning
Organisers: Roberto Capobianco, Oliver Chang, Leilani Gilpin, Biagio La Rosa, Michela Proietti, Alessio Ragno.

Deep reinforcement learning (DRL) has recently made remarkable progress in several application domains, such as games, finance, autonomous driving, and recommendation systems. However, the black-box nature of deep neural networks and the complex interaction among various factors raise challenges in understanding and interpreting the models’ decision-making processes.

This workshop brought together researchers, practitioners, and experts from both the DRL and the explainable AI communities to focus on methods, techniques, and frameworks that enhance the explainability and interpretability of DRL algorithms.

The workshop organisers. Image credit: Biagio La Rosa.

Key takeaways from the workshop:

  • When agents can explain why they choose one action over another, the human user’s ability to predict what the agent will do next increases noticeably. This is therefore a crucial aspect to take into account when deploying RL agents in settings that require real-time human-agent collaboration.
  • When working with counterfactual explanations, it is important to perform plausibility adjustments and to ensure that the changes made are perceivable by humans.
  • How to use explanations to improve an agent’s behaviour remains an open question that is worth investigating in the near future.

Two speakers at the workshop. Image credit: Biagio La Rosa.


Responsible language models
Organisers: Faiza Khan Khattak, Lu Cheng, Sedef Akinli-Kocak, Laleh Seyed-Kalantari, Mengnan Du, Fengxiang He, Bo Li, Blessing Ogbuokiri, Shaina Raza, Yihang Wang, Xiaodan Zhu, Graham W. Taylor

The responsible language models (ReLM) workshop focused on the development, implementation, and applications of LMs aligned with responsible AI principles. Both theoretical and practical challenges regarding the design and deployment of responsible LMs were discussed, including bias identification and quantification, bias mitigation, transparency, privacy and security issues, hallucination, uncertainty quantification, and various other risks associated with LMs. The workshop drew 68 registered attendees, showcasing widespread interest and engagement from various stakeholders.

Research contributions: A total of 21 accepted articles, including six spotlight presentations and 15 posters, demonstrated the depth and breadth of research on the responsible use of LMs. The paper “Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs” was awarded best paper, while “Inverse Prompt Engineering for Safety in Large Language Models” was named runner-up.

ReLM workshop attendees

Talks: Six invited speakers, five panelists, and six spotlight papers addressed current best practices, gaps, and likely AI-based interventions, with speakers from the United States, Canada, and India. The workshop successfully promoted collaboration between NLP researchers from academia and industry.

Our invited keynote speaker, Filippo Menczer (Indiana University), presented a talk titled “AI and Social Media Manipulation: The Good, the Bad, and the Ugly”. The talk focused on analyzing and modeling the spread of information and misinformation in social networks, as well as detecting and countering the manipulation of social media. In the first invited talk, Frank Rudzicz (Dalhousie University) discussed the dangers of language models in his talk titled “Quis custodiet ipsos custodes?”. The second invited talk was by Kun Zhang (Carnegie Mellon University and Mohamed bin Zayed University of Artificial Intelligence), who spoke about “Causal Representation Learning: Discovery of the Hidden World”. Muhammad Abdul-Mageed (University of British Columbia) gave the third invited talk on “Inclusive Language Models”, focusing on applications related to speech and language understanding and generation tasks. Balaraman Ravindran (IIT Madras) presented “InSaAF: Incorporating Safety through Accuracy and Fairness. Are LLMs ready for the Indian Legal Domain?”, which examined LLMs’ performance in legal tasks within the Indian context, introducing the Legal Safety Score to measure fairness and accuracy and suggesting fine-tuning with legal datasets. Lastly, Sarath Chandar (Ecole Polytechnique de Montreal) discussed “Rethinking Interpretability” in his talk.

Panel discussion: The panel discussion, titled “Bridging the Gap: Responsible Language Model Deployment in Industry and Academia”, was moderated by Peter Lewis (Ontario Tech University) and featured Antoaneta Vladimirova (Roche), Donny Cheung (Google), Emre Kiciman (Microsoft Research), Eric Jiawei He (Borealis AI), and Jiliang Tang (Michigan State University). The discussion focused on the challenges and opportunities associated with deploying LMs responsibly in real-world scenarios. The panelists advocated for policies that establish standardized protocols for LMs before deployment, an emphasis on proactive measures that aligns with the goal of ensuring responsible and ethical use of LMs.


Links to our previous round-up articles:
#AAAI2024 workshops round-up 1: Cooperative multi-agent systems decision-making and learning
#AAAI2024 workshops round-up 2: AI for credible elections, and are large language models simply causal parrots?
#AAAI2024 workshops round-up 3: human-centric representation learning, and AI to accelerate science and engineering












