AIhub.org
 

#AAAI2024 workshops round-up 4: eXplainable AI approaches for deep reinforcement learning, and responsible language models


by
12 April 2024




Above: the organisers of the eXplainable AI approaches for deep reinforcement learning workshop. Image credit: Biagio La Rosa. Below: the responsible language models workshop.

In this fourth round-up article of the workshops at AAAI 2024, the organisers of the following two events introduce their workshop and present their key takeaways:

  • eXplainable AI approaches for deep reinforcement learning
  • Responsible language models

eXplainable AI approaches for deep reinforcement learning
Organisers: Roberto Capobianco, Oliver Chang, Leilani Gilpin, Biagio La Rosa, Michela Proietti, Alessio Ragno.

Deep reinforcement learning (DRL) has recently made remarkable progress in several application domains, such as games, finance, autonomous driving, and recommendation systems. However, the black-box nature of deep neural networks and the complex interaction among various factors raise challenges in understanding and interpreting the models’ decision-making processes.

This workshop brought together researchers, practitioners, and experts from both the DRL and the explainable AI communities to focus on methods, techniques, and frameworks that enhance the explainability and interpretability of DRL algorithms.

The workshop organisers. Image credit: Biagio La Rosa.

Key takeaways from the workshop:

  • When agents can explain why they choose one action over another, human users become noticeably better at predicting what the agent will do next. This is therefore a crucial consideration when deploying RL agents in settings that require real-time human-agent collaboration.
  • When working with counterfactual explanations, it is important to perform plausibility adjustments and to make sure that the applied changes are perceivable by humans.
  • How to use explanations to improve an agent’s behaviour remains an open question worth investigating in the near future.

Two speakers at the workshop. Image credit: Biagio La Rosa.


Responsible language models
Organisers: Faiza Khan Khattak, Lu Cheng, Sedef Akinli-Kocak, Laleh Seyed-Kalantari, Mengnan Du, Fengxiang He, Bo Li, Blessing Ogbuokiri, Shaina Raza, Yihang Wang, Xiaodan Zhu, Graham W. Taylor

The responsible language models (ReLM) workshop focused on the development, implementation, and applications of LMs aligned with responsible AI principles. Both theoretical and practical challenges regarding the design and deployment of responsible LMs were discussed, including bias identification and quantification, bias mitigation, transparency, privacy and security issues, hallucination, uncertainty quantification, and various other risks associated with LMs. The workshop drew 68 registered attendees, showcasing widespread interest and engagement from various stakeholders.

Research contributions: A total of 21 accepted articles, comprising six spotlight presentations and 15 posters, demonstrated the depth and breadth of research on the responsible use of LMs. One paper, “Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs”, was awarded best paper, while “Inverse Prompt Engineering for Safety in Large Language Models” was named runner-up.

ReLM workshop attendees

Talks: Six invited speakers, five panelists, and six spotlight papers covered current best practices, gaps, and likely AI-based interventions, with speakers joining from the United States, Canada, and India. The workshop successfully promoted collaboration between NLP researchers from academia and industry.

Our invited keynote speaker, Filippo Menczer (Indiana University), presented a talk titled “AI and Social Media Manipulation: The Good, the Bad, and the Ugly”. The talk focused on analyzing and modeling the spread of information and misinformation in social networks, as well as detecting and countering the manipulation of social media. In the first invited talk, Frank Rudzicz (Dalhousie University) discussed the dangers of language models in his talk titled “Quis custodiet ipsos custodes?” The second invited talk was by Kun Zhang (Carnegie Mellon University and Mohamed bin Zayed University of Artificial Intelligence), who spoke about “Causal Representation Learning: Discovery of the Hidden World”. Muhammad Abdul-Mageed (University of British Columbia) gave the third invited talk, on “Inclusive Language Models”, focusing on applications related to speech and language understanding and generation tasks. Balaraman Ravindran (IIT Madras) presented “InSaAF: Incorporating Safety through Accuracy and Fairness. Are LLMs ready for the Indian Legal Domain?” The talk examined LLMs’ performance in legal tasks within the Indian context, introducing the Legal Safety Score to measure fairness and accuracy and suggesting fine-tuning with legal datasets. Lastly, Sarath Chandar (École Polytechnique de Montréal) discussed “Rethinking Interpretability” in his talk.

Panel discussion: The topic of the panel discussion was “Bridging the Gap: Responsible Language Model Deployment in Industry and Academia”. The panel was moderated by Peter Lewis (Ontario Tech University) and featured Antoaneta Vladimirova (Roche), Donny Cheung (Google), Emre Kiciman (Microsoft Research), Eric Jiawei He (Borealis AI), and Jiliang Tang (Michigan State University). The discussion focused on the challenges and opportunities associated with deploying LMs responsibly in real-world scenarios. The panelists advocated for policies that establish standardized protocols for LMs before deployment, a proactive emphasis that aligns with the goal of ensuring their responsible and ethical use.


Links to our previous round-up articles:
#AAAI2024 workshops round-up 1: Cooperative multi-agent systems decision-making and learning
#AAAI2024 workshops round-up 2: AI for credible elections, and are large language models simply causal parrots?
#AAAI2024 workshops round-up 3: human-centric representation learning, and AI to accelerate science and engineering





AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:











©2024 - Association for the Understanding of Artificial Intelligence


 











