AIhub.org
 

#AAAI2024 workshops round-up 4: eXplainable AI approaches for deep reinforcement learning, and responsible language models

by
12 April 2024




Above: The organisers of the eXplainable AI approaches for deep reinforcement learning workshop. Image credit: Biagio La Rosa. Below: Responsible language models workshop.

In this fourth round-up article of the workshops at AAAI 2024, the organisers of the following two events introduce their workshop and present their key takeaways:

  • eXplainable AI approaches for deep reinforcement learning
  • Responsible language models

eXplainable AI approaches for deep reinforcement learning
Organisers: Roberto Capobianco, Oliver Chang, Leilani Gilpin, Biagio La Rosa, Michela Proietti, Alessio Ragno.

Deep reinforcement learning (DRL) has recently made remarkable progress in several application domains, such as games, finance, autonomous driving, and recommendation systems. However, the black-box nature of deep neural networks and the complex interaction among various factors raise challenges in understanding and interpreting the models’ decision-making processes.

This workshop brought together researchers, practitioners, and experts from both the DRL and the explainable AI communities to focus on methods, techniques, and frameworks that enhance the explainability and interpretability of DRL algorithms.

The workshop organisers. Image credit: Biagio La Rosa.

Key takeaways from the workshop:

  • When agents can explain why they choose one action over another, the human user's ability to predict what the agent will do in the future increases noticeably. This is therefore a crucial aspect to take into account when deploying RL agents in settings that require real-time human-agent collaboration.
  • When working with counterfactual explanations, it is important to perform plausibility adjustments and to ensure that the applied changes are human-perceivable.
  • How to use explanations to improve the agent's behavior remains an open question worth investigating in the near future.
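The second takeaway can be made concrete with a small sketch: a counterfactual explanation searches for a small, human-perceivable change to the state that would flip the agent's choice, while a plausibility check keeps the altered state realistic. The toy policy, feature names, and search routine below are all hypothetical and for illustration only.

```python
# Illustrative sketch of a counterfactual explanation with a plausibility
# check. The policy, feature names, and thresholds are hypothetical.

def policy(state):
    """Toy driving policy: brake when the obstacle is closer than 20 m."""
    distance_m, speed_kmh = state
    return "brake" if distance_m < 20.0 else "cruise"

def counterfactual(state, step=1.0, max_change=15.0):
    """Find the smallest perceivable change to the distance feature
    that flips the agent's action (a naive one-feature search)."""
    base_action = policy(state)
    distance_m, speed_kmh = state
    n_steps = int(max_change / step)
    for i in range(1, n_steps + 1):
        for sign in (+1.0, -1.0):
            candidate = (distance_m + sign * i * step, speed_kmh)
            # Plausibility adjustment: reject physically impossible states.
            if candidate[0] < 0.0:
                continue
            if policy(candidate) != base_action:
                return candidate, policy(candidate)
    return None  # no plausible counterfactual within the search budget

state = (18.0, 30.0)
print(policy(state))           # brake
print(counterfactual(state))   # ((20.0, 30.0), 'cruise')
```

Here the resulting explanation ("the agent brakes because the obstacle is closer than 20 m; at 20 m it would cruise") is both minimal and perceivable, which is what the plausibility adjustments mentioned in the takeaway aim to guarantee.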

Two speakers at the workshop. Image credit: Biagio La Rosa.


Responsible language models
Organisers: Faiza Khan Khattak, Lu Cheng, Sedef Akinli-Kocak, Laleh Seyed-Kalantari, Mengnan Du, Fengxiang He, Bo Li, Blessing Ogbuokiri, Shaina Raza, Yihang Wang, Xiaodan Zhu, Graham W. Taylor

The responsible language models (ReLM) workshop focused on the development, implementation, and applications of LMs aligned with responsible AI principles. Both theoretical and practical challenges regarding the design and deployment of responsible LMs were discussed, including bias identification and quantification, bias mitigation, transparency, privacy and security issues, hallucination, uncertainty quantification, and various other risks associated with LMs. The workshop drew 68 registered attendees, showcasing widespread interest and engagement from various stakeholders.

Research Contributions: A total of 21 accepted articles, including six spotlight presentations and 15 posters, demonstrated the depth and breadth of research on the responsible use of LMs. One paper, “Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs”, was awarded best paper, while another, “Inverse Prompt Engineering for Safety in Large Language Models”, was the runner-up.

ReLM workshop attendees

Talks: Six invited speakers, five panelists, and six spotlight paper presenters, drawn from the United States, Canada, and India, discussed current best practices, gaps, and likely AI-based interventions. The workshop successfully promoted collaboration among NLP researchers from academia and industry.

Our invited keynote speaker, Filippo Menczer (Indiana University), presented a talk titled “AI and Social Media Manipulation: The Good, the Bad, and the Ugly”. The talk focused on analyzing and modeling the spread of information and misinformation in social networks, as well as detecting and countering the manipulation of social media. In the first invited talk, Frank Rudzicz (Dalhousie University) discussed the dangers of language models in his talk titled “Quis custodiet ipsos custodes?”. The second invited talk was by Kun Zhang (Carnegie Mellon University and Mohamed bin Zayed University of Artificial Intelligence), who spoke about “Causal Representation Learning: Discovery of the Hidden World”. Muhammad Abdul-Mageed (University of British Columbia) gave the third invited talk, on “Inclusive Language Models”, focusing on applications related to speech and language understanding and generation tasks. Balaraman Ravindran (IIT Madras) presented “InSaAF: Incorporating Safety through Accuracy and Fairness. Are LLMs ready for the Indian Legal Domain?”. The talk examined LLMs’ performance in legal tasks within the Indian context, introducing the Legal Safety Score to measure fairness and accuracy and suggesting fine-tuning with legal datasets. Lastly, Sarath Chandar (École Polytechnique de Montréal) discussed “Rethinking Interpretability” in his talk.

Panel discussion: The topic of the panel discussion was “Bridging the Gap: Responsible Language Model Deployment in Industry and Academia”. The panel was moderated by Peter Lewis (Ontario Tech University) and featured Antoaneta Vladimirova (Roche), Donny Cheung (Google), Emre Kiciman (Microsoft Research), Eric Jiawei He (Borealis AI), and Jiliang Tang (Michigan State University). The discussion focused on the challenges and opportunities associated with deploying LMs responsibly in real-world scenarios. The panelists advocated for the implementation of policies to establish standardized protocols for LMs before deployment. This emphasis on proactive measures aligns with the goal of ensuring responsible and ethical use of LMs.


Links to our previous round-up articles:
#AAAI2024 workshops round-up 1: Cooperative multi-agent systems decision-making and learning
#AAAI2024 workshops round-up 2: AI for credible elections, and are large language models simply causal parrots?
#AAAI2024 workshops round-up 3: human-centric representation learning, and AI to accelerate science and engineering





AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:












©2024 - Association for the Understanding of Artificial Intelligence