#AAAI2024 workshops round-up 4: eXplainable AI approaches for deep reinforcement learning, and responsible language models

by
12 April 2024




Above: the organisers of the workshop on eXplainable AI approaches for deep reinforcement learning. Image credit: Biagio La Rosa. Below: the responsible language models workshop.

In this fourth round-up article of the workshops at AAAI 2024, the organisers of the following two events introduce their workshops and present their key takeaways:

  • eXplainable AI approaches for deep reinforcement learning
  • Responsible language models

eXplainable AI approaches for deep reinforcement learning
Organisers: Roberto Capobianco, Oliver Chang, Leilani Gilpin, Biagio La Rosa, Michela Proietti, Alessio Ragno.

Deep reinforcement learning (DRL) has recently made remarkable progress in several application domains, such as games, finance, autonomous driving, and recommendation systems. However, the black-box nature of deep neural networks and the complex interaction among various factors raise challenges in understanding and interpreting the models’ decision-making processes.

This workshop brought together researchers, practitioners, and experts from both the DRL and the explainable AI communities to focus on methods, techniques, and frameworks that enhance the explainability and interpretability of DRL algorithms.

The workshop organisers. Image credit: Biagio La Rosa.

Key takeaways from the workshop:

  • When agents can explain why they choose one action over another, human users become noticeably better at predicting what the agent will do next. This is therefore a crucial aspect to consider when deploying RL agents in settings that require real-time human-agent collaboration.
  • When working with counterfactual explanations, it is important to perform plausibility adjustments and to ensure that the changes made are perceivable by humans.
  • How to use explanations to improve an agent's behaviour remains an open question worth investigating in the near future.

Two speakers at the workshop. Image credit: Biagio La Rosa.


Responsible language models
Organisers: Faiza Khan Khattak, Lu Cheng, Sedef Akinli-Kocak, Laleh Seyed-Kalantari, Mengnan Du, Fengxiang He, Bo Li, Blessing Ogbuokiri, Shaina Raza, Yihang Wang, Xiaodan Zhu, Graham W. Taylor

The responsible language models (ReLM) workshop focused on the development, implementation, and applications of LMs aligned with responsible AI principles. Both theoretical and practical challenges regarding the design and deployment of responsible LMs were discussed, including bias identification and quantification, bias mitigation, transparency, privacy and security issues, hallucination, uncertainty quantification, and various other risks associated with LMs. The workshop drew 68 registered attendees, showcasing widespread interest and engagement from various stakeholders.

Research Contributions: A total of 21 accepted articles, comprising six spotlight presentations and 15 posters, demonstrated the depth and breadth of research on the responsible use of LMs. "Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs" was awarded best paper, while "Inverse Prompt Engineering for Safety in Large Language Models" was the runner-up.

ReLM workshop attendees

Talks: Six invited speakers, five panelists, and six spotlight papers covered current best practices, gaps, and likely AI-based interventions, with speakers from the United States, Canada, and India. The workshop successfully promoted collaboration between NLP researchers from academia and industry.

Our invited keynote speaker, Filippo Menczer (Indiana University), presented a talk titled "AI and Social Media Manipulation: The Good, the Bad, and the Ugly". The talk focused on analyzing and modeling the spread of information and misinformation in social networks, as well as detecting and countering the manipulation of social media. In the first invited talk, Frank Rudzicz (Dalhousie University) discussed the dangers of language models in "Quis custodiet ipsos custodes?". The second invited talk was given by Kun Zhang (Carnegie Mellon University and Mohamed bin Zayed University of Artificial Intelligence), who spoke about "Causal Representation Learning: Discovery of the Hidden World". Muhammad Abdul-Mageed (University of British Columbia) gave the third invited talk, "Inclusive Language Models", focusing on applications related to speech and language understanding and generation tasks. Balaraman Ravindran (IIT Madras) presented "InSaAF: Incorporating Safety through Accuracy and Fairness. Are LLMs ready for the Indian Legal Domain?", which examined LLMs' performance on legal tasks in the Indian context, introduced the Legal Safety Score to measure fairness and accuracy, and suggested fine-tuning with legal datasets. Lastly, Sarath Chandar (École Polytechnique de Montréal) discussed "Rethinking Interpretability".

Panel discussion: The topic of the panel discussion was "Bridging the Gap: Responsible Language Model Deployment in Industry and Academia". The discussion was moderated by Peter Lewis (Ontario Tech University) and featured Antoaneta Vladimirova (Roche), Donny Cheung (Google), Emre Kiciman (Microsoft Research), Eric Jiawei He (Borealis AI), and Jiliang Tang (Michigan State University). It focused on the challenges and opportunities of deploying LMs responsibly in real-world scenarios. The panelists advocated for policies establishing standardized protocols for LMs before deployment, an emphasis on proactive measures that aligns with the goal of ensuring responsible and ethical use of LMs.


Links to our previous round-up articles:
#AAAI2024 workshops round-up 1: Cooperative multi-agent systems decision-making and learning
#AAAI2024 workshops round-up 2: AI for credible elections, and are large language models simply causal parrots?
#AAAI2024 workshops round-up 3: human-centric representation learning, and AI to accelerate science and engineering





AIhub is dedicated to free high-quality information about AI.




