
Conference on Reinforcement Learning and Decision Making


by Lucy Smith
05 July 2022




The 5th Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM) 2022 took place at Brown University from 8-11 June. The programme included invited and contributed talks, workshops, and poster sessions. The goal of RLDM is to provide a platform for communication among all researchers interested in learning and decision making over time to achieve a goal.

Over the last few decades, reinforcement learning and decision making have been the focus of an incredible wealth of research spanning a wide variety of fields including psychology, artificial intelligence, machine learning, operations research, control theory, neuroscience, economics and ethology. The interdisciplinary sharing of ideas has been key to many developments in the field, and the meeting is characterized by the multidisciplinarity of the presenters and attendees.

Michael Littman (one of the conference general chairs) said that the conference had been a great success, both in terms of the organization and the content: “For many of us, it was the first in-person conference since the start of the pandemic. The organizers put a lot of thought into ways of keeping people safe from COVID and it appears to have paid off, with very few attendees testing positive. RLDM is always exciting, in part because of the effort to coordinate between the cognitive/neuroscience researchers studying decision-making in natural systems and the AI/ML researchers looking at decision-making in machines”.

One of the speakers in the RLDM lecture theatre. Photo credit: Michael J Frank.

Watch the recordings of the talks

The talks from the four days of the conference were recorded, and you can watch them here:
Day 1 | Day 2 | Day 3 | Day 4

The talks are also available split by individual speakers here.

Best paper awards

Two articles received the RLDM 2022 Best Paper Award:

  • Yash Chandak, Scott Niekum, Bruno Castro da Silva, Erik Learned-Miller, Emma Brunskill, Philip S. Thomas, Universal off-policy evaluation.
  • Diksha Gupta, Brian DePasquale, Charles Kopec, Carlos Brody, An explanatory link between history biases and lapses.

Some of the participants shared their experience on Twitter.

The event website is here.




Lucy Smith is Senior Managing Editor for AIhub.

AIhub is supported by:



Subscribe to AIhub newsletter on substack




©2026 - Association for the Understanding of Artificial Intelligence