
AIhub monthly digest: February 2026 – collective decision making, multi-modal learning, and governing the rise of interactive AI


27 February 2026




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we explore multi-agent systems and collective decision-making, dive into neurosymbolic Markov models, and find out how robots can acquire skills through interactions with the physical world.

Talking multi-agent systems and collective decision-making with Kate Larson

What if AI were designed not only to optimize choices for individuals, but to help groups reach decisions together? AIhub Ambassador Liliane-Caroline Demers interviewed Kate Larson, whose research explores how AI can support collective decision-making. She reflected on what drew her into the field, why she sees AI playing a role in consensus and democratic processes, and why she believes multi-agent systems deserve more attention.

How can robots acquire skills through interactions with the physical world?

One of the key challenges in building robots for household or industrial settings is the need to master the control of high-degree-of-freedom systems such as mobile manipulators. Reinforcement learning has been a promising avenue for acquiring robot control policies; however, scaling to complex systems has proved tricky. In their work SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL, Jiaheng Hu, Peter Stone and Roberto Martín-Martín introduce a method that renders real-world reinforcement learning feasible for complex embodiments. We caught up with Jiaheng to find out more.

Relational neurosymbolic Markov models

In this blog post, Lennert De Smet and Gabriele Venturato write about their work (with Luc De Raedt and Giuseppe Marra) showing how their neurosymbolic Markov models beat state-of-the-art neural and probabilistic models in out-of-distribution generalisation, consistent generations, and constraint satisfaction.

Governing the rise of interactive AI will require behavioral insights

AI is no longer just a translator or image recognizer. Today, we engage with systems that remember our preferences, proactively manage our calendars, and even provide emotional support. This is interactive AI. In this blog post, Yulu Pi writes about work presented at AIES 2025 on the challenges and pathways for AI governance.

Reinforcement learning applied to autonomous vehicles

As part of our series meeting the AAAI / ACM SIGAI 2026 Doctoral Consortium participants, we sat down with Oliver Chang, PhD student at UC Santa Cruz, to learn more about his research spanning deep reinforcement learning, autonomous vehicles, and explainable AI. We talked about some of the projects he’s worked on so far, what drew him to the field, and what future AI directions he’s excited about.

Labor management in transportation gig systems through reinforcement learning

Our second AAAI / ACM SIGAI 2026 Doctoral Consortium interviewee this month was Zijian Zhao. Currently, he is concentrating on labor management in transportation gig systems through reinforcement learning, with the aim of enhancing system efficiency while also identifying and mitigating algorithmic discrimination against workers. Find out more in the interview.

Extending the reward structure in reinforcement learning

Tanmay Ambadkar is researching the reward structure in reinforcement learning, with the goal of developing generalizable solutions that provide robust guarantees and are easily deployable. We caught up with Tanmay, the third of our AAAI / ACM SIGAI 2026 Doctoral Consortium interviewees this month, to find out more about the constrained reinforcement learning framework he has been working on.

From Visual Question Answering to multimodal learning

In the latest issue of AI Matters, a publication of ACM SIGAI, Ella Scallan caught up with Aishwarya Agrawal to find out more about her research, what most excites her about the future of AI, and advice for early career researchers. Aishwarya won an AAAI / ACM SIGAI Doctoral Dissertation honourable mention in 2019 for her thesis on Visual Question Answering. The scope of her research has expanded since then, covering other vision and language problems.

Sven Koenig wins the 2026 ACM/SIGAI Autonomous Agents Research Award

Congratulations to Sven Koenig on winning the 2026 ACM/SIGAI Autonomous Agents Research Award. This prestigious award is made for excellence in research in the area of autonomous agents. Sven was recognised “for his work on AI planning and search, which has shaped how intelligent agents reason and act in complex, dynamic environments”.

Winners of 2025 AAAI/ACM SIGAI Joint Dissertation Award announced

The 2025 AAAI/ACM SIGAI Joint Dissertation Award has been won by Noah Golowich (PhD from Massachusetts Institute of Technology) for the dissertation titled “Theoretical Foundations for Learning in Games and Dynamic Environments”, and Akari Asai (PhD from University of Washington) for the dissertation titled “Beyond Scaling: Frontiers of Retrieval-Augmented Language Models”. The committee also bestowed three honourable mentions, to Sarah Alyami, Thom Badings, and Brian Hu Zhang.


Our resources page
Our events page
Seminars in 2026
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series





Lucy Smith is Senior Managing Editor for AIhub.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack





© 2026 - Association for the Understanding of Artificial Intelligence