
AIhub monthly digest: November 2025 – learning robust controllers, trust in multi-agent systems, and a new fairness evaluation dataset


by Lucy Smith
28 November 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we learn about rewarding explainability in drug repurposing with knowledge graphs, investigate value-aligned autonomous vehicles, and consider trust in multi-agent systems.

Rewarding explainability in drug repurposing with knowledge graphs

In this blog post, Susana Nunes and Catia Pesquita write about work, presented at the International Joint Conference on Artificial Intelligence (IJCAI2025), on rewarding explainability in drug repurposing with knowledge graphs. Their work introduces a reinforcement learning approach that not only predicts which drug-disease pairs might hold promise but also explains why.

Designing value-aligned autonomous vehicles: from moral dilemmas to conflict-sensitive design

Astrid Rakow writes about designing “conflict-sensitive” autonomous traffic agents that explicitly recognise, reason about, and act upon competing ethical, legal, and social values. This was work carried out with Marija Slavkovik, Joe Collenette, Maike Schwammberger and Gleifer Vaz Alves, and presented at the 22nd European Conference on Multi-Agent Systems (EUMAS 2025).

Trust in multi-agent systems with Mario Mirabile

In a new AIhub series, we’re featuring some of the European Conference on Artificial Intelligence (ECAI-2025) Doctoral Consortium participants. In the first interview in this collection, we sat down with Mario Mirabile in Bologna and asked about his work on multi-agent systems applied to financial transactions, progress so far during the PhD, and plans for developing his research ideas.

Learning robust controllers that work across many partially observable environments

Maris Galesloot, Roman Andriushchenko, Milan Češka, Sebastian Junges and Nils Jansen presented work at IJCAI 2025 entitled “Robust Finite-Memory Policy Gradients for Hidden-Model POMDPs”, in which they explored designing controllers that perform reliably even when the environment is not precisely known. Maris summarises the key contributions of the work in this blog post.

FHIBE: a new fairness benchmark and dataset from Sony AI

Earlier this month, Sony AI released a dataset that establishes a new benchmark for AI ethics in computer vision models. The Fair Human-Centric Image Benchmark (FHIBE) is the first publicly available, globally diverse, consent-based human image dataset, comprising over 10,000 human images, for evaluating bias across a wide variety of computer vision tasks. Keep an eye out for our interview with project leader Alice Xiang, which will be published soon.

European Union rolls back parts of the AI Act

The EU AI Act came into force in August 2024; however, different rules come into effect at different times. This month, the European Commission proposed delaying parts of the act until 2027, following pressure from big tech companies and the US government.

Why data centres in space are a terrible idea

There has been a push for AI and satellite companies to join forces to build data centres in space. In this blog post, space electronics expert Taranis, who has worked at both NASA and Google, explains the science behind why this is such a bad idea.

DAIR report on surveillance at Amazon

Amazon delivery drivers, alongside warehouse workers and other people in the Amazon delivery supply chain, are subject to intense amounts of surveillance, including through the use of AI-powered tools. In this report from The DAIR Institute, authors Adrienne Williams, Alex Hanna and Sandra Barcenas investigate how workplace technology enables Amazon to “steal wages, hide labour, intensify poor working conditions, and evade responsibility”.

ACM SIGAI Autonomous Agents Award 2026 open for nominations

Nominations are solicited for the 2026 ACM SIGAI Autonomous Agents Research Award. This award is made for excellence in research in the area of autonomous agents, and is intended to recognize researchers whose current work is an important influence on the field. Find out more here.

AI Song Contest winners

The 2025 AI Song Contest was won by father and son team GENEALOGY. They picked up the prize at the live award show, held in Amsterdam on Sunday 16 November. Their entry, REVOLUTION, combines rock, drum’n’bass, and samba, and uses completely AI-generated lyrics.


Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series



Lucy Smith is Senior Managing Editor for AIhub.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack




©2026 - Association for the Understanding of Artificial Intelligence