
AIhub monthly digest: November 2025 – learning robust controllers, trust in multi-agent systems, and a new fairness evaluation dataset


by Lucy Smith
28 November 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we learn about rewarding explainability in drug repurposing with knowledge graphs, investigate value-aligned autonomous vehicles, and consider trust in multi-agent systems.

Rewarding explainability in drug repurposing with knowledge graphs

In this blog post, Susana Nunes and Catia Pesquita write about work, presented at the International Joint Conference on Artificial Intelligence (IJCAI 2025), on rewarding explainability in drug repurposing with knowledge graphs. Their work introduces a reinforcement learning approach that not only predicts which drug-disease pairs might hold promise but also explains why.

Designing value-aligned autonomous vehicles: from moral dilemmas to conflict-sensitive design

Astrid Rakow writes about designing “conflict-sensitive” autonomous traffic agents that explicitly recognise, reason about, and act upon competing ethical, legal, and social values. This was work carried out with Marija Slavkovik, Joe Collenette, Maike Schwammberger and Gleifer Vaz Alves, and presented at the 22nd European Conference on Multi-Agent Systems (EUMAS 2025).

Trust in multi-agent systems with Mario Mirabile

In a new AIhub series, we’re featuring some of the European Conference on Artificial Intelligence (ECAI 2025) Doctoral Consortium participants. In the first interview in this collection, we sat down with Mario Mirabile in Bologna and asked about his work on multi-agent systems applied to financial transactions, progress so far during the PhD, and plans for developing his research ideas.

Learning robust controllers that work across many partially observable environments

Maris Galesloot, Roman Andriushchenko, Milan Češka, Sebastian Junges and Nils Jansen presented work at IJCAI 2025 entitled Robust Finite-Memory Policy Gradients for Hidden-Model POMDPs, in which they explored designing controllers that perform reliably even when the environment may not be precisely known. Maris summarises the key contributions of the work in this blog post.

FHIBE: a new fairness benchmark and dataset from Sony AI

Earlier this month, Sony AI released a dataset that establishes a new benchmark for AI ethics in computer vision models. The Fair Human-Centric Image Benchmark (FHIBE) is the first publicly available, globally diverse, consent-based human image dataset for evaluating bias across a wide variety of computer vision tasks, comprising over 10,000 human images. Keep an eye out for our interview with project leader Alice Xiang, which will be published soon.

European Union rolls back on the AI Act

The EU AI Act came into force in August 2024; however, its different rules come into effect at different times. This month, the European Commission proposed delaying parts of the act until 2027, following pressure from big tech companies and the US government.

Why data centres in space are a terrible idea

There has been a push for AI and satellite companies to join forces to build data centres in space. In this blog post, space electronics expert Taranis, who has worked at both NASA and Google, explains the science behind why this is such a bad idea.

DAIR report on surveillance at Amazon

Amazon delivery drivers, alongside warehouse workers and other people in the Amazon delivery supply chain, are subject to intense surveillance, including through the use of AI-powered tools. In this report from The DAIR Institute, authors Adrienne Williams, Alex Hanna and Sandra Barcenas investigate how workplace technology enables Amazon to “steal wages, hide labour, intensify poor working conditions, and evade responsibility”.

ACM SIGAI Autonomous Agents Award 2026 open for nominations

Nominations are solicited for the 2026 ACM SIGAI Autonomous Agents Research Award. This award is made for excellence in research in the area of autonomous agents, and is intended to recognize researchers whose current work is an important influence on the field. Find out more here.

AI Song Contest winners

The 2025 AI Song Contest crown has been taken by father-and-son team GENEALOGY. They picked up the prize at the live award show, held in Amsterdam on Sunday 16 November. Their entry, REVOLUTION, combines rock, drum’n’bass, and samba, and uses completely AI-generated lyrics.


Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series




Lucy Smith is Senior Managing Editor for AIhub.









 
