Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we learn about rewarding explainability in drug repurposing with knowledge graphs, investigate value-aligned autonomous vehicles, and consider trust in multi-agent systems.
In this blog post, Susana Nunes and Catia Pesquita write about their work on rewarding explainability in drug repurposing with knowledge graphs, presented at the International Joint Conference on Artificial Intelligence (IJCAI 2025). Their work introduces a reinforcement learning approach that not only predicts which drug-disease pairs might hold promise, but also explains why.
Astrid Rakow writes about designing “conflict-sensitive” autonomous traffic agents that explicitly recognise, reason about, and act upon competing ethical, legal, and social values. The work was carried out with Marija Slavkovik, Joe Collenette, Maike Schwammberger and Gleifer Vaz Alves, and presented at the 22nd European Conference on Multi-Agent Systems (EUMAS 2025).
In a new AIhub series, we’re featuring some of the European Conference on Artificial Intelligence (ECAI-2025) Doctoral Consortium participants. In the first interview in this collection, we sat down with Mario Mirabile in Bologna and asked about his work on multi-agent systems applied to financial transactions, progress so far during the PhD, and plans for developing his research ideas.
Maris Galesloot, Roman Andriushchenko, Milan Češka, Sebastian Junges and Nils Jansen presented work at IJCAI entitled “Robust Finite-Memory Policy Gradients for Hidden-Model POMDPs”, in which they explored designing controllers that perform reliably even when the environment model is not precisely known. Maris summarises the key contributions of the work in this blog post.
Earlier this month, Sony AI released a dataset that establishes a new benchmark for AI ethics in computer vision models. The Fair Human-Centric Image Benchmark (FHIBE) is the first publicly available, globally diverse, consent-based human image dataset for evaluating bias across a wide variety of computer vision tasks, comprising over 10,000 human images. Keep an eye out for our interview with project leader Alice Xiang, which will be published soon.
The EU AI Act came into force in August 2024; however, different rules come into effect at different times. This month, it was announced that the European Commission has proposed delaying parts of the act until 2027, following pressure from big tech companies and the US government.
There has been a push for AI and satellite companies to join forces to build data centres in space. In this blog post, space electronics expert Taranis, who has worked at both NASA and Google, explains the science behind why this is such a bad idea.
Amazon delivery drivers, alongside warehouse workers and other people in the Amazon delivery supply chain, are subject to intense amounts of surveillance, including through the use of AI-powered tools. In this report from The DAIR Institute, authors Adrienne Williams, Alex Hanna and Sandra Barcenas investigate how workplace technology enables Amazon to “steal wages, hide labour, intensify poor working conditions, and evade responsibility”.
Nominations are solicited for the 2026 ACM SIGAI Autonomous Agents Research Award. This award is made for excellence in research in the area of autonomous agents and is intended to recognize researchers whose current work is an important influence on the field. Find out more here.
The 2025 AI Song Contest crown has been taken by father-and-son team GENEALOGY. They picked up the prize at the live award show, held in Amsterdam on Sunday 16 November. Their entry, REVOLUTION, combines rock, drum’n’bass, and samba, and uses completely AI-generated lyrics.
Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series