
AIhub blog post highlights 2025


by Lucy Smith
16 December 2025




Over the course of the year, we’ve had the pleasure of working with many talented researchers from across the globe. As 2025 draws to a close, we take a look back at some of the excellent blog posts from our contributors.


TELL: Explaining neural networks using logic
By Alessio Ragno
This work contributes to the field of explainable AI by developing a novel neural network that can be directly transformed into logic.


Understanding artists’ perspectives on generative AI art: transparency, ownership, and fairness
By Juniper Lovato, Julia Witte Zimmerman and Jennifer Karson

The authors explore the tensions between creators and AI-generated content through a survey of 459 artists.


Generating a biomedical knowledge graph question answering dataset
By Xi Yan

Find out more about work presented at ECAI on generating a comprehensive biomedical knowledge graph question answering dataset.


#AAAI2025 outstanding paper – DivShift: Exploring domain-specific distribution shift in large-scale, volunteer-collected biodiversity datasets
By Elena Sierra and Lauren Gillespie

Learn more about work on biodiversity datasets that won the AAAI outstanding paper award – AI for social alignment track.


Exploring counterfactuals in continuous-action reinforcement learning
By Shuyang Dong

Shuyang Dong proposes a framework for generating counterfactual explanations in continuous-action reinforcement learning.


Making optimal decisions without having all the cards in hand
By Nathanaël Fijalkow, Hugo Gimbert, Florian Horn, Guillermo Perez and Pierre Vandenhove

AAAI outstanding paper award winners tackle the challenging problem of developing algorithms that themselves generate other algorithms based on a few examples or a specification of what is expected.


#IJCAI2025 distinguished paper: Combining MORL with restraining bolts to learn normative behaviour
By Agata Ciabattoni and Emery Neufeld

Winners of an IJCAI distinguished paper award write about their work introducing a framework for guiding reinforcement learning agents to comply with social, legal, and ethical norms.


Memory traces in reinforcement learning
By Onno Eberhard

Onno Eberhard summarizes work presented at ICML 2025 on partially observable reinforcement learning, which introduces an alternative memory framework – “memory traces”.


Discrete flow matching framework for graph generation
By Manuel Madeira and Yiming Qin

In work presented at ICML 2025, Manuel Madeira and Yiming Qin write about a discrete flow matching framework for graph generation.


Machine learning for atomic-scale simulations: balancing speed and physical laws
By Filippo Bigi, Marcel Langer and Michele Ceriotti

How should one balance speed and physical laws when using ML for atomic-scale simulations? Find out more in this blog post about work presented at ICML 2025.


Rewarding explainability in drug repurposing with knowledge graphs
By Susana Nunes and Catia Pesquita

This work, presented at IJCAI 2025, introduces a reinforcement learning approach that not only predicts which drug-disease pairs might hold promise but also explains why.


Designing value-aligned autonomous vehicles: from moral dilemmas to conflict-sensitive design
By Astrid Rakow

Astrid Rakow writes about designing “conflict-sensitive” autonomous traffic agents that explicitly recognise, reason about, and act upon competing ethical, legal, and social values.


Learning robust controllers that work across many partially observable environments
By Maris Galesloot

In this blog post, Maris Galesloot summarizes work presented at IJCAI 2025, which explores designing controllers that perform reliably even when the environment may not be precisely known.




Lucy Smith is Senior Managing Editor for AIhub.

AIhub is supported by:



Subscribe to AIhub newsletter on substack

















 















© 2026 Association for the Understanding of Artificial Intelligence