AIhub.org
monthly digest
 

AIhub monthly digest: September 2025 – conference reviewing, soccer ball detection, and memory traces


30 September 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about the latest research on soccer ball detection, learn about energy-based transformers, find out about memory traces in reinforcement learning, and explore some potential solutions to the problems with conference reviewing.

Addressing problems with conference reviewing

Issues with the peer-review process, particularly as it relates to conferences, are often discussed among authors, reviewers and conference chairs alike. However, coming up with potential solutions has proved challenging. Jaeho Kim, Yunseok Lee and Seulki Lee won an ICML outstanding position paper award for their work “Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards”, in which they put forward some suggestions. Jaeho told us more in this interview.

Self-supervised learning for soccer ball detection and beyond

An important aspect of autonomous soccer-playing robots concerns accurate detection of the ball. This is the focus of work by Can Lin, Daniele Affinita, Marco Zimmatore, Daniele Nardi, Domenico Bloisi, and Vincenzo Suriani, which won the best paper award at the recent RoboCup symposium. We caught up with some of the authors to find out more about the work, and how their method can be transferred to applications beyond RoboCup.

Learning normative behaviour using multi-objective reinforcement learning

In this blog post, winners of an IJCAI distinguished paper award, Agata Ciabattoni and Emery A. Neufeld, write about their work introducing a framework for guiding reinforcement learning agents to comply with social, legal, and ethical norms. You can read the full paper, “Combining MORL with restraining bolts to learn normative behaviour”, here.

Talking probabilistic logic, neurosymbolic AI, and explainability with Luc De Raedt

Should AI continue to be driven by a single paradigm, or does real progress lie in combining the strengths of many? Luc De Raedt has spent much of his career addressing this question. Through pioneering work that bridges logic, probability, and machine learning, he has helped shape the field of neurosymbolic AI. AIhub ambassador Liliane-Caroline Demers sat down with Luc at IJCAI 2025 to find out more.

Trustworthy and efficient machine learning with Yezi Liu

Our series featuring the AAAI/ACM SIGAI Doctoral Consortium participants continued this month as we heard from Yezi Liu. Yezi is working on trustworthy machine learning, with particular emphasis on graph neural networks as well as trustworthy and efficient large language models.

Memory traces in reinforcement learning

In their paper “Partially Observable Reinforcement Learning with Memory Traces”, which was presented at ICML 2025, Onno Eberhard, Michael Muehlebach and Claire Vernade present an alternative framework for storing information about past observations, which may be important for future actions. In this blog post, Onno explains more about these “memory traces”.

Focus on the RoboCup Logistics League

The RoboCup Logistics League forms part of the Industrial League and is an application-driven league inspired by the industrial scenario of a smart factory. We spoke with three key members of the league to find out more. Alexander Ferrein is a RoboCup Trustee overseeing the Industrial League, and Till Hofmann and Wataru Uemura are Logistics League Executive Committee members.

Going with the flow: a new framework for graph generation

At this year’s ICML, Yiming Qin, Manuel Madeira, Dorina Thanou and Pascal Frossard introduced DeFoG, a discrete flow matching framework for graph generation. Like diffusion models, DeFoG progressively constructs a clean graph from a noisy one, but it does so in a more flexible formulation, decoupling training from generation. Yiming and Manuel explain their approach and why it matters.

Energy-based transformers

On her YouTube channel, AI Coffee Break with Letitia, Letiția Pârcălăbescu explains how energy-based models (EBMs) work, and how they’re different from standard neural networks. She also takes a look at a recent paper in which the authors combined EBMs with transformers.

AI alignment alignment

Have you ever wondered who is going to align all the AI alignment centres? Well, fear not. The Center for the Alignment of AI Alignment Centers has you covered.


Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series
AI around the world focus series




Lucy Smith is Senior Managing Editor for AIhub.




AIhub is supported by:




©2025 - Association for the Understanding of Artificial Intelligence