AIhub monthly digest: September 2025 – conference reviewing, soccer ball detection, and memory traces


by Lucy Smith
30 September 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about the latest research on soccer ball detection, learn about energy-based transformers, find out about memory traces in reinforcement learning, and explore some potential solutions to the problems with conference reviewing.

Addressing problems with conference reviewing

Issues with the peer-review process, particularly as it pertains to conferences, are often discussed among authors, reviewers and conference chairs alike. However, coming up with potential solutions has proved challenging. Jaeho Kim, Yunseok Lee and Seulki Lee won an ICML outstanding position paper award for their work “Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards”, in which they put forward some suggestions. Jaeho told us more in this interview.

Self-supervised learning for soccer ball detection and beyond

An important aspect of autonomous soccer-playing robots concerns accurate detection of the ball. This is the focus of work by Can Lin, Daniele Affinita, Marco Zimmatore, Daniele Nardi, Domenico Bloisi, and Vincenzo Suriani, which won the best paper award at the recent RoboCup symposium. We caught up with some of the authors to find out more about the work, and how their method can be transferred to applications beyond RoboCup.

Learning normative behaviour using multi-objective reinforcement learning

In this blog post, winners of an IJCAI distinguished paper award, Agata Ciabattoni and Emery A. Neufeld, write about their work introducing a framework for guiding reinforcement learning agents to comply with social, legal, and ethical norms. You can read the full paper, “Combining MORL with restraining bolts to learn normative behaviour”, here.

Talking probabilistic logic, neurosymbolic AI, and explainability with Luc De Raedt

Should AI continue to be driven by a single paradigm, or does real progress lie in combining the strengths of many? Luc De Raedt has spent much of his career addressing this question. Through pioneering work that bridges logic, probability, and machine learning, he has helped shape the field of neurosymbolic AI. AIhub ambassador Liliane-Caroline Demers sat down with Luc at IJCAI 2025 to find out more.

Trustworthy and efficient machine learning with Yezi Liu

Our series featuring the AAAI/ACM SIGAI Doctoral Consortium participants continued this month as we heard from Yezi Liu. Yezi is working on trustworthy machine learning, with particular emphasis on graph neural networks as well as trustworthy and efficient large language models.

Memory traces in reinforcement learning

In their paper “Partially Observable Reinforcement Learning with Memory Traces”, which was presented at ICML 2025, Onno Eberhard, Michael Muehlebach and Claire Vernade present an alternative framework for storing information about past observations, which may be important for future actions. In this blog post, Onno explains more about these “memory traces”.

Focus on the RoboCup Logistics League

The RoboCup Logistics League forms part of the Industrial League and is an application-driven league inspired by the industrial scenario of a smart factory. We spoke with three key members of the league to find out more. Alexander Ferrein is a RoboCup Trustee overseeing the Industrial League, and Till Hofmann and Wataru Uemura are Logistics League Executive Committee members.

Going with the flow: a new framework for graph generation

At this year’s ICML, Yiming Qin, Manuel Madeira, Dorina Thanou and Pascal Frossard introduced DeFoG, a discrete flow matching framework for graph generation. Like diffusion models, DeFoG progressively constructs a clean graph from a noisy one, but it does so in a more flexible formulation, decoupling training from generation. Yiming and Manuel explain their approach and why it matters.

Energy-based transformers

On her YouTube channel, AI Coffee Break with Letitia, Letiția Pârcălăbescu explains how energy-based models (EBMs) work, and how they’re different from standard neural networks. She also takes a look at a recent paper in which the authors combined EBMs with transformers.

AI alignment alignment

Have you ever wondered who is going to align all the AI alignment centres? Well, fear not. The Center for the Alignment of AI Alignment Centers has you covered.


Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series
AI around the world focus series




Lucy Smith is Senior Managing Editor for AIhub.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack










 















©2026 - Association for the Understanding of Artificial Intelligence