AIhub monthly digest: June 2025 – gearing up for RoboCup 2025, privacy-preserving models, and mitigating biases in LLMs


by Lucy Smith
26 June 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about explainable AI for robotics, explore privacy-preserving generative models, and find out what RoboCup 2025 has in store.

Preparing for kick-off at RoboCup 2025: an interview with General Chair Marco Simões

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event, where teams gather from across the globe to take part in competitions across a number of leagues, will this year take place in Brazil, from 15-21 July. We spoke to Marco Simões, one of the General Chairs of RoboCup 2025 and President of RoboCup Brazil, to find out what plans they have for the event, some new initiatives, and how RoboCup has grown in Brazil over the past ten years.

Ana Patrícia Magalhães tells us about RoboCupJunior

An important element of the RoboCup World Cup is RoboCupJunior, designed to introduce school children to the main competition. Ahead of the event, we spoke to Ana Patrícia Magalhães to find out more about the plans for 2025, and how RoboCup has inspired people of all ages.

Interview with Debalina Padariya: Privacy-preserving generative models

In our series of interviews with the AAAI/ACM SIGAI Doctoral Consortium participants, we heard from Debalina Padariya about her work on privacy-preserving generative models, why this is such an interesting area of study, the different projects she’s been involved in so far during her PhD, and her experience at AAAI 2025.

Interview with Amar Halilovic: Explainable AI for robotics

Another Doctoral Consortium participant was Amar Halilovic, whose research focuses on explainable AI for robotics, investigating how robots can generate explanations of their actions in a way that aligns with human preferences and expectations, particularly in navigation tasks. In this interview, Amar explained what he has been up to in his PhD so far.

Understanding and mitigating biases in LLMs with Mahammed Kamruzzaman

In our third interview this month in the Doctoral Consortium series, Mahammed Kamruzzaman told us about his research on understanding and mitigating biases in large language models (LLMs). He is particularly interested in how these biases manifest across various sociodemographic and cultural dimensions.

IJCAI 2025 award winners revealed

The winners of three International Joint Conferences on Artificial Intelligence (IJCAI) awards have been announced. These three distinctions, and their respective winners, are:

  • Award for Research Excellence: Rina Dechter
  • Computers and Thought Award: Aditya Grover
  • John McCarthy Award: Cynthia Rudin

Generating counterfactual explanations

Ahead of the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025), Shuyang Dong writes about work that she will present at the conference. Reinforcement learning (RL) has shown great promise in domains like healthcare and robotics, but often struggles with adoption due to its lack of interpretability. Counterfactual explanations, which address “what if” scenarios, provide a promising avenue for understanding RL decisions. In her work, Shuyang proposes a framework for generating counterfactual explanations in continuous-action RL.
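
The digest only summarises this work, but to give a flavour of the general idea, here is a minimal, purely illustrative sketch of a counterfactual “what if” query for a continuous-action policy. This is not Shuyang’s framework: the toy policy, the stand-in outcome model, and the random-search procedure below are all assumptions made for illustration, chosen simply to show what “had the agent acted slightly differently, the outcome would have improved” can look like in code.

```python
# Illustrative sketch only: a counterfactual "what if" query for a
# continuous-action policy. The policy, outcome model, and search are
# toy stand-ins, not the framework presented at IJCAI 2025.

import numpy as np


def toy_policy(state: np.ndarray) -> np.ndarray:
    """Stand-in policy: a fixed linear map from state to a continuous action."""
    W = np.array([[0.5, -0.2],
                  [0.1, 0.8]])
    return W @ state


def toy_outcome(state: np.ndarray, action: np.ndarray) -> float:
    """Stand-in environment model: reward is higher the closer the
    post-action position is to a hypothetical goal at the origin."""
    return -np.linalg.norm(state + action)


def counterfactual_action(state, action, outcome_fn,
                          improvement=0.5, n_samples=5000, seed=0):
    """Search for the smallest perturbation of `action` whose outcome beats
    the factual outcome by at least `improvement` (random search for clarity)."""
    rng = np.random.default_rng(seed)
    factual = outcome_fn(state, action)
    best, best_dist = None, np.inf
    for _ in range(n_samples):
        candidate = action + rng.normal(scale=1.0, size=action.shape)
        if outcome_fn(state, candidate) >= factual + improvement:
            dist = np.linalg.norm(candidate - action)
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best, factual


if __name__ == "__main__":
    state = np.array([1.0, -2.0])
    action = toy_policy(state)
    cf, factual = counterfactual_action(state, action, toy_outcome)
    print(f"factual action {action}, outcome {factual:.2f}")
    if cf is not None:
        print(f"counterfactual action {cf}: had the agent acted this way, "
              f"the outcome would have improved by at least 0.5")
```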

King’s Festival of AI

Across five days in May, London played host to the King’s Festival of AI. Organised by King’s College London, the event was free to attend and suitable for all audiences. Recordings from some of the sessions are now available on the King’s YouTube channel playlist. You can find out how AI can transform the study of modern languages, what AI means for education, the challenges of communicating about AI, and more.


  • Our resources page
  • Our events page
  • Seminars in 2025
  • AAAI/ACM SIGAI Doctoral Consortium interview series
  • AAAI Fellows interview series
  • AfriClimate AI series
  • AI around the world focus series





Lucy Smith is Senior Managing Editor for AIhub.






