AIhub monthly digest: August 2025 – causality and generative modelling, responsible multimodal AI, and IJCAI in Montréal and Guangzhou


by Lucy Smith
29 August 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we dive into the world of agents, learn about responsible multimodal AI, apply generative AI to computer networks, and dig into the RoboCup@Work League.

Agentic AI

The AIhub coffee corner captures the musings of AI experts over a short conversation. This month, Sanmay Das, Tom Dietterich, Sabine Hauert, Sarit Kraus, and Michael Littman tackled the topic of agentic AI, discussing recent developments, and lessons learned from the decades of research in the autonomous agents and multiagent systems community.

International Joint Conference on Artificial Intelligence

The 34th International Joint Conference on Artificial Intelligence (IJCAI2025) took place in Montréal from 16-22 August, with a satellite event currently being held (from 29-31 August) in Guangzhou, China. You can find out more about the programmes of both venues here, and get a flavour of what attendees got up to in our social media round-ups: Part one | Part two.

We’ve already reported on the prestigious IJCAI awards that were announced ahead of the conference. During the event itself, the distinguished paper awards were presented at the opening ceremony. You can also hear from the next generation of AI researchers based in Canada in this series of 90-second pitches.

Inside the RoboCup@Work League

This year’s RoboCup event, where teams gathered from across the globe to take part in competitions across a number of leagues, took place in Salvador, Brazil from 15-21 July. Ahead of the event, we spoke to Christoph Steup to find out more about the @Work League, the tasks that teams need to complete, and future plans for the League.

The value of prediction in identifying the worst-off

At this year’s International Conference on Machine Learning (ICML2025), Unai Fischer-Abaigar, Christoph Kern and Juan Carlos Perdomo won an outstanding paper award for their work on the value of prediction in identifying the worst-off. We hear from Unai about the main contributions of the paper, why prediction systems are an interesting area for study, and further work they are planning in this space.

Responsible multimodal AI

Our series featuring the AAAI/ACM SIGAI Doctoral Consortium participants continued this month with no fewer than five interviews. Firstly, we heard from Flávia Carvalhido, a PhD student at the University of Porto, and found out about her work on responsible multimodal AI, what inspired her to study AI, and how she found her first conference experience.

Applying generative AI to computer networks

Shaghayegh (Shirley) Shajarian is applying generative AI to computer networks. Shaghayegh told us about her research developing AI-driven agents that assist with some network operations, such as log analysis, troubleshooting, and documentation. Her goal is to reduce the manual work that network teams deal with every day and move toward more autonomous, self-running networks.

Causality and generative modeling

Aneesh Komanduri, a final-year PhD student at the University of Arkansas, gave us the low-down on his research at the intersection of causal inference, representation learning, and generative modeling. His dissertation specifically explores two core areas: causal representation learning and counterfactual generative modeling.

Game-theoretic integration of safety, interaction and learning for human-centered autonomy

Haimin Hu filled us in on his research covering the algorithmic foundations of human-centered autonomy. Through his work, Haimin aims to ensure autonomous systems are performant, verifiable, and trustworthy when deployed in human-populated spaces.

Computing education and generative AI

In this interview, Benyamin Tabarsi told us about his work at the intersection of generative AI and computing education. We found out more about what he’s investigated so far during his PhD, what is particularly interesting about this research area, and what inspired him to undertake a PhD in the field.


Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series
AI around the world focus series





Lucy Smith is Senior Managing Editor for AIhub.





 















©2026 Association for the Understanding of Artificial Intelligence