
AIhub monthly digest: January 2026 – moderating guardrails, humanoid soccer, and attending AAAI


by Lucy Smith
30 January 2026




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we find out about a robot designed to navigate hiking trails, explore learning from logical constraints, analyse the effectiveness of moderation guardrails, and travel to Singapore to attend AAAI.

Attending AAAI 2026

This year, the annual AAAI Conference on Artificial Intelligence took place outside of North America for the first time. From 20 to 27 January, Singapore Expo played host to the 40th edition of the conference. We had the pleasure of attending, and delivered a talk on science communication for AI researchers. The four main days of the conference, featuring the invited talks and technical programme, were bookended by a bumper selection of tutorials and workshops.

During the opening ceremony, the organisers shared some statistics, revealing that submissions rose from 15,532 in 2025 to a staggering 30,948 this year. The session honoured award winners and the recently elected 2026 Fellows. We also found out the recipients of the outstanding paper prizes.

We have plenty of AAAI content planned over the coming weeks, so stay tuned for more. In the meantime, here is our first social media summary post, where you can see some of the activities that attendees were enjoying in Singapore.

Effectiveness of moderation guardrails in aligning LLM outputs

In their paper presented at AIES 2025, “Do Your Guardrails Even Guard?” Method for Evaluating Effectiveness of Moderation Guardrails in Aligning LLM Outputs with Expert User Expectations, Anindya Das Antar, Xun Huan and Nikola Banovic proposed a method to evaluate and select guardrails that best align LLM outputs with domain knowledge from subject-matter experts. In this interview, Anindya told us more about their method, some case studies, and plans for future developments.

Taking humanoid soccer to the next level

In July 2025, the RoboCup Federation announced several changes to their leagues, with one of these being a focus on humanoid robots for the international soccer competition. We spoke with Alessandra Rossi, a trustee who has been involved in the humanoid soccer league for many years, to learn more.

Multi-modal learning and embodied intelligence

For the past couple of years, we’ve been meeting with some of the AAAI doctoral consortium students to find out more about their work. In the first of our interviews with the 2026 cohort, we caught up with Xiang Fang, a PhD student at Nanyang Technological University. He told us more about his research, which aims to bridge the gap between computer vision and natural language.

Learning from logical constraints

How can we train neural networks efficiently to be more consistent with background knowledge? This is the question that Lucile Dierckx, Alexandre Dubray and Siegfried Nijssen tried to answer in work presented at IJCAI 2025. In this blog post from the authors, we learn about the key achievements of their research.

Robots to navigate hiking trails

Hiking trails can be challenging and unpredictable, prone to fallen trees, exposed roots, loose rocks, and uneven ground. Designing and programming a robot to tackle such trails is a difficult task, and one that Christopher Tatsch and colleagues have been working on. Here, Christopher writes about this problem which the team approached using semantic segmentation and geometric analysis.

Zeynep Tufekci on having the wrong nightmares about generative AI

In her keynote talk at NeurIPS 2025, Zeynep Tufekci spoke about the dominant fears held about AI, and why these are not the ones we should be focussing on. In this Substack post, Jessica Hullman summarises the talk, noting Zeynep’s argument that “the destabilizing risk doesn’t require AI to be better than humans in any global sense. It’s enough for AI to be good enough, cheap, fast, and deployable at scale”.


Our resources page
Our events page
Seminars in 2026
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows interview series
AfriClimate AI series





Lucy Smith is Senior Managing Editor for AIhub.

AIhub is supported by:



Subscribe to the AIhub newsletter on Substack



Related posts :

Causal models for decision systems: an interview with Matteo Ceriscioli

  21 Apr 2026
How can we integrate causal knowledge into agents or decision systems to make them more reliable?

A model for defect identification in materials

  20 Apr 2026
A new model measures defects that can be leveraged to improve materials’ mechanical strength, heat transfer, and energy-conversion efficiency.

‘Probably’ doesn’t mean the same thing to your AI as it does to you

  17 Apr 2026
Are you sure you and the AI chatbot you’re using are on the same page about probabilities?

Interview with Xinwei Song: strategic interactions in networked multi-agent systems

  16 Apr 2026
Xinwei Song tells us about her research using algorithmic game theory and multi-agent reinforcement learning.

2026 AI Index Report released

  15 Apr 2026
Find out what the ninth edition of the report, which was published on 13 April, says about trends in AI.

Formal verification for safety evaluation of autonomous vehicles: an interview with Abdelrahman Sayed Sayed

  14 Apr 2026
Find out more about work at the intersection of continuous AI models, formal methods, and autonomous systems.

Water flow in prairie watersheds is increasingly unpredictable — but AI could help

  13 Apr 2026
In recent years, the Prairies have seen bigger swings in climate conditions — very wet years followed by very dry ones.

Identifying interactions at scale for LLMs

  10 Apr 2026
Model behavior is rarely the result of isolated components; rather, it emerges from complex dependencies and patterns.



©2026 - Association for the Understanding of Artificial Intelligence