
AIhub monthly digest: January 2024 – closed-loop robot planning, crowdsourced clustering, and trustworthiness in GPT models


by Lucy Smith
30 January 2024




We start 2024 with a packed monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we continue our coverage of NeurIPS, meet the first interviewee in our AAAI Doctoral Consortium series, and find out how to build AI openly.

Meeting the AAAI Doctoral Consortium participants

The AAAI/SIGAI Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. Over the course of the next few months, we’ll be meeting the participants and finding out more about their work, PhD life, and their future research plans. In the first interview of the series, Changhoon Kim told us about his research on enhancing the reliability of image generative AI.

Interview with Bo Li: A comprehensive assessment of trustworthiness in GPT models

Bo Li and colleagues won an outstanding datasets and benchmark track award at NeurIPS 2023 for their work DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. In this interview, Bo tells us about the research, the team’s methodology, and key findings.

NeurIPS invited talks – studying young humans, and data annotation

As part of the programme at the Conference on Neural Information Processing Systems (NeurIPS 2023), a series of invited talks covered a range of fascinating topics. In her presentation, Linda Smith spoke about work monitoring young babies and how the findings could inform ML research. Lora Aroyo tackled the subject of responsible AI, specifically looking at the data annotation process and what this means for models that use those data.

Generating physically-consistent local-scale climate change projections

To model climate on a local scale, researchers commonly use statistical downscaling (SD) to map the coarse resolution of climate models to the required local scale. The use of deep learning to facilitate SD often leads to violations of physical properties. In this blog post, Jose González-Abad writes about work that investigates the scope of this problem and lays the foundation for a framework that guarantees physical relationships between groups of downscaled climate variables.

Crowdsourced clustering via active querying

In this blog post, Yi Chen, Ramya Korlakai Vinayak and Babak Hassibi write about work presented at the Eleventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2023), in which they introduce Active Crowdclustering, an algorithm that finds clusters in a dataset of unlabelled items by querying pairs of items for similarity.

Theoretical remarks on feudal hierarchies and reinforcement learning

Diogo Carvalho, Francisco Melo and Pedro Santos won an ECAI 2023 outstanding paper award for their paper Theoretical Remarks on Feudal Hierarchies and Reinforcement Learning. In this blog post, Diogo explains hierarchical reinforcement learning, and summarises how the team showed that Q-learning solves the hierarchical decision-making process.

Interview with Christopher Chandler: closed-loop robot reactive planning

In their paper Model Checking for Closed-Loop Robot Reactive Planning, Christopher Chandler, Bernd Porr, Alice Miller and Giulia Lafratta show how model checking can be used to create multi-step plans for a differential drive wheeled robot so that it can avoid immediate danger. In this interview, Christopher tells us about model checking and how it is used in the context of autonomous robotic systems.

Feminist AI lecture series recordings available

The Feminist AI lecture series (organised by the University of Arts Linz), which ran from September 2023 to January 2024, presented inspiring lectures on gender and AI. The recordings from the five events are available here.

Erase Indifference Challenge 2024

The Auschwitz Pledge Foundation has recently launched the Erase Indifference Challenge 2024, a competition that aims to support innovative projects leveraging technology to combat indifference to discrimination. They are offering grants of up to €30,000 for the three winning projects. The deadline to enter is 11 February, and you can find out more here.

Living with AI course

We and AI and The Scottish AI Alliance have joined forces on an introductory AI course. The five-week course is perfect for anyone looking to understand how AI is being used in our world and the rapid changes it is bringing about. It is designed to be accessible to all, regardless of prior knowledge or experience with AI. There is still time to sign up if you are interested.

Percy Liang on building AI openly

In a recent TED talk, Percy Liang spoke about the necessity of building AI openly. He presented his vision for a transparent, participatory future, one that credits contributors and gives everyone a voice.


Our resources page
Seminars in 2024
AI around the world focus series
UN SDGs focus series
New voices in AI series





Lucy Smith is Senior Managing Editor for AIhub.

AIhub is supported by:



Subscribe to AIhub newsletter on substack


