AIhub monthly digest: December 2024 – attending NeurIPS, multi-agent path finding, and tackling illegal mining


by Lucy Smith
31 December 2024




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we look back at our week attending NeurIPS, hear about work localising illegal mining sites using machine learning and geospatial data, and discover how a group of agents can minimise their journey length whilst avoiding collisions.

AIhub at NeurIPS 2024

We were lucky enough to attend the thirty-eighth Conference on Neural Information Processing Systems (NeurIPS 2024), which took place in Vancouver, Canada, from Tuesday 10 December to Sunday 15 December. On the first day of the event, we held a session on science communication for AI researchers. It was great to see so many people there, and to hear so many thoughtful questions following our presentation. You can find the webpage for the session here.

The 2024 awards for outstanding main track papers (and runners-up), outstanding datasets and benchmarks paper, and the test-of-time award were announced during the opening ceremony. You can find out who won here.

You can also find out what participants got up to in our two social media summaries: #NeurIPS2024 social media round-up part 1 | #NeurIPS2024 social media round-up part 2.

We’ll be posting more content from the conference over the coming weeks, so be sure to check out our NeurIPS collection page.

Interview with Andrews Ata Kangah: Localising illegal mining sites using machine learning and geospatial data

Andrews Ata Kangah is a team leader and researcher working on democratising AI and on AI solutions for environmental problems. We spoke to him about his research using machine learning and geospatial data to localise illegal mining sites in Ghana, and about his experience attending the AfriClimate AI workshop at the Deep Learning Indaba.

Multi-agent path finding in continuous environments

Multi-agent path finding describes a problem where a group of agents (robots, vehicles, or even people) are each trying to get from their starting positions to their goal positions without colliding. In this blog post, Kristýna Janovská and Pavel Surynek write about their method for multi-agent path finding in continuous environments, where agents move on sets of smooth paths.
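The blog post itself deals with continuous environments and smooth paths; purely as a rough, hypothetical illustration of the underlying problem (and not of the authors' method), the sketch below checks a set of precomputed grid paths for the two classic collision types: two agents occupying the same cell at the same time step, or swapping cells between steps. The grid setting, function names and example paths are our own simplification.

```python
# Toy illustration of the multi-agent path finding (MAPF) problem on a grid.
# Hypothetical sketch only -- the post discussed above works in continuous
# space with smooth paths, not on a discrete grid.

from itertools import combinations

def position_at(path, t):
    """An agent waits at its goal once its path is finished."""
    return path[min(t, len(path) - 1)]

def find_conflicts(paths):
    """Return (time, agent_i, agent_j) triples where two agents collide.

    A collision is either a vertex conflict (same cell at the same time)
    or an edge conflict (two agents swapping cells between t and t+1).
    """
    conflicts = []
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        for i, j in combinations(range(len(paths)), 2):
            a_now, b_now = position_at(paths[i], t), position_at(paths[j], t)
            a_next, b_next = position_at(paths[i], t + 1), position_at(paths[j], t + 1)
            if a_next == b_next:                       # vertex conflict
                conflicts.append((t + 1, i, j))
            elif a_now == b_next and b_now == a_next:  # edge (swap) conflict
                conflicts.append((t + 1, i, j))
    return conflicts

# Two agents on a short corridor: agent 0 moves right, agent 1 moves left.
paths = [
    [(0, 0), (1, 0), (2, 0)],   # agent 0
    [(2, 0), (1, 0), (0, 0)],   # agent 1 -- meets agent 0 in the middle
]
print(find_conflicts(paths))    # [(1, 0, 1)]: they collide at time step 1
```

A MAPF solver's job is to produce paths for which a check like this returns no conflicts while keeping the journeys as short as possible; the continuous-environment setting replaces the grid and time steps with smooth trajectories.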

RoboCup teams up with Booster, Fourier and Unitree

The RoboCup Federation, an international initiative that uses the RoboCup competition series and challenges as a platform to promote and advance robotics and AI research, has announced new partnerships with three robotics companies: Booster Robotics, Fourier Intelligence and Unitree Robotics. The aim is for the companies' humanoid robot hardware to be used in future RoboCup competitions.

AI is not a “stochastic parrot,” it’s a mirror

In this interview in Vox, Shannon Vallor talks about some of the ideas from her new book, The AI Mirror. “One thing I hear in every country that I travel to to speak about AI is: Are humans really any different from AI? Aren’t we at the end of the day just predictive text machines? Are we ever doing anything other than pattern matching and pattern generation? That rhetorical strategy is actually what scares me. It’s not the machines themselves. It’s the rhetoric of AI today that is about gaslighting humans into surrendering their own power and their own confidence in their agency and freedom. That’s the existential threat, because that’s what will enable humans to feel like we can just take our hands off the wheel and let AI drive.”

How to benefit from AI without losing your human self

In this fireside chat from the IEEE Computational Intelligence Society, Tayo Obafemi-Ajayi (Missouri State University) asks Hava T. Siegelmann (University of Massachusetts, Amherst) about how to benefit from AI without losing your human self.

2024 AAAI / ACM SIGAI doctoral consortium interviews

Over the course of the year, we’ve had the privilege of meeting a number of the 2024 AAAI / ACM SIGAI doctoral consortium participants. In this collection we’ve compiled links to all of the interviews.

Looking back over 2024

We’ve had the opportunity to work with many talented researchers during 2024. In these two posts, we’ve collected some of our favourite interviews and blog posts: AIhub interview highlights 2024 | AIhub blog post highlights 2024.


Our resources page
Our events page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows 2024 interview series
AI around the world focus series
New voices in AI series





Lucy Smith is Senior Managing Editor for AIhub.



