What’s coming up at #NeurIPS2024?


By Lucy Smith
05 December 2024



Vancouver cityscape

The thirty-eighth Conference on Neural Information Processing Systems (NeurIPS 2024) will take place in Vancouver, Canada, from Tuesday 10 December to Sunday 15 December. There is a bumper programme of events, including invited talks, orals, posters, tutorials, workshops, and socials, not to mention AIhub’s session on science communication.

Invited talks

There are seven invited talks this year:

  • Alison Gopnik – The Golem vs. stone soup: Understanding how children learn can help us understand and improve AI
  • Sepp Hochreiter – Toward industrial artificial intelligence
  • Fei-Fei Li – From seeing to doing: Ascending the ladder of visual intelligence
  • Lidong Zhou – A match made in silicon: The co-evolution of systems and AI
  • Arnaud Doucet – From diffusion models to Schrödinger bridges
  • Danica Kragic – Learning for interaction and interaction for learning
  • Rosalind Picard – How to optimize what matters most?

Affinity group workshops

The affinity group workshops will take place from Tuesday 10 to Thursday 12 December.

Science communication for AI researchers – an introduction

We (AIhub) will be running a short course on science communication on Tuesday 10 December. Find out more here.

Tutorials

There will be a total of 14 tutorials this year, all held on Tuesday 10 December.

  • Evaluating Large Language Models – Principles, Approaches, and Applications, Bo Li, Irina Sigler, Yuan Xue
  • Dynamic Sparsity in Machine Learning: Routing Information through Neural Pathways, Edoardo Maria Ponti, André Martins
  • Opening the Language Model Pipeline: A Tutorial on Data Preparation, Model Training, and Adaptation, Kyle Lo, Akshita Bhagia, Nathan Lambert
  • Watermarking for Large Language Models, Yu-Xiang Wang, Lei Li, Xuandong Zhao
  • Causality for Large Language Models, Zhijing Jin, Sergio Garrido
  • Flow Matching for Generative Modeling, Ricky T. Q. Chen, Yaron Lipman, Heli Ben-Hamu
  • Experimental Design and Analysis for AI Researchers, Michael Mozer, Katherine Hermann, Jennifer Hu
  • PrivacyML: Meaningful Privacy-Preserving Machine Learning and How To Evaluate AI Privacy, Mimee Xu, Dmitrii Usynin, Fazl Barez
  • Advancing Data Selection for Foundation Models: From Heuristics to Principled Methods, Jiachen (Tianhao) Wang, Ludwig Schmidt, Ruoxi Jia
  • Cross-disciplinary insights into alignment in humans and machines, Gillian Hadfield, Dylan Hadfield-Menell, Joel Leibo, Rakshit Trivedi
  • Generating Programmatic Solutions: Algorithms and Applications of Programmatic Reinforcement Learning and Code Generation, Levi Lelis, Xinyun Chen, Shao-Hua Sun
  • Out-of-Distribution Generalization: Shortcuts, Spuriousness, and Stability, Maggie Makar, Aahlad Manas Puli, Yoav Wald
  • Beyond Decoding: Meta-Generation Algorithms for Large Language Models, Matthew Finlayson, Hailey Schoelkopf, Sean Welleck
  • Sandbox for the Blackbox: How LLMs Learn Structured Data?, Bingbin Liu, Ashok Vardhan Makkuva, Jason Lee

Find out more about the tutorials here.

Workshops

The workshops will take place on Saturday 14 and Sunday 15 December.

Find out more about the workshops here.

Accepted papers and other events





Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:


