AIhub.org
 

Congratulations to the NeurIPS 2021 award winners!


by Lucy Smith
02 December 2021




The thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021) will be held from Monday 6 December to Tuesday 14 December. This week, the awards committees announced the winners of the outstanding paper award, the test of time award and – for the first time – the best paper award in the new datasets and benchmarks track.

Outstanding paper award

Six articles received outstanding paper awards this year. The winners are:

A Universal Law of Robustness via Isoperimetry
Sébastien Bubeck and Mark Sellke
The authors propose a theoretical model to explain why many state-of-the-art deep networks require many more parameters than are necessary to smoothly fit the training data.

On the Expressivity of Markov Reward
David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael Littman, Doina Precup and Satinder Singh
This paper provides a clear exposition of when Markov rewards are, or are not, sufficient to enable a system designer to specify a task, in terms of their preference for a particular behaviour, preferences over behaviours, or preferences over state and action sequences.

Deep Reinforcement Learning at the Edge of the Statistical Precipice
Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville and Marc G. Bellemare
This work presents practical approaches to improve the rigor of deep reinforcement learning algorithm comparison: specifically, that the evaluation of new algorithms should provide stratified bootstrap confidence intervals, performance profiles across tasks and runs, and interquartile means.
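The interquartile mean (IQM) the authors recommend is straightforward to compute: discard the bottom and top 25% of run scores and average the middle 50%, then report a percentile bootstrap confidence interval. The sketch below is a simplified single-task version (the paper's stratified bootstrap additionally resamples across tasks); the per-run scores are illustrative toy data.

```python
import random
from statistics import mean

def iqm(scores):
    """Interquartile mean: mean of the middle 50% of sorted scores."""
    s = sorted(scores)
    n = len(s)
    return mean(s[n // 4 : n - n // 4])

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the IQM."""
    rng = random.Random(seed)
    stats = sorted(
        iqm([rng.choice(scores) for _ in scores])
        for _ in range(n_resamples)
    )
    lo = stats[int(alpha / 2 * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Toy per-run scores with one outlier run; the IQM is robust to it.
runs = [0.1, 0.4, 0.5, 0.55, 0.6, 0.62, 0.7, 2.0]
print(iqm(runs))            # 0.5675 — unaffected by the 2.0 outlier
print(bootstrap_ci(runs))   # (lower, upper) bound of the 95% CI
```

Unlike the plain mean, the IQM here ignores both the failed run (0.1) and the outlier (2.0), which is exactly the robustness the paper argues for when comparing deep RL algorithms across a small number of seeds.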

MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers
Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi and Zaid Harchaoui
This paper presents MAUVE, a divergence measure to compare the distribution of model-generated text with the distribution of human-generated text.

Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms
Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié and Adrien Taylor
This paper describes a “continuized” version of Nesterov’s accelerated gradient method, in which two separate vector variables evolve jointly in continuous time, with gradient updates occurring at random times determined by a Poisson point process. This new approach leads to a (randomized) discrete-time method.

Moser Flow: Divergence-based Generative Modeling on Manifolds
Noam Rozen, Aditya Grover, Maximilian Nickel and Yaron Lipman
In this work, the authors propose a method for training continuous normalizing flow (CNF) generative models over Riemannian manifolds.

Test of time award

This year, the test of time award goes to a paper from 2010:

Online Learning for Latent Dirichlet Allocation
Matthew Hoffman, David Blei and Francis Bach
This paper introduced a stochastic variational gradient based inference procedure for training Latent Dirichlet Allocation (LDA) models on very large text corpora. The idea has had a significant impact on the ML community. It provided the first stepping stone for general stochastic gradient variational inference procedures for a much broader class of models.
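The core idea of the paper, processing a large corpus in minibatches with noisy stochastic variational updates rather than full-batch passes, is now a standard option in common libraries. A minimal sketch using scikit-learn's online LDA (the tiny random corpus is purely illustrative, not from the paper):

```python
# Online (minibatch) LDA training in the spirit of Hoffman et al.'s
# stochastic variational inference, via scikit-learn's implementation.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
vocab_size, n_topics = 50, 3

lda = LatentDirichletAllocation(
    n_components=n_topics,
    learning_method="online",   # stochastic variational updates
    learning_offset=10.0,       # tau_0: down-weights early iterations
    learning_decay=0.7,         # kappa: step-size decay exponent
    random_state=0,
)

# Stream the corpus in minibatches, as one would for a corpus too large
# to fit in memory; each partial_fit call is one stochastic update.
for _ in range(20):
    minibatch = rng.poisson(0.5, size=(16, vocab_size))  # 16 toy documents
    lda.partial_fit(minibatch)

topic_word = lda.components_   # unnormalized topic-word weights
print(topic_word.shape)        # (3, 50)
```

Because each update touches only one minibatch, memory use is independent of corpus size, which is what made LDA tractable on very large text collections.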

Datasets & benchmarks best paper award

There were two awards in this category:

Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research
Bernard Koch, Emily Denton, Alex Hanna, Jacob Gates Foster
This work analyzes thousands of papers, and studies the evolution of dataset use within different machine learning subcommunities. It finds that, in most communities, there is an evolution towards using fewer different datasets over time, and that these datasets come from a handful of elite institutions.

ATOM3D: Tasks on Molecules in Three Dimensions
Raphael John Lamarre Townshend, Martin Vögele, Patricia Adriana Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon M. Anderson, Stephan Eismann, Risi Kondor, Russ Altman and Ron O. Dror
The authors introduce a collection of benchmark datasets with 3D representations of small molecules and/or biopolymers for solving a wide range of problems, from single molecular structure prediction to design and engineering tasks.

You can find out more about the awards in the NeurIPS blog post.

More information about the talks, workshops and tutorials can be found on the NeurIPS website.





Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:









©2025.05 - Association for the Understanding of Artificial Intelligence


 











