AIhub.org
 

Congratulations to the NeurIPS 2021 award winners!

by Lucy Smith
02 December 2021




The thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021) will be held from Monday 6 December to Tuesday 14 December. This week, the awards committees announced the winners of the outstanding paper award, the test of time award and – for the first time – the best paper award in the new datasets and benchmarks track.

Outstanding paper award

Six articles received outstanding paper awards this year. The winners are:

A Universal Law of Robustness via Isoperimetry
Sébastien Bubeck and Mark Sellke
The authors propose a theoretical model to explain why many state-of-the-art deep networks require many more parameters than are necessary to smoothly fit the training data.
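As a rough paraphrase of the paper's headline bound (the notation here is a summary, not the authors' exact statement): any model with $p$ parameters that smoothly fits $n$ training points in $d$ dimensions must have Lipschitz constant at least on the order of

```latex
\mathrm{Lip}(f) \;\gtrsim\; \sqrt{\frac{nd}{p}}
```

so achieving an $O(1)$ Lipschitz constant (smooth, robust fitting) requires roughly $p \gtrsim nd$ parameters, far more than the $n$ needed for mere interpolation.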

On the Expressivity of Markov Reward
David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael Littman, Doina Precup and Satinder Singh
This paper provides a clear exposition of when Markov rewards are, or are not, sufficient to enable a system designer to specify a task, in terms of their preference for a particular behaviour, preferences over behaviours, or preferences over state and action sequences.

Deep Reinforcement Learning at the Edge of the Statistical Precipice
Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville and Marc G. Bellemare
This work presents practical approaches to improve the rigor of deep reinforcement learning algorithm comparison: specifically, that the evaluation of new algorithms should provide stratified bootstrap confidence intervals, performance profiles across tasks and runs, and interquartile means.
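As an illustration, here is a minimal sketch of two of those tools, the interquartile mean (IQM) and a percentile-bootstrap confidence interval, on hypothetical per-run scores. The data and helper names are invented for this sketch; the authors released the `rliable` library implementing their full recommendations.

```python
import numpy as np

def interquartile_mean(scores):
    """Mean of the scores lying between the 25th and 75th percentiles."""
    q1, q3 = np.percentile(scores, [25, 75])
    middle = scores[(scores >= q1) & (scores <= q3)]
    return middle.mean()

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the interquartile mean."""
    rng = np.random.default_rng(seed)
    stats = [
        interquartile_mean(rng.choice(scores, size=len(scores), replace=True))
        for _ in range(n_resamples)
    ]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical normalized scores for one algorithm across runs
scores = np.array([0.8, 1.1, 0.9, 3.0, 1.0, 0.95, 1.05, 0.7, 1.2, 0.85])
iqm = interquartile_mean(scores)      # robust to the 3.0 outlier run
ci_low, ci_high = bootstrap_ci(scores)
```

Note how the IQM discards the extreme runs (here the 3.0 outlier), which is the property that makes it a more reliable point estimate than the plain mean when only a handful of runs are available.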

MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers
Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi and Zaid Harchaoui
This paper presents MAUVE, a divergence measure to compare the distribution of model-generated text with the distribution of human-generated text.
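The full MAUVE pipeline embeds text with a language model and quantizes the embeddings; as a toy illustration of the underlying divergence-frontier idea only, here is how a frontier can be traced between two small discrete distributions (the distributions below are invented stand-ins):

```python
import numpy as np

def kl(p, q):
    """KL divergence KL(p || q) for discrete distributions (natural log)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def divergence_frontier(p, q, lambdas):
    """Points (KL(q || r), KL(p || r)) for mixtures r = λp + (1-λ)q."""
    frontier = []
    for lam in lambdas:
        r = lam * p + (1 - lam) * q
        frontier.append((kl(q, r), kl(p, r)))
    return frontier

p = np.array([0.5, 0.3, 0.2])   # hypothetical "human text" distribution
q = np.array([0.4, 0.4, 0.2])   # hypothetical "model text" distribution
frontier = divergence_frontier(p, q, np.linspace(0.01, 0.99, 9))
```

Sweeping the mixture weight λ traces a curve of divergence pairs; summarizing that curve (rather than a single divergence value) is what lets the measure capture both the quality and the diversity gaps between the two distributions.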

Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms
Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié and Adrien Taylor
This paper describes a “continuized” version of Nesterov’s accelerated gradient method, in which two separate vector variables evolve jointly in continuous time, with gradient updates occurring at random times determined by a Poisson point process. This new approach leads to a (randomized) discrete-time method.

Moser Flow: Divergence-based Generative Modeling on Manifolds
Noam Rozen, Aditya Grover, Maximilian Nickel and Yaron Lipman
In this work, the authors propose a method for training continuous normalizing flow (CNF) generative models over Riemannian manifolds.

Test of time award

This year, the test of time award goes to a paper from 2010:

Online Learning for Latent Dirichlet Allocation
Matthew Hoffman, David Blei and Francis Bach
This paper introduced a stochastic variational gradient-based inference procedure for training Latent Dirichlet Allocation (LDA) models on very large text corpora. The idea has had a significant impact on the ML community, providing the first stepping stone for general stochastic gradient variational inference procedures for a much broader class of models.
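Schematically, the heart of such a procedure is a Robbins-Monro update: blend the current global variational parameter with a noisy estimate computed from a single minibatch, using a decaying step size. The sketch below uses an invented Gaussian stand-in for the minibatch estimate rather than an actual LDA fit:

```python
import numpy as np

def step_size(t, tau0=1.0, kappa=0.7):
    """Robbins-Monro schedule rho_t = (tau0 + t)^(-kappa), kappa in (0.5, 1]."""
    return (tau0 + t) ** (-kappa)

rng = np.random.default_rng(0)
optimum = np.array([2.0, -1.0, 0.5])  # stands in for the optimal global parameter
lam = np.zeros(3)                     # current global variational parameter
for t in range(5000):
    # Noisy per-minibatch estimate of the optimum (a stand-in for the
    # intermediate parameter computed from one minibatch of documents)
    noisy_estimate = optimum + rng.normal(scale=1.0, size=3)
    rho = step_size(t)
    lam = (1 - rho) * lam + rho * noisy_estimate
```

Because each update touches only one minibatch, the cost per step is independent of corpus size, which is what made training on very large text collections feasible.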

Datasets & benchmarks best paper award

There were two awards in this category:

Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research
Bernard Koch, Emily Denton, Alex Hanna and Jacob Gates Foster
This work analyzes thousands of papers, and studies the evolution of dataset use within different machine learning subcommunities. It finds that, in most communities, there is an evolution towards using fewer different datasets over time, and that these datasets come from a handful of elite institutions.

ATOM3D: Tasks on Molecules in Three Dimensions
Raphael John Lamarre Townshend, Martin Vögele, Patricia Adriana Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon M. Anderson, Stephan Eismann, Risi Kondor, Russ Altman and Ron O. Dror
The authors introduce a collection of benchmark datasets with 3D representations of small molecules and/or biopolymers for solving a wide range of problems, from single molecular structure prediction to design and engineering tasks.

You can find out more about the awards in this blog post.

More information about the talks, workshops and tutorials can be found here.





Lucy Smith, Managing Editor for AIhub.




©2024 - Association for the Understanding of Artificial Intelligence


 











