ICML 2020 Test of Time award

by Lucy Smith
13 July 2020



The International Conference on Machine Learning (ICML) Test of Time award is given to a paper presented at ICML ten years earlier that has had significant impact. This year the award goes to Niranjan Srinivas, Andreas Krause, Sham Kakade and Matthias Seeger for their work “Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design”.

The award was announced by the conference chairs on 1 July.

On their award page, the ICML Test of Time Award committee members explain the significance of the paper:

This paper brought together the fields of Bayesian optimization, bandits and experimental design by analyzing Gaussian process bandit optimization, giving a novel approach to derive finite-sample regret bounds in terms of a mutual information gain quantity. This paper has had profound impact over the past ten years, including the method itself, the proof techniques used, and the practical results. These have all enriched our community by sparking creativity in myriad subsequent works, ranging from theory to practice.

The authors reacted with delight to the good news.

In a special award session the authors gave a plenary talk describing their work. To summarise their presentation, they took a brief look back over the ten years since the publication of their paper and noted that the community has been working on a number of exciting related topics during that time. These areas include:
Theory of Bayesian optimisation and kernelised bandits – exploring other acquisition functions and high-dimensional settings, developing fast algorithms, and establishing lower bounds.
Variants of Bayesian optimisation and kernelised bandits – much of this research is motivated by practical applications. For example, in experimental design settings you might want to schedule experiments in parallel or in batches, trade off multiple objectives, or take constraints into account (motivated by safety and robustness considerations).
More general models – analysis tools similar to those developed in this paper have found exciting applications, including neural bandits and the neural tangent kernel, Thompson sampling, and reinforcement learning.

In addition to these theoretical developments, there has been a lot of exciting work on applications of GP-UCB and Bayesian optimisation more broadly. Bayesian optimisation is now used extensively in industry for problems related to automatic machine learning, robotics, recommender systems, environmental monitoring, protein design, and much more.
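The GP-UCB rule at the heart of the winning paper is simple to sketch: at each round, fit a Gaussian process to the observations gathered so far, then query the point maximising the posterior mean plus a scaled posterior standard deviation. The snippet below is a minimal illustrative sketch (not the authors' code) using scikit-learn's Gaussian process regressor on a toy one-dimensional objective; the RBF kernel, the exploration weight beta, and the candidate grid are arbitrary choices made for this example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_step(gpr, candidates, beta=2.0):
    """Pick the candidate maximising mean + sqrt(beta) * std (the UCB rule)."""
    mu, sigma = gpr.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + np.sqrt(beta) * sigma)]

# Toy 1-D objective with its maximum at x = 0.6, observed with noise.
rng = np.random.default_rng(0)
f = lambda x: -(x - 0.6) ** 2

X = rng.uniform(0, 1, size=(3, 1))            # a few initial observations
y = f(X).ravel() + 0.01 * rng.standard_normal(3)
candidates = np.linspace(0, 1, 101).reshape(-1, 1)

for _ in range(10):                            # a few rounds of GP-UCB
    # Fixed kernel (optimizer=None) keeps the sketch simple and deterministic.
    gpr = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-4,
                                   optimizer=None).fit(X, y)
    x_next = gp_ucb_step(gpr, candidates)
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next[0]) + 0.01 * rng.standard_normal())

best = float(X[np.argmax(y)][0])
print(best)  # best observed point; should lie close to the optimum at 0.6
```

The paper's contribution is not the rule itself but the regret analysis: it bounds cumulative regret in terms of the maximum information gain of the kernel, which is what makes the choice of beta principled rather than ad hoc.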

Read the winning paper

The abstract on arXiv.
The full paper as a PDF.





Lucy Smith, Managing Editor for AIhub.











©2021 - Association for the Understanding of Artificial Intelligence