ICML 2020 Test of Time award


by Lucy Smith
13 July 2020



The International Conference on Machine Learning (ICML) Test of Time award is given to a paper from ICML ten years ago that has had significant impact. This year the award goes to Niranjan Srinivas, Andreas Krause, Sham Kakade and Matthias Seeger for their work “Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design”.

The award was announced by the conference chairs on 1 July.

On their award page the ICML Test of Time Award committee members explain the significance of the paper:

This paper brought together the fields of Bayesian optimization, bandits and experimental design by analyzing Gaussian process bandit optimization, giving a novel approach to derive finite-sample regret bounds in terms of a mutual information gain quantity. This paper has had profound impact over the past ten years, including the method itself, the proof techniques used, and the practical results. These have all enriched our community by sparking creativity in myriad subsequent works, ranging from theory to practice.

The authors reacted to the good news.

In a special award session the authors gave a plenary talk describing their work. To conclude their presentation, they took a brief look back over the ten years since the publication of their paper and noted that the community has been working on a number of exciting related topics during that time. These areas include:
Theory of Bayesian optimisation and kernelised bandits – exploring other acquisition functions and high-dimensional settings, developing fast algorithms, and establishing lower bounds.
Variants of Bayesian optimisation and kernelised bandits – much of this research is motivated by practical applications. For example, in experimental design settings you might want to schedule experiments in parallel or in batches, trade off multiple objectives, or take constraints into account (motivated by safety and robustness considerations).
More general models – analysis tools similar to the one reported in this paper have found exciting applications. These include neural bandits and the neural tangent kernel, Thompson sampling, and reinforcement learning.

In addition to these theoretical developments, there has been a lot of exciting work on applications of GP-UCB and Bayesian optimisation more broadly. Bayesian optimisation is now used extensively in industry for problems related to automatic machine learning, robotics, recommender systems, environmental monitoring, protein design, and much more.

Read the winning paper

The abstract on arXiv.
The full paper as a PDF.





Lucy Smith is Senior Managing Editor for AIhub.




