AIhub.org
 

Congratulations to the #ICML2023 outstanding paper award winners


26 July 2023




This year’s International Conference on Machine Learning (ICML) is taking place in Honolulu, Hawai’i from 23-29 July. The winners of the outstanding paper awards for 2023 have now been announced. The six papers chosen are as follows:


Learning-Rate-Free Learning by D-Adaptation
Aaron Defazio (FAIR), Konstantin Mishchenko (Samsung AI Center)

This paper introduces an interesting approach to the challenge of obtaining optimal, learning-rate-free convergence bounds for non-smooth stochastic convex optimization. The authors propose a novel method that removes the need for the learning-rate tuning traditionally required when optimizing such problems. This research makes a valuable and practical contribution to the field of optimization.
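
In practice the method is intended as a drop-in replacement for a hand-tuned optimizer. Below is a minimal usage sketch, assuming the authors' open-source dadaptation package (installable via pip) exposes a DAdaptSGD class with the standard PyTorch optimizer interface; the model, the synthetic data, and the choice of the SGD variant are illustrative only.

    # Minimal sketch: training a toy model without choosing a learning rate,
    # assuming the dadaptation package provides DAdaptSGD with the usual
    # torch.optim interface. The lr argument acts only as a multiplier on the
    # internally estimated step size, so it is left at its default of 1.0.
    import torch
    import dadaptation

    model = torch.nn.Linear(10, 1)
    loss_fn = torch.nn.MSELoss()
    optimizer = dadaptation.DAdaptSGD(model.parameters(), lr=1.0)

    for step in range(100):
        x = torch.randn(32, 10)          # illustrative random inputs
        y = torch.randn(32, 1)           # illustrative random targets
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()                 # step size adapted from an online
                                         # estimate of the distance to the solution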


A Watermark for Large Language Models
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein (University of Maryland)

This paper proposes a method for watermarking the output of large language models, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable. The watermark can be generated without re-training the language model and detected without access to the model's API or parameters. The paper also proposes a statistical test for detecting watermarks with interpretable p-values, and an information-theoretic framework for analyzing its sensitivity. The proposed method is simple and novel, and the paper presents a thorough theoretical analysis alongside solid experiments. Given the critical challenges that arise in detecting and auditing synthetic text generated by LLMs, this paper has the potential to have a significant impact on the community.
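
The detection side of such a scheme reduces to a one-sided z-test on the count of "green-list" tokens. Below is a self-contained Python sketch of that test, not the authors' reference implementation: for illustration, each token is assigned to the green list by hashing the preceding token, and the constant GAMMA and the helper names are hypothetical.

    # Schematic watermark detector: count how many tokens fall in the
    # pseudo-random "green list" determined by the previous token, then test
    # whether that count exceeds the expected fraction GAMMA of green tokens.
    import hashlib
    import math

    GAMMA = 0.25  # assumed fraction of the vocabulary marked green at each step

    def is_green(prev_token_id: int, token_id: int) -> bool:
        """Pseudo-randomly assign token_id to the green list, seeded by prev_token_id."""
        digest = hashlib.sha256(f"{prev_token_id}:{token_id}".encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64 < GAMMA

    def watermark_z_score(token_ids: list[int]) -> float:
        """z-score of the green-token count against the null of unwatermarked text."""
        t = len(token_ids) - 1
        greens = sum(is_green(p, c) for p, c in zip(token_ids, token_ids[1:]))
        return (greens - GAMMA * t) / math.sqrt(t * GAMMA * (1 - GAMMA))

A z-score far above zero (with the corresponding p-value read off the standard normal tail) would be strong evidence of the watermark, assuming the generator boosted green-list logits using the same hash and the same GAMMA.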


Generalization on the Unseen, Logic Reasoning and Degree Curriculum
Emmanuel Abbe (EPFL, Apple), Samy Bengio (Apple), Aryo Lotfi (EPFL), Kevin Rizk (EPFL)

This work provides a significant advancement in the learning of Boolean functions, specifically targeting the Generalization on the Unseen (GOTU) setting, which poses a challenging out-of-distribution generalization problem. The paper extensively delves into this important topic, offering a well-structured approach supported by theoretical analysis and extensive experimentation. Moreover, it stands out by outlining a key research direction in the realm of deep neural networks.


Adapting to game trees in zero-sum imperfect information games
Côme Fiegel (CREST, ENSAE, IP Paris), Pierre Ménard (ENS Lyon), Tadashi Kozuno (OMRON SINIC X), Rémi Munos (DeepMind), Vianney Perchet (CREST, ENSAE, IP Paris and Criteo AI Lab), Michal Valko (DeepMind)

This paper introduces near-optimal strategies for imperfect information zero-sum games. It rigorously establishes a novel lower bound and presents two algorithms, Balanced FTRL and Adaptive FTRL. These contributions significantly advance the field of optimization in imperfect information games. The experiments substantiate the claims, providing ample support for the findings.


Self-Repellent Random Walks on General Graphs – Achieving Minimal Sampling Variance via Nonlinear Markov Chains
Vishwaraj Doshi (IQVIA Inc), Jie Hu (North Carolina State University), Do Young Eun (North Carolina State University)

This paper tackles a challenging open problem: MCMC with self-repellent random walks. It goes beyond traditional non-backtracking approaches and paves the way for new research directions in MCMC sampling. The authors make an original and nontrivial contribution to the Markov chain Monte Carlo literature; it is remarkable that the process can be rigorously analyzed and its properties proven. The paper is well-written, presenting clear and intuitive explanations of the main concepts. The results are convincing and comprehensive.


Bayesian Design Principles for Frequentist Sequential Learning
Yunbei Xu, Assaf Zeevi (Columbia University)

This paper tackles the very general problem of designing bandit and other sequential decision-making strategies. It proposes methods for bounding the regret of any strategy using a novel quantity called the algorithmic information ratio, and derives methods for optimizing this bound. The bound is tighter than similar earlier information-theoretic quantities, and the methods do well in both stochastic and adversarial bandit settings, achieving best-of-all-worlds. What is particularly interesting is that the paper possibly opens the door to a whole new line of exploration-exploitation strategies beyond the well-known Thompson Sampling and UCB for bandits. The fact that this principle extends to reinforcement learning is very promising. The paper was unanimously and strongly supported by the expert reviewers.
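
For context on the kind of quantity involved, one of the earlier information-theoretic quantities the summary alludes to is the Bayesian information ratio of Russo and Van Roy (shown here as background, not the paper's algorithmic variant), together with the regret bound it yields:

\[
\Gamma_t \;=\; \frac{\bigl(\mathbb{E}_t\!\left[f(A^\star) - f(A_t)\right]\bigr)^2}{I_t\!\bigl(A^\star;\,(A_t, Y_t)\bigr)},
\qquad
\mathbb{E}\bigl[\mathrm{Regret}(T)\bigr] \;\le\; \sqrt{\bar{\Gamma}\, H(A^\star)\, T},
\]

where $f(A^\star) - f(A_t)$ is the per-round gap to the optimal action, $I_t$ is the conditional mutual information between the optimal action and the round-$t$ observation, $\bar{\Gamma}$ is a uniform bound on $\Gamma_t$, and $H(A^\star)$ is the entropy of the prior over the optimal action.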


The awards committee comprised Danqi Chen, Bohyung Han, Samuel Kaski, Mengdi Wang and Tong Zhang. You can find out more about how the papers were selected here.

Note: paper summaries courtesy of ICML.





AIhub is dedicated to free high-quality information about AI.






Related posts:



2024 AAAI / ACM SIGAI Doctoral Consortium interviews compilation

  20 Dec 2024
We collate our interviews with the 2024 cohort of doctoral consortium participants.

Interview with Andrews Ata Kangah: Localising illegal mining sites using machine learning and geospatial data

  19 Dec 2024
We spoke to Andrews to find out more about his research, and attending the AfriClimate AI workshop at the Deep Learning Indaba.

#NeurIPS social media round-up part 2

  18 Dec 2024
We pick out some highlights from the second half of the conference.

The Good Robot podcast: Machine vision with Jill Walker Rettberg

  17 Dec 2024
Eleanor and Kerry talk to Jill about machine vision's origins in polished volcanic glass, whether or not we'll actually have self-driving cars, and a famous photoshopped image.

Five ways you might already encounter AI in cities (and not realise it)

  13 Dec 2024
Researchers studied how residents and visitors experience the presence of AI in public spaces in the UK.

#NeurIPS2024 social media round-up part 1

  12 Dec 2024
Find out what participants have been getting up to at the Neural Information Processing Systems conference in Vancouver.

Congratulations to the #NeurIPS2024 award winners

  11 Dec 2024
Find out who has been recognised by the conference awards.

Multi-agent path finding in continuous environments

  11 Dec 2024
How can a group of agents minimise their journey length whilst avoiding collisions?










©2024 - Association for the Understanding of Artificial Intelligence