AIhub.org
 

Congratulations to the #ICML2023 outstanding paper award winners

26 July 2023

This year’s International Conference on Machine Learning (ICML) is taking place in Honolulu, Hawai’i from 23-29 July. The winners of the outstanding paper awards for 2023 have now been announced. The six papers chosen are as follows:


Learning-Rate-Free Learning by D-Adaptation
Aaron Defazio (FAIR), Konstantin Mishchenko (Samsung AI Center)

This paper introduces an interesting approach to the challenge of obtaining an optimal convergence bound for non-smooth stochastic convex optimization without tuning a learning rate. The authors propose a novel method that overcomes the limitations imposed by traditional learning-rate selection in such problems. This research makes a valuable and practical contribution to the field of optimization.


A Watermark for Large Language Models
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein (University of Maryland)

This paper proposes a method for watermarking the output of large language models, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable. The watermark can be embedded without re-training the language model and detected without access to the model API or parameters. The paper also proposes a statistical test for detecting watermarks with interpretable p-values, along with an information-theoretic framework for analyzing its sensitivity. The proposed method is simple yet novel, and is backed by a thorough theoretical analysis and solid experiments. Given the critical challenges that arise in detecting and auditing synthetic text generated by LLMs, this paper has the potential to have a significant impact on the community.
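The green-list mechanism at the heart of the paper can be sketched in a few lines. In this toy version (the vocabulary, green fraction, and all function names are ours, and generation is crudely forced onto the green list rather than softly biasing logits as the paper does), a hash of the previous token seeds an RNG that partitions the vocabulary, and detection is a one-sided z-test on how many tokens land in their green lists:

```python
import hashlib
import math
import random

GREEN_FRACTION = 0.5                       # fraction of vocabulary marked "green"
VOCAB = [f"tok{i}" for i in range(1000)]   # toy vocabulary, not the paper's

def green_list(prev_token):
    """Seed an RNG from a hash of the previous token, then draw the green set."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GREEN_FRACTION * len(VOCAB))))

def watermark_p_value(tokens):
    """One-sided z-test: are green tokens over-represented relative to chance?"""
    n = len(tokens) - 1
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    z = (hits - GREEN_FRACTION * n) / math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return 0.5 * math.erfc(z / math.sqrt(2))  # p-value under "no watermark"

def toy_watermarked_text(length, start="tok0", seed=0):
    """Stand-in for a watermarked LLM: always sample from the green list."""
    rng, out = random.Random(seed), [start]
    for _ in range(length):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out
```

A real generator instead adds a small bias to the green tokens' logits before sampling, preserving text quality while driving the detector's p-value for machine text toward zero; a human writer, oblivious to the green lists, hits them only about half the time.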


Generalization on the Unseen, Logic Reasoning and Degree Curriculum
Emmanuel Abbe (EPFL, Apple), Samy Bengio (Apple), Aryo Lotfi (EPFL), Kevin Rizk (EPFL)

This work provides a significant advancement in the learning of Boolean functions, specifically targeting the Generalization on the Unseen (GOTU) setting, which poses a challenging out-of-distribution generalization problem. The paper extensively delves into this important topic, offering a well-structured approach supported by theoretical analysis and extensive experimentation. Moreover, it stands out by outlining a key research direction in the realm of deep neural networks.


Adapting to game trees in zero-sum imperfect information games
Côme Fiegel (CREST, ENSAE, IP Paris), Pierre Ménard (ENS Lyon), Tadashi Kozuno (Omron Sinic X), Rémi Munos (DeepMind), Vianney Perchet (CREST, ENSAE, IP Paris and Criteo AI Lab), Michal Valko (DeepMind)

This paper introduces near-optimal strategies for imperfect information zero-sum games. It rigorously establishes a novel lower bound and presents two algorithms, Balanced FTRL and Adaptive FTRL. These contributions significantly advance the field of optimization in imperfect information games. The experiments substantiate the claims, providing ample support for the findings.
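Both algorithms belong to the follow-the-regularized-leader (FTRL) family. As background flavour only (this is textbook entropy-regularized FTRL, i.e. Hedge, in self-play on rock-paper-scissors, not the paper's Balanced or Adaptive FTRL, which handle the far harder tree-structured setting), the sketch below shows the family's characteristic behaviour: time-averaged strategies drifting toward equilibrium.

```python
import math

# Row player's payoff matrix for rock-paper-scissors; the game value is 0.
RPS = [[0, -1, 1],
       [1, 0, -1],
       [-1, 1, 0]]

def hedge(cum_payoff, eta):
    """Entropy-regularized FTRL: play the softmax of cumulative payoffs."""
    m = max(cum_payoff)  # subtract the max for numerical stability
    w = [math.exp(eta * (c - m)) for c in cum_payoff]
    z = sum(w)
    return [x / z for x in w]

def self_play(T=50000, eta=0.005):
    cum_x = [0.5, 0.0, 0.0]      # nudge the row player toward rock at the start
    cum_y = [0.0, 0.0, 0.0]
    avg_x, avg_y = [0.0] * 3, [0.0] * 3
    for _ in range(T):
        x, y = hedge(cum_x, eta), hedge(cum_y, eta)
        for i in range(3):
            avg_x[i] += x[i] / T
            avg_y[i] += y[i] / T
            cum_x[i] += sum(RPS[i][j] * y[j] for j in range(3))    # row's payoff
            cum_y[i] += -sum(RPS[j][i] * x[j] for j in range(3))   # column's payoff
    return avg_x, avg_y
```

The averaged strategies approach the unique equilibrium (1/3, 1/3, 1/3); the paper's contribution is making guarantees of this kind near-optimal for imperfect-information games played on trees.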


Self-Repellent Random Walks on General Graphs – Achieving Minimal Sampling Variance via Nonlinear Markov Chains
Vishwaraj Doshi (IQVIA Inc), Jie Hu (North Carolina State University), Do Young Eun (North Carolina State University)

This paper tackles a challenging open problem: MCMC sampling with self-repellent random walks. It goes beyond traditional non-backtracking approaches and paves the way for new research directions in MCMC sampling. The authors make an original and nontrivial contribution to the Markov chain Monte Carlo literature, and it is remarkable that the process can be rigorously analyzed, with provable guarantees. The paper is well-written, presenting clear and intuitive explanations of the main concepts. The results are convincing and comprehensive.
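To give a feel for the idea (this is our own much-simplified kernel, not the authors' construction, and `alpha` is just an illustrative repellence knob), a walk can down-weight each neighbour by a power of its empirical visit share, so the chain is "nonlinear": its transition probabilities depend on its own history.

```python
import random
from collections import Counter

def self_repellent_walk(neighbors, start, steps, alpha=2.0, seed=0):
    """Random walk that down-weights moves to frequently visited nodes,
    in proportion to (visit share) ** -alpha over the current neighbours."""
    rng = random.Random(seed)
    visits = Counter({v: 1 for v in neighbors})   # start at 1 to avoid 0 ** -alpha
    node, path = start, [start]
    for _ in range(steps):
        nbrs = neighbors[node]
        total = sum(visits.values())
        weights = [(visits[j] / total) ** (-alpha) for j in nbrs]
        r, acc = rng.random() * sum(weights), 0.0
        for j, w in zip(nbrs, weights):
            acc += w
            if r <= acc:
                node = j
                break
        else:                       # guard against floating-point shortfall
            node = nbrs[-1]
        visits[node] += 1
        path.append(node)
    return path, visits

# A 6-node cycle graph: each node is linked to its two neighbours.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

On this cycle the visit counts come out noticeably flatter than a plain random walk's, which is the sampling-variance reduction the paper establishes rigorously for its (much more general) nonlinear Markov chain.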


Bayesian Design Principles for Frequentist Sequential Learning
Yunbei Xu, Assaf Zeevi (Columbia University)

This paper tackles the very general problem of designing bandit and other sequential decision-making strategies. It proposes methods for bounding the regret of any strategy using a novel quantity called the algorithmic information ratio, and derives methods for optimizing this bound. The bound is tighter than similar earlier information-theoretic quantities, and the methods do well in both stochastic and adversarial bandit settings, achieving best-of-both-worlds performance. What is particularly interesting is that the paper possibly opens the door to a whole new line of exploration-exploitation strategies beyond the well-known Thompson Sampling and UCB for bandits. The fact that this principle extends to reinforcement learning is very promising. The paper was unanimously and strongly supported by the expert reviewers.


The awards committee comprised: Danqi Chen, Bohyung Han, Samuel Kaski, Mengdi Wang and Tong Zhang. You can find out more about how the papers were selected here.

Note: paper summaries courtesy of ICML.





AIhub is dedicated to free high-quality information about AI.
©2024 - Association for the Understanding of Artificial Intelligence