#ICML2022 Test of Time award announced


by Lucy Smith
20 July 2022




The International Conference on Machine Learning (ICML) Test of Time award is given to a paper presented at the conference ten years earlier that has had a significant impact. This year the award goes to:

Poisoning Attacks against Support Vector Machines
Battista Biggio, Blaine Nelson and Pavel Laskov

The paper investigates adversarial machine learning and, specifically, poisoning attacks on support vector machines (SVMs). The awards committee noted that this is one of the earliest and most impactful papers on poisoning attacks, a theme now widely studied by the community. The authors use a gradient ascent strategy in which the gradient is computed from properties of the SVM’s optimal solution. The method can be kernelized, so it does not require an explicit feature representation. The committee judged that the paper initiated a thorough investigation of the problem and inspired significant subsequent work.

The abstract of the paper:
We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.
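
To make the idea concrete, here is a minimal, runnable sketch of this style of attack in Python. It is not the paper’s exact algorithm: Biggio et al. derive the gradient analytically from the SVM’s optimal solution, whereas this sketch simply retrains a scikit-learn SVM with one injected point and estimates the gradient of the validation hinge loss by finite differences. The toy dataset, labels, step size and iteration count are all illustrative assumptions.

    # A minimal, illustrative poisoning attack on an SVM, in the spirit of
    # Biggio et al. (2012). NOTE: this is a sketch, not the paper's method;
    # the gradient of the validation loss is estimated by finite differences
    # and the SVM is retrained at every step. All data and hyperparameters
    # below are illustrative assumptions.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Toy two-class problem: a clean training set and a held-out validation set.
    X_tr = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
    y_tr = np.array([-1] * 40 + [1] * 40)
    X_val = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
    y_val = np.array([-1] * 40 + [1] * 40)

    def val_loss(x_atk, y_atk=1):
        """Retrain the SVM with one injected point; return validation hinge loss."""
        clf = SVC(kernel="linear", C=1.0)
        clf.fit(np.vstack([X_tr, x_atk]), np.append(y_tr, y_atk))
        margins = y_val * clf.decision_function(X_val)
        return np.maximum(0.0, 1.0 - margins).mean()

    # Gradient ascent on the validation loss with respect to the attack point,
    # using a central finite-difference gradient estimate.
    x_atk = np.array([[0.0, 0.0]])  # initial position of the poison point
    eps, lr = 1e-2, 0.5
    for _ in range(20):
        grad = np.zeros(2)
        for d in range(2):
            step = np.zeros((1, 2))
            step[0, d] = eps
            grad[d] = (val_loss(x_atk + step) - val_loss(x_atk - step)) / (2 * eps)
        x_atk = x_atk + lr * grad  # move the poison point uphill on the loss

    print("attack point:", x_atk.ravel())
    print("validation hinge loss with poison:", val_loss(x_atk))

Swapping kernel="linear" for a non-linear kernel such as "rbf" in this sketch still works, since the finite-difference estimate only ever perturbs the input point; in the paper, the analogous property comes from computing the gradient through the kernel itself, which is what lets the attack be constructed in input space without an explicit feature map.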

You can read the paper in full here.

In a special award session, first author Battista Biggio gave a plenary talk describing the research and subsequent developments. In the early days of this research field (around 2006-2010), work on poisoning attacks focussed on simple models and on security-related applications such as spam filtering and network intrusion detection. The challenge the authors set themselves with this work was to find out whether such attacks were possible against a more complex classifier, something closer to the state of the art. They decided to study SVMs because they were theoretically grounded and quite popular at the time. Their strategy was to find an optimal attack point at which to inject the “poisoned” samples so as to maximise the classification error.

Battista also spoke about some of the work that has followed in this space, including adversarial examples for deep neural networks, machine learning security, and different types of attacks against machine learning models.

You can watch Battista’s original talk on this paper at ICML 2012.

There were also two Test of Time honourable mentions:

  • Building high-level features using large scale unsupervised learning
    Quoc Le, Marc’Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, Andrew Ng
    Read the paper here.
  • On causal and anticausal learning
    Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, Joris Mooij
    Read the paper here.




Lucy Smith is Senior Managing Editor for AIhub.



