
#ICML2022 Test of Time award announced


20 July 2022




The International Conference on Machine Learning (ICML) Test of Time award is given to a paper presented at ICML ten years earlier that has had a significant impact. This year the award goes to:

Poisoning Attacks against Support Vector Machines
Battista Biggio, Blaine Nelson and Pavel Laskov

The paper investigates adversarial machine learning and, specifically, poisoning attacks on support vector machines (SVMs). The awards committee noted that this paper is one of the earliest and most impactful papers on the theme of poisoning attacks, which are now widely studied by the community. The authors use a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. The method can be kernelized, thereby avoiding the need for an explicit feature representation. The committee judged that this paper initiated thorough investigation of the problem and inspired significant subsequent work.

The abstract of the paper:
We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.
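
To give a concrete feel for the idea, below is a minimal, self-contained sketch (in Python, using scikit-learn) of a single-point poisoning attack driven by gradient ascent on the validation error. It is not the authors' method: the paper derives the gradient analytically from the properties of the SVM's optimal solution, whereas this toy version approximates it with finite differences, and the dataset, function names and hyperparameters are all illustrative assumptions.

```python
# Illustrative sketch only: NOT the method from Biggio, Nelson & Laskov (2012).
# The paper computes the gradient analytically from the SVM's optimality conditions;
# here the gradient of the validation error is approximated with finite differences.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def validation_error(X_train, y_train, X_val, y_val):
    """Train an SVM on the (possibly poisoned) training set and return its validation error."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    return 1.0 - clf.score(X_val, y_val)

def poison_point_gradient_ascent(X_train, y_train, X_val, y_val,
                                 poison_label=1, steps=50, lr=0.5, eps=1e-2):
    """Gradient-ascent search for one attack point (with label poison_label)
    that increases the validation error of the retrained SVM."""
    rng = np.random.default_rng(0)
    # Start the attack point from a random sample of the opposite class,
    # and assign it the (flipped) attack label.
    candidates = X_train[y_train != poison_label]
    xp = candidates[rng.integers(0, len(candidates))].copy()

    for _ in range(steps):
        grad = np.zeros_like(xp)
        base = validation_error(np.vstack([X_train, xp]),
                                np.append(y_train, poison_label),
                                X_val, y_val)
        # Finite-difference estimate of d(validation error)/d(attack point).
        for j in range(xp.shape[0]):
            xp_eps = xp.copy()
            xp_eps[j] += eps
            err = validation_error(np.vstack([X_train, xp_eps]),
                                   np.append(y_train, poison_label),
                                   X_val, y_val)
            grad[j] = (err - base) / eps
        xp += lr * grad  # ascend the (non-convex) validation error surface
    return xp

if __name__ == "__main__":
    X, y = make_classification(n_samples=300, n_features=2, n_redundant=0, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

    before = validation_error(X_tr, y_tr, X_val, y_val)
    xp = poison_point_gradient_ascent(X_tr, y_tr, X_val, y_val, poison_label=1)
    after = validation_error(np.vstack([X_tr, xp]), np.append(y_tr, 1), X_val, y_val)
    print(f"validation error before poisoning: {before:.3f}, after: {after:.3f}")
```

Running the script trains an RBF-kernel SVM, injects a single adversarially optimised point with a flipped label, and prints the validation error before and after poisoning.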

You can read the paper in full here.

In a special award session, first author Battista Biggio gave a plenary talk describing the research and subsequent developments. In the early days of this research field (around 2006-2010), work on poisoning attacks focussed on simple models and concerned security-related applications, such as spam filtering and network intrusion detection. The challenge that the authors set themselves with this work was to find out whether such attacks were possible against a more complex classifier, one closer to the state of the art. They decided to study SVMs because they were theoretically grounded and quite popular at the time. Their strategy was to find an optimal attack point at which to inject the “poisoned” samples, chosen to maximise the classification error.

Battista also spoke about some of the work that has followed in this space, including adversarial examples for deep neural networks, machine learning security, and different types of attacks against machine learning models.

You can watch Battista’s original talk on this paper, given at ICML 2012.

There were also two Test of Time honourable mentions:

  • Building high-level features using large scale unsupervised learning
    Quoc Le, Marc’Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, Andrew Ng
    Read the paper here.
  • On causal and anticausal learning
    Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, Joris Mooij
    Read the paper here.




Lucy Smith is Senior Managing Editor for AIhub.