AIhub.org
 

#ICML2022 Test of Time award announced


by Lucy Smith
20 July 2022




The International Conference on Machine Learning (ICML) Test of Time award is given to a paper from ICML ten years ago that has had significant impact. This year the award goes to:

Poisoning Attacks against Support Vector Machines
Battista Biggio, Blaine Nelson and Pavel Laskov

The paper investigates adversarial machine learning and, specifically, poisoning attacks on support vector machines (SVMs). The awards committee noted that it is one of the earliest and most impactful papers on poisoning attacks, a topic now widely studied by the community. The authors use a gradient ascent strategy in which the gradient is computed from properties of the SVM’s optimal solution. The method can be kernelized, so no explicit feature representation is needed. The committee judged that the paper initiated a thorough investigation of the problem and inspired significant subsequent work.

The abstract of the paper:
We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.

You can read the paper in full here.

In a special award session, first author Battista Biggio gave a plenary talk describing the research and subsequent developments. In the early days of the field (around 2006-2010), work on poisoning attacks focussed on simple models and security-related applications such as spam filtering and network intrusion detection. The challenge the authors set themselves was to find out whether these attacks were possible against a more complex classifier, closer to the state of the art. They chose to study SVMs because they were theoretically grounded and popular at the time. Their strategy was to find the optimal attack point at which to inject the “poisoned” samples, i.e. the point that maximised the classification error.
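The paper derives the gradient of the validation error in closed form from the SVM’s optimality conditions; as a toy illustration of the attack idea only (not the authors’ method), the same strategy can be mimicked with a naive finite-difference gradient, retraining a small scikit-learn SVM at each evaluation. The dataset, step sizes, and `val_error` helper below are invented for this sketch:

```python
# Toy poisoning sketch: climb the validation error of a linear SVM by
# moving a single poisoning point with a finite-difference gradient.
# (The paper computes this gradient analytically; here we simply retrain.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def val_error(X_tr, y_tr, xp, yp, X_val, y_val):
    """Validation error after training on the clean data plus one poison point."""
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(np.vstack([X_tr, xp]), np.append(y_tr, yp))
    return 1.0 - clf.score(X_val, y_val)

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

xp = X_tr[0].astype(float).copy()   # start the poison point at a training sample
yp = 1 - y_tr[0]                    # ...with its label flipped

base = val_error(X_tr, y_tr, xp, yp, X_val, y_val)

eps, lr = 0.1, 0.5
for _ in range(20):                 # naive gradient ascent on validation error
    grad = np.zeros_like(xp)
    for i in range(xp.size):
        e = np.zeros_like(xp)
        e[i] = eps
        grad[i] = (val_error(X_tr, y_tr, xp + e, yp, X_val, y_val)
                   - val_error(X_tr, y_tr, xp - e, yp, X_val, y_val)) / (2 * eps)
    xp = xp + lr * grad

poisoned = val_error(X_tr, y_tr, xp, yp, X_val, y_val)
print(f"validation error: start {base:.3f} -> after ascent {poisoned:.3f}")
```

The key practical difference from this sketch is that the paper’s closed-form gradient avoids retraining the SVM at every probe, which is what makes the attack tractable and allows it to be kernelized.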

Battista also spoke about some of the work that has followed in this space, including adversarial examples for deep neural networks, machine learning security, and different types of attacks against machine learning models.

You can watch Battista’s original talk on this paper at ICML 2012.

There were also two Test of Time honourable mentions:

  • Building high-level features using large scale unsupervised learning
    Quoc Le, Marc’Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, Andrew Ng
    Read the paper here.
  • On causal and anticausal learning
    Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, Joris Mooij
    Read the paper here.




Lucy Smith is Senior Managing Editor for AIhub.







©2024 - Association for the Understanding of Artificial Intelligence