Does ‘federated unlearning’ in AI improve data privacy, or create a new cybersecurity risk?


15 May 2026




Image credit: Deborah Lupton / Pop Chips / Licensed under CC-BY 4.0

By Abbas Yazdinejad, University of Regina and Ann Fitz-Gerald, Balsillie School of International Affairs

As the capacity of artificial intelligence (AI) increases at an exponential rate, so do concerns about the privacy of user data.

Increasingly, organizations around the world are adopting something called federated learning, which enables AI training without centralizing sensitive data. This allows hospitals, banks and government agencies to collaborate while keeping data local, an approach widely regarded as a major advance in privacy.

Federated unlearning extends this approach: it promises that a user's data can be removed from an already trained AI system. A hospital, for example, could ask its AI system to forget a specific patient's data.

In the European Union, this is defined as the “right to be forgotten.” Similar data deletion rights exist globally, though with different legal strengths and technical interpretations.

Federated learning allows hospitals to train powerful AI models without sharing patient data, solving privacy barriers that limit medical AI innovation. #machinelearninginhealthcare


— HackerNoon (@hackernoon.com) 16 March 2026 at 15:01

But what if the request to forget is not itself trustworthy? Our research shows that while federated unlearning appears to be a natural extension of data rights, it also introduces new hidden security risks that undermine trust in our digital world.

New stealth vulnerabilities

In federated learning, participants train local models on their own data, then send updates for those models to a central server. The server aggregates these updates into a single, shared model, allowing that model to benefit from both the scale and scope of data that no single participant holds. Federated unlearning adds a removal step on top of this pipeline.
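To make the mechanics concrete, here is a minimal sketch of the aggregation step, often called federated averaging. The participants, update vectors and weighting below are illustrative assumptions, not a description of any particular deployment:

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Aggregate local model updates into one shared update.

    client_updates: list of model-update vectors, one per participant
    client_sizes: local training-set sizes, used to weight contributions
    """
    total = sum(client_sizes)
    # Weighted average: participants with more local data count for more
    return sum((n / total) * u for n, u in zip(client_sizes, client_updates))

# Three hypothetical participants (e.g. hospitals) send only these update
# vectors; their raw patient records never leave the local site.
updates = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
sizes = [1000, 500, 1500]
print(federated_average(updates, sizes))
```

The key point is that only the update vectors travel over the network; the training data itself stays with each participant.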

Researchers already know that these federated systems are vulnerable to data poisoning attacks, in which attackers bias their local training data in order to alter the shared model's performance.

Poisoning attacks can create stealth vulnerabilities, also known as “backdoors,” that only activate under specific conditions.
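As a hedged illustration of the idea (the scaling factor and "backdoor direction" below are purely hypothetical), a malicious participant might craft the update it submits like this:

```python
import numpy as np

def malicious_update(honest_update, backdoor_direction, scale=10.0):
    """Sketch of a model-poisoning update (illustrative, not a recipe).

    The attacker adds a hidden component that changes model behaviour only
    on inputs carrying a trigger pattern, then scales it so it survives
    being averaged with many honest updates.
    """
    return honest_update + scale * backdoor_direction

# Averaged with 99 honest (here, zero) updates, the scaled component
# still leaves a clear imprint on the shared model.
honest = [np.zeros(4) for _ in range(99)]
poisoned = malicious_update(np.zeros(4), np.array([0.0, 1.0, 0.0, 0.0]))
print(sum(honest + [poisoned]) / 100)  # a hidden bias survives averaging
```

Because the model still performs well on ordinary inputs, the backdoor can go unnoticed by standard accuracy checks.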

Federated unlearning introduces a new and subtle dimension to this threat.

An attacker could first inject harmful patterns into the model. Later, they could submit a request to remove their data. If the unlearning process is imperfect — as many current methods are — the visible traces of the attack may disappear, while the hidden effects remain.

A new security blind spot

This issue creates a new kind of cross-sectoral national security vulnerability that is easy to overlook.

In one hypothetical scenario, repeated unlearning requests could gradually degrade a model’s performance — a slow, hard-to-detect disruption. Unlike traditional cyberattacks, this would not cause the immediate failure of a model, but would erode its reliability over time.

In another case, carefully timed data removal could bias outcomes. A financial risk model, for instance, could be subtly shifted by removing certain data contributions at key moments.

These risks are amplified by the very nature of federated systems. Because data remains distributed, there is often limited visibility into how individual contributions affect the final model.

What emerges is a security blind spot — a mechanism designed to enhance privacy that may also weaken system integrity.

Why current solutions fall short

Many federated unlearning techniques are designed with efficiency in mind. Instead of retraining a model from scratch — which can be costly — the techniques attempt to approximate the removal of data influence. While practical, this approach has limits.
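One common family of approximate methods, sketched below under strong simplifying assumptions (a fixed gradient and a hand-picked step size), nudges the model in the direction that increases the loss on the data being forgotten, rather than retraining:

```python
import numpy as np

def approximate_unlearn(weights, forget_gradient, step=0.1, rounds=5):
    """Approximate unlearning by gradient ascent on the forget-set loss.

    Rather than retraining from scratch, take a few steps that increase the
    loss on the data being removed, weakening its influence. The removal is
    approximate: influence entangled with the rest of the training data,
    including any hidden backdoor, can survive these updates.
    """
    for _ in range(rounds):
        weights = weights + step * forget_gradient
    return weights

# Illustrative numbers only: current weights and the gradient of the loss
# evaluated on the data a user has asked the system to forget.
w = np.array([0.5, -0.2, 0.8])
g_forget = np.array([0.05, 0.0, -0.03])
print(approximate_unlearn(w, g_forget))
```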

Emerging evidence shows that machine learning models can retain complex patterns even after attempts to remove data and, in adversarial settings, harmful effects may persist even after “unlearning.”

At the same time, there are few safeguards to verify whether an unlearning request itself is legitimate. This gap is not only technical, but also structural, and can lead to multiple security vulnerabilities.

www.policyalternatives.ca/news-researc… 'Though federal policymakers have developed many non-binding frameworks around AI, Canada lacks binding AI regulation, leaving Canadians without proper protections against AI harms to privacy and human rights.' @policyalternatives.ca


— Erika Shaker (@ershaker.bsky.social) 12 February 2026 at 14:44

Unlearning is a security problem

Federated unlearning is often framed as a privacy feature. This framing is incomplete. In practice, removing data from a model changes its behaviour — sometimes in unpredictable ways. This makes unlearning a security-sensitive operation, and not just a data management tool.

Like other critical system actions, federated unlearning should be subject to verification, auditing and monitoring. These additional safeguards could include the following (a minimal sketch follows the list):

  • Validating the origin of unlearning requests.
  • Tracking how model behaviour changes after data removal.
  • Detecting repeat or suspicious requests.
  • Designing methods that ensure complete removal of harmful influence.
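Here is a sketch of what such safeguards might look like in practice. Every name, threshold and check is an illustrative assumption rather than an existing standard or library:

```python
import hmac
import time

class UnlearningAuditor:
    """Illustrative safeguards around unlearning requests (hypothetical)."""

    def __init__(self, secret_key, max_requests=3, drift_threshold=0.05):
        self.secret_key = secret_key           # shared key for request signing
        self.request_log = {}                  # client_id -> request timestamps
        self.max_requests = max_requests
        self.drift_threshold = drift_threshold

    def validate_request(self, client_id, payload, signature):
        """Validate the origin of an unlearning request and rate-limit it."""
        expected = hmac.new(self.secret_key, payload, "sha256").hexdigest()
        if not hmac.compare_digest(expected, signature):
            return False, "origin could not be verified"
        history = self.request_log.setdefault(client_id, [])
        history.append(time.time())
        if len(history) > self.max_requests:
            return False, "suspicious volume of removal requests"
        return True, "ok"

    def check_drift(self, accuracy_before, accuracy_after):
        """Track how model behaviour changes after a removal is applied."""
        if accuracy_before - accuracy_after > self.drift_threshold:
            return "alert: degradation exceeds expected tolerance"
        return "within tolerance"

# Hypothetical usage: a clinic signs its request with the shared key.
auditor = UnlearningAuditor(secret_key=b"shared-secret")
payload = b'{"forget_record": 123}'
sig = hmac.new(b"shared-secret", payload, "sha256").hexdigest()
print(auditor.validate_request("clinic-17", payload, sig))  # (True, 'ok')
print(auditor.check_drift(accuracy_before=0.91, accuracy_after=0.82))
```

None of this is standardized today; the point is that data-removal requests deserve the same provenance checks and monitoring discipline as any other privileged system operation.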

A critical moment for AI governance

AI systems are increasingly used in decisions affecting people’s lives — from medical diagnoses to financial approvals. Here, privacy and reliability both matter.

Federated unlearning sits at this intersection. It aims to protect data rights, but may introduce risks that are not yet widely understood. If these risks are ignored, systems designed to enhance trust could end up undermining it.

Canada is at an important juncture in shaping how AI systems are governed. Policies around data deletion, accountability and transparency are evolving rapidly.

Federated unlearning will likely become part of this landscape. As it’s adopted, it must be treated with the same level of scrutiny as other security-critical mechanisms.

The challenge is no longer just to make AI forget data. It is to ensure that, in the process of forgetting, we are not allowing something more dangerous to remain.

Abbas Yazdinejad, Assistant Professor, Department of Computer Science, University of Regina and Ann Fitz-Gerald, Director and Professor, International Security, Wilfrid Laurier University, Balsillie School of International Affairs

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.
