AIhub.org
 

Your right to be forgotten in the age of AI


22 September 2023




By Alice Trend, David Zhang and Thierry Rakotoarivelo

Earlier this year, ChatGPT was briefly banned in Italy due to a suspected privacy breach. To help overturn the ban, the chatbot’s parent company, OpenAI, committed to providing a way for citizens to object to the use of their personal data to train artificial intelligence (AI) models.

The right to be forgotten (RTBF) plays an important role in the online privacy rights of citizens in some countries. It gives individuals the right to ask technology companies to delete their personal data. It was established via a landmark case involving search engines in the European Union (EU) in 2014.

But once a citizen objects to the use of their personal data in AI training, what happens next? It turns out, it’s not that simple.

CSIRO cybersecurity researcher Thierry Rakotoarivelo is a co-author of a recent paper on machine unlearning. He explains that applying RTBF to large language models (LLMs) like ChatGPT is much harder than applying it to search engines.

“If a citizen requests that their personal data be removed from a search engine, relevant web pages can be delisted and removed from search results,” Thierry said.

“For LLMs, it’s more complex, as they don’t have the ability to store specific personal data or documents, and they can’t retrieve or forget specific pieces of information on command.”

So, how do LLMs work?

LLMs generate responses based on patterns they learned from a large dataset during their training process.

“They don’t search the internet or index websites to find answers. Instead, they predict the next word in a response based on the context, patterns and relationships of words provided by the query,” Thierry said.

Another CSIRO cybersecurity researcher, David Zhang, is the first author of Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions. He has a great analogy comparing LLMs to the way humans draw on what they have previously learned when generating speech.

“Just as Australians can predict that after ‘Aussie, Aussie, Aussie’ comes ‘oi, oi, oi’ based on training data from international sports matches, so too do LLMs use their training data to predict what to say next,” David said.

“Their goal is to generate human-like text that is relevant to the question and makes sense. In this way, an LLM is more like a text generator than a search engine. Its responses are not retrieved from a searchable database, but rather generated based on its learned knowledge.”
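David's chant analogy can be made concrete with a toy next-word predictor. The sketch below is purely illustrative (it is not how any real LLM is implemented, and the "training set" is made up): it counts which word follows each three-word context in some text, then completes a new context with the most frequent continuation, which is the same pattern-completion idea at a vastly smaller scale.

```python
from collections import Counter, defaultdict

# Toy "training data": transcripts of the chant, repeated.
training_tokens = ("aussie aussie aussie oi oi oi " * 4).split()

# Count which word follows each three-word context.
counts = defaultdict(Counter)
for i in range(len(training_tokens) - 3):
    context = tuple(training_tokens[i:i + 3])
    counts[context][training_tokens[i + 3]] += 1

def predict_next(*context):
    """Return the word most often seen after this context in training."""
    return counts[context].most_common(1)[0][0]

print(predict_next("aussie", "aussie", "aussie"))  # -> "oi"
```

Nothing is looked up in a database at prediction time: the counts learned from the training text are all the "model" has, just as an LLM's weights are all it has once training ends.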

Is this why LLMs hallucinate?

When an LLM outputs incorrect answers to prompts, it is said to be “hallucinating”. However, David says hallucination is how LLMs do everything.

“Hallucination is not a bug of Large Language Models, but rather a feature based on their design,” David said.

“They also don’t have access to real-time data or updates post their training cut-off, which can lead to generating outdated or incorrect information.”

How can we make LLMs forget?

Machine unlearning is the current front-runner approach for enabling LLMs to forget training data, but it’s complex. So complex, in fact, that Google has issued a challenge to researchers worldwide to advance a solution.

One approach to machine unlearning removes exact data points from the model through accelerated retraining of specific parts of the model. This avoids having to retrain the entire model, which is costly and slow. But first you need to find which parts of the model need to be retrained, and this segmented approach could raise fairness issues by removing potentially important data points.

Other approaches include approximate methods, along with ways to verify and erase data and to guard against data degradation and adversarial attacks on algorithms. David and his colleagues also suggest several band-aid approaches, including model editing: making quick fixes to the model while a better fix is developed, or while a new model with a modified dataset is trained.

In their paper, the researchers use clever prompting to get a model to forget a famous scandal, by reminding it that the information is subject to a right to be forgotten request.
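The exact prompts are in the paper; as a generic sketch of the wrapper idea (all names and strings below are hypothetical), an application can prepend an instruction flagging RTBF-covered subjects before a query reaches the model, and redact any flagged names that slip through anyway:

```python
# Hypothetical names covered by right-to-be-forgotten requests.
RTBF_FLAGGED = {"Jane Citizen"}

def build_prompt(user_query):
    """Prepend an instruction flagging RTBF-covered subjects."""
    instruction = (
        "The following subjects are covered by right-to-be-forgotten "
        "requests and must not be discussed: "
        + ", ".join(sorted(RTBF_FLAGGED)) + ".\n"
    )
    return instruction + user_query

def redact(model_output):
    """Post-filter: mask any flagged names the model emits anyway."""
    for name in RTBF_FLAGGED:
        model_output = model_output.replace(name, "[removed]")
    return model_output

print(build_prompt("Tell me about the 2019 scandal."))
print(redact("Jane Citizen was involved in the scandal."))
```

Such wrappers are a band-aid in the sense the article describes: the information is still inside the model's weights, it is merely suppressed on the way in and the way out.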

The case to remember and learn from mistakes

The data privacy concerns that continue to create issues for LLMs might have been avoided if responsible AI development practices had been embedded throughout the lifecycle of the tool.

Most well-known LLMs on the market are “black boxes”. In other words, their inner workings and how they arrive at outputs or decisions are inaccessible to users. Explainable AI, by contrast, describes models whose decision-making processes can be traced and understood by humans.

When used well, explainable AI and responsible AI techniques can provide insight into the root cause of any issues in a model – because each step is explainable – making problems easier to find and remove. By applying these and other AI ethics principles when developing new technology, we can help assess, investigate and alleviate such problems.
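One concrete contrast with a black box: in a simple linear model, the prediction decomposes into one contribution per feature, so the decision process can be inspected directly. A minimal sketch, with made-up weights and feature values:

```python
# A linear model as a simple example of explainable AI: the output is
# a sum of per-feature contributions, each of which can be read off.
# All numbers here are invented for illustration.
weights = {"age": 0.2, "income": 0.5, "tenure": -0.1}
example = {"age": 30, "income": 40, "tenure": 5}

contributions = {f: weights[f] * example[f] for f in weights}
prediction = sum(contributions.values())

# Each feature's share of the decision is visible -- the opposite of
# a black box, where only the final number comes out.
for feature, value in contributions.items():
    print(f"{feature}: {value:+.1f}")
print(f"prediction: {prediction:.1f}")
```

With this kind of traceability, a problematic output can be followed back to the feature (and ultimately the data) responsible for it, which is exactly the property black-box LLMs lack.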

Read the research in full

Learn to Unlearn: A Survey on Machine Unlearning, Youyang Qu, Xin Yuan, Ming Ding, Wei Ni, Thierry Rakotoarivelo, David Smith.




CSIRO

©2026.02 - Association for the Understanding of Artificial Intelligence