New AI technique sounding out audio deepfakes


21 November 2025




Researchers from Australia’s national science agency CSIRO, Federation University Australia and RMIT University have developed a method to improve the detection of audio deepfakes.

The new technique, Rehearsal with Auxiliary-Informed Sampling (RAIS), is designed for audio deepfake detection — a growing cybercrime threat used to bypass voice-based biometric authentication systems, impersonate people and spread disinformation. It determines whether an audio clip is real or artificially generated (a ‘deepfake’) and maintains performance over time as attack types evolve.

In Italy earlier this year, scammers used an AI-cloned voice of the country’s Defence Minister to request a €1M ‘ransom’ from prominent business leaders, convincing some to pay. This is just one of many examples highlighting the need for audio deepfake detectors.

As deepfake audio technology advances rapidly, newer generation techniques often bear little resemblance to older ones.

“We want these detection systems to learn the new deepfakes without having to train the model again from scratch. If you just fine-tune on the new samples, it will cause the model to forget the older deepfakes it knew before,” said joint author, Dr Kristen Moore from CSIRO’s Data61.

“RAIS solves this by automatically selecting and storing a small, but diverse set of past examples, including hidden audio traits that humans may not even notice, to help the AI learn the new deepfake styles without forgetting the old ones,” explained Dr Moore.

RAIS uses a smart selection process powered by a network that generates ‘auxiliary labels’ for each audio sample. These labels help identify a diverse and representative set of audio samples to retain and rehearse. By incorporating extra labels beyond simple ‘fake’ or ‘real’ tags, RAIS ensures a richer mix of training data, improving its ability to remember and adapt over time.
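The selection step described above can be sketched as a greedy, diversity-aware sampler over (real/fake, auxiliary) groups. This is an illustrative sketch only, not the authors’ implementation: the tuple format and the `select_rehearsal_samples` helper are assumptions, and the auxiliary label here is a plain string standing in for the label that RAIS’s auxiliary network generates for each clip.

```python
from collections import Counter

def select_rehearsal_samples(samples, budget):
    """Greedily fill a rehearsal buffer with a diverse mix of samples.

    Each sample is a (clip_id, label, aux_label) tuple, where `label`
    is 'real' or 'fake' and `aux_label` stands in for the auxiliary
    label RAIS derives from hidden audio traits.  At each step we keep
    the sample whose (label, aux_label) group is least represented in
    the buffer, so the retained set stays balanced across groups.
    """
    pool = list(samples)
    kept = []
    counts = Counter()  # how many buffered samples per (label, aux) group
    for _ in range(min(budget, len(pool))):
        # Pick the candidate from the least-represented group so far.
        best = min(pool, key=lambda s: counts[(s[1], s[2])])
        pool.remove(best)
        kept.append(best)
        counts[(best[1], best[2])] += 1
    return kept

# Hypothetical clips: three fakes from two synthesis styles, two real recordings.
samples = [("a", "fake", "tts"), ("b", "fake", "tts"),
           ("c", "fake", "vc"), ("d", "real", "clean"),
           ("e", "real", "noisy")]
buffer = select_rehearsal_samples(samples, budget=4)
```

With a budget of four, the sketch keeps one clip from each of the four (label, auxiliary) groups rather than two near-duplicate TTS fakes — the same intuition as rehearsing a richer mix of training data.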

Outperforming comparable methods, RAIS achieves the lowest average error rate, 1.95 per cent, across a sequence of five continual-learning ‘experiences’ (successive batches of new attack types). The code, available on GitHub, remains effective with a small memory buffer and is designed to maintain accuracy as attacks grow more sophisticated.

“Audio deepfakes are evolving rapidly, and traditional detection methods can’t keep up,” said Falih Gozi Febrinanto, a recent PhD graduate of Federation University Australia.

“RAIS helps the model retain what it has learned and adapt to new attacks. Overall, it reduces the risk of forgetting and enhances its ability to detect deepfakes.”

“Our approach not only boosts detection performance, but also makes continual learning practical for real-world applications. By capturing the full diversity of audio signals, RAIS sets a new standard for efficiency and reliability,” said Dr Moore.

Read and download the full paper: Rehearsal with Auxiliary-Informed Sampling for Audio Deepfake Detection.








