AIhub.org
 

Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking’ and does it work?


16 June 2025




Image: a pixelated street overlaid with intersecting white and blue shapes arranged to resemble QR code symbols. Elise Racine & The Bigger Picture / Web of Influence II / Licensed under CC BY 4.0. Note: only the third panel from the original image has been used here.

By T.J. Thomson, RMIT University; Elif Buse Doyuran, Queensland University of Technology, and Jean Burgess, Queensland University of Technology

Last month, Google announced SynthID Detector, a new tool to detect AI-generated content. Google claims it can identify AI-generated content in text, image, video or audio.

But there are some caveats. One of them is that the tool is currently only available to “early testers” through a waitlist.

The main catch is that SynthID primarily works for content that’s been generated using a Google AI service – such as Gemini for text, Veo for video, Imagen for images, or Lyria for audio.

If you try to use Google’s AI detector tool to see if something you’ve generated using ChatGPT is flagged, it won’t work.

That’s because, strictly speaking, the tool can’t detect the presence of AI-generated content or distinguish it from other kinds of content. Instead, it detects the presence of a “watermark” that Google’s AI products (and a couple of others) embed in their output through the use of SynthID.

A watermark is a special machine-readable element embedded in an image, video, sound or text. Digital watermarks ensure that information about the origins or authorship of content travels with it; they have been used to assert authorship in creative works and to address misinformation challenges in the media.

SynthID embeds watermarks in the output from AI models. The watermarks are not visible to readers or audiences, but can be used by other tools to identify content that was made or edited using an AI model with SynthID on board.
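Google has not published SynthID’s full algorithm, but statistical text watermarks of this broad family can be illustrated with a toy sketch: during generation, a secret key pseudo-randomly splits the vocabulary into “green” and “red” words and the generator slightly favours green ones; a detector later checks whether a suspect text contains more green words than chance would allow. Everything below (the key, the word-level split, the fixed threshold) is an illustrative assumption, not Google’s implementation.

```python
import hashlib

SECRET_KEY = "demo-key"  # illustrative only; a real system keeps its key private


def is_green(word: str) -> bool:
    """Pseudo-randomly assign each word to a 'green' or 'red' list using the key."""
    digest = hashlib.sha256((SECRET_KEY + word.lower()).encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words end up 'green'


def green_fraction(text: str) -> float:
    """Fraction of words in the text that fall on the green list."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)


def looks_watermarked(text: str, threshold: float = 0.62) -> bool:
    """Unwatermarked text should hover near 0.5 green words; text from a
    generator that preferred green words should sit well above that."""
    return green_fraction(text) > threshold
```

Real schemes operate on model tokens inside the sampling loop, use proper statistical tests rather than a fixed cut-off, and are designed to survive light editing and paraphrasing.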

SynthID is among the latest of many such efforts. But how effective are they?

There’s no unified AI detection system

Several AI companies, including Meta, have developed their own watermarking tools and detectors, similar to SynthID. But these are “model specific” solutions, not universal ones.

This means users have to juggle multiple tools to verify content. Despite researchers calling for a unified system, and major players like Google seeking to have their tool adopted by others, the landscape remains fragmented.

A parallel effort focuses on metadata – encoded information about the origin, authorship and edit history of media. For example, the Content Credentials inspect tool allows users to verify media by checking the edit history attached to the content.

However, metadata can be easily stripped when content is uploaded to social media or converted into a different file format. This is particularly problematic if someone has deliberately tried to obscure the origin and authorship of a piece of content.
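That fragility is easy to demonstrate. The minimal sketch below, which assumes the Pillow imaging library and placeholder file names, opens a photo, re-encodes it the way a platform or format converter might, and shows that the EXIF metadata (where provenance details often live) does not survive the round trip.

```python
from PIL import Image  # Pillow imaging library (pip install Pillow)

# Placeholder file names for illustration.
original = Image.open("original_photo.jpg")
print("Metadata tags before conversion:", len(original.getexif()))

# Re-encode the pixels without explicitly carrying the metadata across,
# roughly what happens when a platform transcodes an upload.
original.save("converted_photo.png")

converted = Image.open("converted_photo.png")
print("Metadata tags after conversion:", len(converted.getexif()))  # typically 0
```

Cryptographically signed Content Credentials are harder to forge than plain EXIF fields, but they too are lost if a platform strips or re-encodes the file without preserving the attached manifest.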

There are detectors that rely on forensic cues, such as visual inconsistencies or lighting anomalies. While some of these tools are automated, many depend on human judgement and common-sense methods, such as counting the number of fingers in AI-generated images. These methods may become redundant as AI model performance improves.

An AI-generated image shows a woman waving with a six-fingered hand. Logical inconsistencies, such as extra fingers, are some of the visual ‘tells’ of the current era of AI-generated imagery. T.J. Thomson, CC BY-NC.

How effective are AI detection tools?

Overall, AI detection tools can vary dramatically in their effectiveness. Some work better when the content is entirely AI-generated, such as when an entire essay has been generated from scratch by a chatbot.

The situation becomes murkier when AI is used to edit or transform human-created content. In such cases, AI detectors can get it badly wrong. They can fail to detect AI or flag human-created content as AI-generated.

AI detection tools often don’t explain how they arrived at their decision, which adds to the confusion. When used for plagiarism detection in university assessment, they are considered an “ethical minefield” and are known to discriminate against non-native English speakers.

Where AI detection tools can help

A wide variety of use cases exist for AI detection tools. Take insurance claims, for example. Knowing whether an image a client shares actually depicts what it claims to show can help insurers decide how to respond.

Journalists and fact checkers might draw on AI detectors, in addition to their other approaches, when trying to decide if potentially newsworthy information ought to be shared further.

Employers and job applicants alike increasingly need to assess whether the person on the other side of the recruiting process is genuine or an AI fake.

Users of dating apps need to know whether the profile of the person they’ve met online represents a real romantic prospect, or an AI avatar, perhaps fronting a romance scam.

If you’re an emergency responder deciding whether to send help to a call, confidently knowing whether the caller is human or AI can save resources and lives.

Where to from here?

As these examples show, the challenges of authenticity are now happening in real time, and static tools like watermarking are unlikely to be enough. AI detectors that work on audio and video in real time are a pressing area of development.

Whatever the scenario, it is unlikely that judgements about authenticity can ever be fully delegated to a single tool.

Understanding the way such tools work, including their limitations, is an important first step. Triangulating these with other information and your own contextual knowledge will remain essential.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Elif Buse Doyuran, Postdoctoral Research Fellow, Queensland University of Technology, and Jean Burgess, Distinguished Professor of Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.









 

©2025.05 - Association for the Understanding of Artificial Intelligence