SynthID Solves Google's Problem, Not Yours
Invisible watermarks embedded in images are one of the smartest ideas in the authenticity space. But they have a fundamental flaw that no amount of technical refinement can fix.
What Is Micromarking?
Micromarking embeds invisible signatures into images that are undetectable to humans but verifiable by tools. Two approaches are emerging as serious contenders.
Google's SynthID embeds watermarks during generation itself, baking the signature directly into pixel values as the image is created. The mark confirms: "Gemini made this."
Wacom's Yuify applies provenance data after an image is created, embedding it into the image's frequency layers using wavelet-based watermarking. Combined with C2PA metadata and blockchain verification, artists can retroactively prove ownership. Scan an image through Yuify and trace it back to its original creator.
Both are technically impressive. Both survive recompression, cropping, and casual manipulation surprisingly well. Research is still emerging, but early signs suggest these watermarks are genuinely difficult to strip without destroying image quality.
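To see why this resilience is plausible, consider the classic spread-spectrum idea that schemes like these build on: add a faint, key-seeded noise pattern to the pixels, then detect it later by correlation. The sketch below is a deliberately simplified, non-blind version (the detector has the original image); SynthID's and Yuify's actual algorithms are blind, far more sophisticated, and not public. All names and parameters here are illustrative.

```python
import numpy as np

# Non-blind spread-spectrum watermark sketch (hypothetical; not the actual
# SynthID or Yuify algorithm). A secret key seeds a +/-1 pattern that is
# added faintly to the image and later recovered by correlation.
rng = np.random.default_rng(0)
H, W = 256, 256
image = rng.integers(0, 256, size=(H, W)).astype(float)

key = np.random.default_rng(7)           # the secret key seeds the pattern
pattern = key.choice([-1.0, 1.0], size=(H, W))
alpha = 2.0                              # embedding strength (invisible at this level)
marked = image + alpha * pattern         # real embedders would also clip to [0, 255]

def detect(img, original, pat):
    # Correlate the residual with the keyed pattern: the score is ~alpha
    # when the mark is present and ~0 when it is not. Slicing by the
    # received image's shape lets detection tolerate cropping.
    region = np.s_[:img.shape[0], :img.shape[1]]
    score = np.mean((img - original[region]) * pat[region])
    return score > alpha / 2

noisy = marked + rng.normal(0, 5, size=(H, W))  # crude stand-in for recompression noise
cropped = noisy[:128, :128]                     # keep only a quarter of the image
```

Even after the simulated recompression and a 75% crop, `detect` still fires, because the correlation statistic averages over every surviving pixel; stripping the mark would require distorting the image enough to drown that average out.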
The Technical Achievement
What makes micromarking impressive is its resilience.
Strip the C2PA metadata? The micromark remains. Crop the image down to a fraction of the original? Still detectable. Recompress, resize, screenshot? The signature persists.
This solves a real problem. Metadata is trivially removed. Micromarks aren't.
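The asymmetry is easy to demonstrate at the file-format level. The sketch below builds a minimal PNG carrying a text chunk as a stand-in for provenance metadata (real C2PA manifests live in their own containers, but the principle is the same), then "launders" it by re-serialising only the chunks needed to render pixels. The payload string and helper names are illustrative.

```python
import struct
import zlib

def chunk(ctype, data):
    # PNG chunk layout: 4-byte big-endian length, type, data, CRC over type+data
    c = ctype + data
    return struct.pack(">I", len(data)) + c + struct.pack(">I", zlib.crc32(c))

# A minimal 1x1 grayscale PNG with a tEXt chunk standing in for a provenance manifest.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = chunk(b"IDAT", zlib.compress(b"\x00\x80"))  # filter byte + one pixel
text = chunk(b"tEXt", b"Comment\x00provenance-manifest-here")
png = b"\x89PNG\r\n\x1a\n" + ihdr + text + idat + chunk(b"IEND", b"")

def strip_metadata(data):
    """Keep only the chunks needed to render pixels; drop everything else."""
    out, pos = [data[:8]], 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length
        if ctype in (b"IHDR", b"PLTE", b"IDAT", b"IEND"):
            out.append(data[pos:end])
        pos = end
    return b"".join(out)

clean = strip_metadata(png)
```

After `strip_metadata`, the image renders identically but the manifest is gone, with no cryptography broken and no pixels touched. A micromark survives exactly this operation, because it lives in the pixel values themselves.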
The Collective Action Failure
Here's where Google's approach falls apart.
SynthID only marks images from Gemini. It tells you nothing about content from Flux, Midjourney, Stable Diffusion, DALL-E, or the growing ecosystem of open-source models.
For micromarking to work as an AI detection solution, every AI provider would need to participate. That's not happening.
Bad actors won't use models that mark their output. If you're generating disinformation, you'll use an unmarked model like Flux, a local Stable Diffusion instance, or any of dozens of uncensored alternatives that leave no signature at all.
So what does SynthID actually accomplish? It lets Google confirm when their AI was used. It doesn't help anyone detect AI fakes from other sources.
The Gatekeeper Problem
Making matters worse: SynthID's detector isn't open source. Only Google can verify their own watermark. For journalists, fact-checkers, and verification teams, this creates an uncomfortable dependency: trust Google's word, or get no answer.
Yuify offers partner API access, which is more open but still gated. Independent verification at scale requires partnership agreements. If micromarking is going to fulfil its promise, detectors need to be accessible to those doing the verifying.
Attribution vs Verification
This reveals a deeper issue: attribution and verification are different problems.
- SynthID: "Did Gemini generate this?"
- Yuify: "Which artist created this, and how do I contact them?"
These are valuable answers. But they're not the same as: "Is this real or AI-generated?"
Yuify solves a genuine problem for artists: proving ownership and enabling licensing. That's useful regardless of what other tools exist.
SynthID, marketed as a solution for AI detection, only works if you assume all AI content comes from participating providers. It doesn't. The compliant ecosystem and the uncensored ecosystem are improving at roughly equal rates.
What's Still Needed
Micromarking is a genuine step forward for attribution. Knowing that a specific artist created a work has real value for rights management and provenance tracking. Yuify demonstrates this well.
But verification, answering "is this a real photograph?", requires analysis that works regardless of which model created the image, or whether the creator wanted to be traced.
Google's SynthID is a partial solution being marketed as a complete one.
