Can You Trust What You See Online?
Let me ask you something. Last week, did you stop scrolling at a video and think, even for just a second, “wait, is this real?”
If you did, you’re not alone. And honestly, that moment of hesitation? That’s the whole problem.
We’re not talking about obviously fake content anymore. We’re talking about synthetic media so convincing that trained journalists, security researchers, and forensic experts are getting fooled. A 2025 study found that 59% of people openly admit they can no longer reliably tell the difference between human-created and AI-generated content. And deepfake incidents didn’t just grow last year; they exploded, from roughly 500,000 documented cases in 2023 to over 8 million by the end of 2025. That’s a sixteenfold jump in two years.
First, Let's Talk About Digital Provenance Because Most People Get It Wrong
People hear “digital provenance” and immediately think it sounds like a tech term they don’t need to care about. It’s not. It’s actually one of the most practical concepts in media and content right now.
At its core, digital provenance is just the verifiable history of a piece of content. Where it came from, who made it, what tools were used, and whether anything was changed after the fact. Think of it like a paper trail, except the paper can’t be forged.
Gartner named digital provenance one of its Top 10 Strategic Technology Trends for 2026. That’s not marketing speak; it’s a sign that enterprise risk teams, regulators, and boardrooms are taking this seriously. Organizations that don’t invest in provenance infrastructure by 2029, Gartner estimates, could face regulatory penalties in the billions.
So it matters. A lot. But here’s the thing: most content creators, journalists, and even marketing teams are still treating it like someone else’s problem.
The building blocks of digital provenance
Without getting too technical, here’s how it actually works under the hood:
Cryptographic hashing gives every piece of content a unique digital fingerprint at the moment it’s created. Change even one pixel of an image, and the fingerprint changes. It’s instant, it’s automatic, and it’s essentially impossible to fake.
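If you want to see that in action, here’s a minimal Python sketch using the standard library’s hashlib; the byte strings are hypothetical stand-ins for real media files:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest: the content's digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"raw image bytes ..."
altered = b"raw image bytes .,."  # a single byte changed

print(fingerprint(original))                           # 64 hex characters
print(fingerprint(altered))                            # a completely different digest
print(fingerprint(original) == fingerprint(altered))   # False
```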
Digital signatures work like a notary stamp: they confirm who created the content and when, tied to a verified identity.
Metadata embedding is where the story gets told. Every tool used, every AI model involved, every edit applied gets logged inside the file itself.
Audit trails stack all of that into a time-stamped record that anyone can independently verify later.
Blockchain attestation takes it a step further, anchoring those records in a decentralized system where no single company or government can quietly alter the history.
Put it all together and you’ve got a system that can answer three questions about any piece of digital content:
- Where did this come from?
- Has it been changed?
- Should I trust the source?
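Here’s a compact sketch of how those pieces could fit together, in Python with the third-party cryptography package. The creator identity, tool name, and manifest layout are invented for illustration; real C2PA manifests use a binary container format and X.509 certificate chains, but the verification logic follows the same shape:

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()  # stands in for a verified identity

content = b"...final image bytes..."
manifest = {
    "creator": "jane@example-newsroom.org",  # hypothetical creator identity
    "tool": "ExampleEditor 4.2",             # hypothetical editing tool
    "actions": [                             # the time-stamped audit trail
        {"action": "captured", "at": "2026-01-15T09:12:00Z"},
        {"action": "cropped", "at": "2026-01-15T09:25:00Z"},
    ],
    "content_hash": hashlib.sha256(content).hexdigest(),
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
signature = creator_key.sign(manifest_bytes)

def check(content, manifest_bytes, signature, public_key):
    """Answer the three provenance questions for one file."""
    try:
        public_key.verify(signature, manifest_bytes)       # should I trust the source?
    except InvalidSignature:
        return "manifest altered or signer unknown"
    record = json.loads(manifest_bytes)
    if hashlib.sha256(content).hexdigest() != record["content_hash"]:
        return "content changed after signing"             # has it been changed?
    return f"intact; created by {record['creator']}"       # where did it come from?

print(check(content, manifest_bytes, signature, creator_key.public_key()))
```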
The Deepfake Problem Is Already Bigger Than Most People Realize
Okay, let’s talk about deepfakes properly. Because “deepfake” has become one of those words that gets thrown around so casually that people have stopped registering how serious the underlying problem actually is.
Deepfakes are AI-generated or AI-manipulated media (videos, images, audio clips) that make real people appear to say or do things they never said or did. They’re built using deep learning techniques, mostly Generative Adversarial Networks (GANs) and, increasingly, diffusion-based models that are even harder to detect. What used to require a Hollywood visual effects budget can now be done on a laptop in under an hour.
And the damage is real.
A CFO’s voice gets cloned by fraudsters to authorize a wire transfer. Gone: millions of dollars, in minutes. A politician appears on video making a statement they never made and by the time it’s debunked, the clip has been shared 400,000 times. A journalist’s face gets attached to fabricated content. A brand’s CEO “confesses” to wrongdoing that never happened. None of this is hypothetical. These are documented incidents from the last 18 months.
The deepfake detection market is responding fast: it was valued at $5.5 billion in 2023 and is on track to hit $15.7 billion by 2026. That’s 42% annual growth. Not bad for a market that barely existed five years ago.
But here’s the uncomfortable truth: detection alone is a losing battle. Generative models improve continuously. Every better detector creates training data for a better generator. It’s an arms race, and the offense almost always has the advantage over the defense.
That’s why the smartest people working on this problem have shifted their focus. Instead of only trying to catch fakes after they’re out in the world, they’re trying to certify what’s real at the source.
That’s where C2PA comes in.
C2PA: The Standard That's Quietly Changing Everything
If you haven’t heard of C2PA yet, you will. The Coalition for Content Provenance and Authenticity is the global technical body behind what’s becoming the internet’s authentication infrastructure for digital media.
It started in 2019, when Adobe launched the Content Authenticity Initiative, built on a pretty smart insight: fighting disinformation by playing whack-a-mole with fakes is reactive and exhausting. What if you focused on certifying authentic content from the start, so audiences always have a way to verify what’s real?
Around the same time, Microsoft and the BBC were running something called Project Origin, aimed specifically at the news industry: building systems to verify that articles, images, and videos genuinely came from the outlet publishing them. In 2021, the two efforts merged to form the C2PA.
Today, the coalition has 200+ member organizations. Adobe, Microsoft, BBC, Intel, Canon, Sony, Arm, Truepic. These aren’t small players hedging their bets — they’re the companies building the cameras, software, and platforms that the entire content industry runs on.
So what does a Content Credential actually do?
The C2PA’s main contribution is something called a Content Credential, sometimes described as a nutrition label for digital media.
Here’s what it is, plainly: a cryptographically signed record embedded directly inside a digital file. It logs who created the content, when, with what tools, whether AI was involved in any part of the process, and every meaningful edit applied since creation.
The critical bit is that it’s tamper-evident. If anyone alters the manifest even slightly, the cryptographic signature breaks. The file doesn’t just become unverified; it actively signals that something has been changed. You can’t fake a valid C2PA manifest without the original creator’s signing credentials.
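Here’s a toy demonstration of that tamper-evidence property, again using the cryptography package. The manifest below is a hypothetical stand-in, not real C2PA structure; the point is that flipping a single bit is enough to break verification:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
manifest = b'{"creator": "jane@example.org", "ai_involved": false}'
signature = key.sign(manifest)

# An attacker flips a single bit in the manifest...
forged = bytearray(manifest)
forged[12] ^= 0x01

# ...and verification fails loudly instead of silently passing.
try:
    key.public_key().verify(signature, bytes(forged))
    print("manifest intact")
except InvalidSignature:
    print("tamper-evident: altered manifest fails the signature check")
```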
And importantly, as of 2026, this isn’t just theoretical. Real products are shipping with it built in.
Adobe Creative Suite now generates Content Credentials by default. Designers aren’t doing anything extra; provenance is just part of the workflow.
Canon cameras sign images cryptographically at the moment of capture. Before any editing software even touches the file, it already has a verified origin record.
Microsoft Office records AI-enhanced edits to documents and images in C2PA manifests.
Sony’s PXW-Z300 became the first camcorder with C2PA video support, designed specifically for broadcast journalism.
Drupal, the CMS powering roughly 1.7 million websites, can now display Content Credentials to visitors.
The infrastructure is real. It’s deployed. It’s growing. The question now is just how fast the rest of the ecosystem catches up.
Deepfake Detection: The Other Side of the Equation
Here’s something important to understand: digital provenance and deepfake detection are two different tools solving related but distinct parts of the same problem. You need both.
C2PA tells you the history of a piece of content. It doesn’t tell you whether that content is malicious. A deepfake produced by an AI tool that implements C2PA will have a manifest that honestly records “created by [AI tool name].” That’s useful information but it doesn’t automatically flag harmful intent.
Detecting manipulation is a separate discipline, and it’s getting more sophisticated fast.
How detection actually works
Transformer-based detection models look for statistical patterns that generative AI introduces into synthetic media: artifacts in frequency space, inconsistencies in texture, subtle irregularities in how light falls across a face. These cues are completely invisible to the human eye but detectable by trained models.
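As a toy illustration of what a “frequency-space” cue might look like, here’s a Python sketch using NumPy that computes one such statistic, the share of spectral energy at high frequencies. Real detectors learn thousands of subtler features; this only shows the kind of signal they operate on:

```python
import numpy as np

def highfreq_energy_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy far from the image's low frequencies.

    One toy frequency-space statistic; real detectors learn many
    such cues rather than relying on a single ratio.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high = spectrum[radius > min(h, w) / 4].sum()
    return float(high / spectrum.sum())

# Hypothetical stand-in for a grayscale image; load real files with e.g. Pillow.
image = np.random.default_rng(1).random((256, 256))
print(highfreq_energy_ratio(image))
```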
Multimodal biometric analysis checks for mismatches between what you see and what you hear. Real people’s lip movements, facial micro-expressions, and vocal patterns sync in very specific ways. Deepfakes often get 99% of this right and fail on the last 1% in ways algorithms can catch.
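Here’s a deliberately simplified sketch of the sync-checking idea, with random NumPy arrays standing in for the mouth-movement and loudness features a real system would extract from the video and audio tracks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame signals; a real system would derive mouth
# openness from face landmarks and loudness from the audio track.
loudness = rng.random(300)
mouth_real = loudness + 0.05 * rng.standard_normal(300)  # genuine: tracks audio
mouth_fake = rng.random(300)                             # sloppy fake: drifts

def sync_score(mouth: np.ndarray, audio: np.ndarray) -> float:
    """Pearson correlation; near 1.0 means tightly synchronized."""
    return float(np.corrcoef(mouth, audio)[0, 1])

print(sync_score(mouth_real, loudness))  # high, close to 1.0
print(sync_score(mouth_fake, loudness))  # near zero
```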
Liveness detection is used in identity verification contexts to confirm that a biometric sample comes from a real, present human being not a photograph, pre-recorded video, or synthetic face.
Forensic attestation produces legally admissible proof of content origin. This is increasingly important not just for journalism but for financial institutions, law enforcement, and courtroom proceedings.
The challenge, and it’s a real one, is generalization. Models trained on one type of deepfake often don’t perform well against a newer generation technique. Real-world video has varied lighting, compression artifacts, and motion blur. The gap between lab accuracy and real-world performance remains a genuine issue that researchers are still working to close.
What This Means for Journalists Specifically
Only 38% of publishers currently feel confident about journalism’s future, a drop of 22 percentage points since 2022. A majority of teenagers now describe journalism using words like “fake,” “lies,” and “bias.” That’s not entirely because of AI, but synthetic media has absolutely accelerated the trust collapse.
Here’s the flip side, though: in this environment, the ability to prove your content is real is a genuine competitive advantage. Not just ethically important; commercially valuable.
Newsrooms that embed provenance into their workflows are building something their competitors can’t easily replicate: verifiable credibility. When a publication can show an audience not just that a photo exists but exactly when it was taken, on what device, by which verified photographer, with no edits applied, that’s a trust signal that no amount of editorial branding can manufacture.
Some publishers are already moving on this:
- The BBC’s IBC Accelerator project developed open-source tools specifically to lower the barrier for smaller newsrooms to embed provenance at the point of publication.
- ABC News Loop, launched in early 2026, bakes provenance into its “made-for-social” explainer journalism model.
- The Washington Post’s Ripple project is working toward extending provenance workflows even to independent contributors.
One real problem: the current C2PA ecosystem risks excluding exactly the people who need it most. Citizen journalists, independent photographers, and local newsrooms in developing countries may not be able to afford the annual certificate costs required to sit in the “trusted” tier. The Creator Assertions Working Group (CAWG) is actively working on more accessible frameworks, but this gap hasn’t been solved yet.
The Numbers Worth Knowing
- Deepfake incidents grew from roughly 500K documented cases in 2023 to over 8 million by the end of 2025, a sixteenfold increase
- Synthetic content is projected to make up 90% of online media by end of 2026
- The deepfake detection market is growing at 42% annually, expected to reach $15.7B in 2026
- 62% of online content may already be fake or AI-assisted, per recent studies
- 84% of people believe AI-generated content should always be clearly labeled
- 59% of people can no longer reliably tell human from AI content
- C2PA now has 200+ member organizations including Adobe, Microsoft, BBC, Canon and Sony
- Google organic search traffic to publisher sites fell 33% between late 2024 and late 2025
- Only 38% of news publishers feel confident about journalism’s future, down from 60% in 2022
Common Questions People Ask About Deepfakes and Digital Provenance
Can C2PA detect deepfakes?
No, and this is a really common misconception. C2PA is a provenance system, not a detection system. It records the history of how content was created and modified. If an AI tool implements C2PA and generates a deepfake, the manifest will show that the content was AI-generated, which is useful context. But it won’t flag it as malicious. You need separate detection tools for that.
Can Content Credentials be stripped out or faked?
Currently, yes: credentials can be stripped when content passes through platforms that don’t support the standard yet, which is most social networks. That’s a real limitation. Steganographic watermarking provides a more resilient backup layer (see the sketch below). As for faking credentials: technically very difficult without the original creator’s signing keys, though researchers continue to probe the edges of this.
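To make “watermarking” concrete, here’s a deliberately naive least-significant-bit scheme in Python with NumPy. Production watermarks are designed to survive compression and cropping, which this toy does not; it only shows the core idea of hiding a payload in pixel data:

```python
import numpy as np

MESSAGE = b"CR"  # a tiny watermark payload

def embed_lsb(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = pixels.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read the payload back out of the low bits."""
    bits = pixels.ravel()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Hypothetical 8-bit grayscale image standing in for a real photo.
image = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed_lsb(image, MESSAGE)
print(extract_lsb(marked, len(MESSAGE)))  # b'CR'
```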
Is labeling AI-generated content legally required?
In the EU, yes, from August 2026. In the US, it varies by state, and federal requirements are developing. The direction globally is clearly toward mandatory labeling. Better to build the workflow now than retrofit it under pressure.
Is there a free tool for checking content credentials?
The Content Authenticity Initiative offers free browser tools and a web verifier at verify.contentauthenticity.org for checking C2PA credentials on supported files. It won’t catch everything, but it’s a solid starting point for newsroom verification workflows.
What’s the difference between C2PA and CAI?
C2PA (Coalition for Content Provenance and Authenticity) writes the technical specification. CAI (Content Authenticity Initiative) is the Adobe-led community of 6,000+ members that promotes adoption and builds open-source implementation tools. Think of C2PA as the standard and CAI as the advocacy and implementation arm.
The Bottom Line
Digital provenance isn’t a tech industry trend you can afford to observe from the sidelines. It’s becoming infrastructure, like HTTPS was for web security, or like verified accounts were for social media. The early movers are already embedding it into their workflows. The laggards will be playing catch-up under regulatory pressure.
For journalists, it’s a trust signal in an era when trust is the scarcest resource in the industry. For content creators, it’s both a legal safeguard and a brand differentiator. For marketers, it’s a hedge against the growing consumer backlash against “AI slop,” the sea of undifferentiated, personality-free synthetic content drowning organic reach.
The tools exist. The standards are in place. The regulations are landing. What’s left is the decision to actually use them.
Because in a world where 90% of online media may soon be synthetic, being the brand, the creator, or the newsroom that can say “here’s proof that this is real” is not a small thing. That’s everything.
