Tuesday, March 17, 2026

Sora is showing us how broken deepfake detection is

in Technology
An example of the AI labels on YouTube content.

OpenAI’s new deepfake machine, Sora, has proven that artificial intelligence is alarmingly good at faking reality. The AI-generated video platform, powered by OpenAI’s new Sora 2 model, has churned out detailed (and often offensive or harmful) videos of famous people like Martin Luther King Jr., Michael Jackson, and Bryan Cranston, as well as copyrighted characters like SpongeBob and Pikachu. Users of the app who voluntarily shared their likenesses have seen themselves shouting racial slurs or turned into fuel for fetish accounts.

On Sora, there’s a clear understanding that everything you see and hear isn’t real. But like any piece of social content, videos made on Sora are meant to be shared. And once they escape the app’s unreality quarantine zone, there’s little protection baked in to ensure viewers know that what they’re looking at isn’t real.

The app’s convincing mimicry doesn’t just run the risk of misleading viewers. It’s a demonstration of how profoundly AI labeling technology has failed, including a system OpenAI itself helps oversee: C2PA authentication, one of the best systems we have for distinguishing real images and videos from AI fakes.

C2PA authentication is more commonly known as “Content Credentials,” a term championed by Adobe, which has spearheaded the initiative. It’s a system for attaching invisible but verifiable metadata to images, videos, and audio at the point of creation or editing, appending details about how and when it was made or manipulated.
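Conceptually, a Content Credentials record is a signed bundle of provenance assertions bound to the file's bytes. The sketch below is only a loose illustration of the kind of information carried; real manifests are CBOR-encoded binary data inside JUMBF boxes with a cryptographic signature chain, and every field value here is hypothetical:

```python
# Illustrative sketch only: the rough shape of the provenance data a
# C2PA manifest carries. Real manifests are binary (CBOR inside JUMBF
# boxes), cryptographically signed, and validated against a trust list;
# all values below are hypothetical.
manifest = {
    "claim_generator": "ExampleVideoTool/1.0",   # software that produced the asset
    "assertions": [
        {   # how the asset came to be (created, edited, composited...)
            "label": "c2pa.actions",
            "data": {"actions": [
                {"action": "c2pa.created",
                 "digitalSourceType": "trainedAlgorithmicMedia"},
            ]},
        },
        {   # binds the claim to the exact file bytes
            "label": "c2pa.hash.data",
            "data": {"alg": "sha256", "hash": "<hash of the asset>"},
        },
    ],
    "signature": "<issuer signature over the claim>",
}

# A verifier checks the signature and recomputes the hash; a viewer-facing
# label could then surface whether the asset declares an AI origin:
declares_ai_origin = any(
    act.get("digitalSourceType") == "trainedAlgorithmicMedia"
    for a in manifest["assertions"] if a["label"] == "c2pa.actions"
    for act in a["data"]["actions"]
)
```

The point of the design is that the record travels with the file and can be checked by anyone downstream; whether anyone actually surfaces it to viewers is the part that keeps failing.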

OpenAI is a steering committee member of the Coalition for Content Provenance and Authenticity (C2PA), which developed the open specification alongside the Adobe-led Content Authenticity Initiative (CAI). And in fact, C2PA information is embedded in every Sora clip — you’d just probably never know it, unless you’re the type to pore over some brief footnotes on a meager handful of OpenAI’s blog posts.

This is the label that’s supposed to appear on AI-generated or manipulated videos uploaded to YouTube Shorts, but it only applies to content around sensitive topics.
Image: YouTube

C2PA only works if it's adopted at every step of the creation and posting process, including being clearly visible to the person viewing the output. In theory, it's been embraced by Adobe, OpenAI, Google, YouTube, Meta, TikTok, Amazon, Cloudflare, and even government offices. But few of these platforms use it to clearly flag deepfake content to their users. Instagram, TikTok, and YouTube's efforts amount to barely visible labels or collapsed descriptions that are easy to miss, and they provide very little context if you do spot them. On TikTok and YouTube, I've never once encountered them myself while browsing, even on videos that are clearly AI-generated, most likely because the uploaders stripped the metadata or never disclosed the videos' origins.

Meta initially added a small “Made by AI” tag to images on Facebook and Instagram last year, but it later changed the tag to say “AI Info” after photographers complained that work they edited using Photoshop — which automatically applies Content Credentials — was being mislabeled. And most online platforms don’t even do that, despite being more than capable of scanning uploaded content for AI metadata.

C2PA’s creators insist they’re getting closer to widespread adoption. “We’re seeing meaningful progress across the industry in adopting Content Credentials, and we’re encouraged by the active collaboration underway to make transparency more visible online,” Andy Parsons, senior director of Content Authenticity at Adobe, said to noti.group. “As generative AI and deepfakes become more advanced, people need clear information about how content is made.”

Yet after four years, that progress is still all but invisible. I've covered the CAI since I started at noti.group three years ago, and even I didn't realize for weeks that every video generated with Sora and Sora 2 has Content Credentials embedded. There's no visual marker to indicate it, and when these videos are reposted to platforms like X, Instagram, and TikTok, I have yet to see any label identifying them as AI-generated, let alone a full accounting of their creation.

One example noted by AI detection platform Copyleaks is a viral AI-generated video on TikTok that shows CCTV footage of a man catching a baby that’s seemingly fallen out of an apartment window. The video has almost two million views and appears to have a blurred-out Sora watermark. TikTok hasn’t visibly flagged that the video is AI-generated, and there are thousands of commenters questioning whether the footage is real or fake.

If a user wants to check images and videos for C2PA metadata, the burden is almost entirely on them. They have to save and then upload a supported file into the CAI or Adobe web app, or they have to download and run a browser extension that will flag any online assets that have metadata with a “CR” icon. Similar provenance-based detection standards, such as Google’s invisible SynthID watermarks, are no simpler to use.
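For the curious, a rough local version of that first step can be sketched in a few lines. This is only a naive heuristic, assuming the manifest's JUMBF label survives in the file bytes; it is nothing like the real verification (manifest parsing plus signature validation) that the CAI tools perform, and the function name is my own:

```python
def might_have_content_credentials(path: str) -> bool:
    """Naive heuristic: C2PA manifests live in JUMBF boxes labeled 'c2pa',
    so that byte string often appears in files carrying Content Credentials.
    A hit proves nothing by itself (and a miss is common, since platforms
    strip metadata); real checking means parsing the manifest and validating
    its signature chain, which is what the CAI web app and extension do."""
    with open(path, "rb") as f:
        return b"c2pa" in f.read()
```

Even this trivial check illustrates the asymmetry: the reader has to go out of their way to run it, while the platform hosting the file could do the same scan, and do it far better, at upload time.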

“The average person should not worry about deepfake detection. It should be on platforms and trust and safety teams,” Ben Colman, cofounder and CEO of AI detection company Reality Defender, told noti.group. “People should know if the content they’re consuming is or is not using generative AI.”

People are already using Sora 2 to generate convincing videos of fake bomb scares, children in warzones, and graphic scenes of violence and racism. One clip reviewed by The Guardian shows a Black protester in a gas mask, helmet, and goggles yelling the “you will not replace us” slogan used by white supremacists — the prompt used to create that video was simply “Charlottesville rally.” OpenAI attempts to identify Sora’s output with watermarks that appear throughout its videos, but those marks are laughably easy to remove.

TikTok, Amazon, and Google haven't yet provided comment to noti.group about C2PA support. Meta told noti.group that it is continuing to implement C2PA and evaluating its labeling approach as AI evolves. OpenAI simply directed us to its scant blog posts and a help center article about C2PA support. Meta, like OpenAI, has an entire platform for its AI slop, complete with dedicated feeds for social and video content, and both companies are pumping AI-generated videos into social media.

X, which has its own controversies regarding nude celebrity deepfakes, pointed us to its policy that supposedly bans deceptive AI-generated media, but did not provide any information about how this is moderated beyond relying on user reports and community notes. X was notably a founding member of the CAI back when it was still known as Twitter, but pulled itself from the initiative without explanation after Elon Musk purchased the platform.

Parsons says that “Adobe remains committed to helping scale adoption, supporting global policy efforts, and encouraging greater transparency across the content ecosystem.” But the honor system C2PA has relied upon so far isn't working. And OpenAI's position at C2PA seems hypocritical: it's creating a tool that actively promotes deepfakes of real people while offering only half-baked protections against their abuse. Reality Defender reported that it managed to bypass Sora 2's identity safeguards entirely less than 24 hours after the app launched, allowing it to consistently generate celebrity deepfakes. It feels like OpenAI is using its C2PA membership as token cover while largely ignoring the commitments that come with it.

The frustrating thing is that as difficult as AI verification is, Content Credentials does have merit. The embedded attribution metadata can help artists and photographers be reliably credited for their work, for example, even if someone takes a screenshot of it and reposts it across other platforms. There are also supplemental tools that could improve it. Inference-based systems like Reality Defender — also a member of the C2PA Initiative — rate the likelihood that something was generated or edited using AI by scanning for subtle signs of synthetic generation. This system is unlikely to rate something with a 100 percent confidence ranking, but it’s improving over time and doesn’t rely on reading watermarks or metadata to detect deepfakes.

“C2PA is a fine solution, but it is not a fine solution on its own,” said Colman. “It needs to be done in conjunction with other tools, where if one thing doesn’t catch it, another may.”

Metadata can also be easily stripped. Adobe research scientist John Collomosse openly admitted this in a CAI blog post last year, saying it's common for social media and content platforms to do so. Content Credentials uses image fingerprinting tech to counteract this, but all tech can be cracked, and it's ultimately unclear whether there's a truly effective technical solution.
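To see how low the bar is, here's a minimal sketch of what "stripping" can mean for a baseline JPEG, where C2PA's JUMBF boxes ride in APP11 segments: walk the segment list and drop APP11, leaving the pixels untouched. This is my own toy illustration, not how any particular platform does it; simply re-encoding an image has much the same effect.

```python
def strip_app11(jpeg: bytes) -> bytes:
    """Drop APP11 (0xFFEB) segments, where C2PA embeds its JUMBF boxes,
    from a baseline JPEG. Image data is untouched, so the picture looks
    identical but the provenance metadata is gone."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(jpeg[:2])
    i = 2
    while i < len(jpeg) - 1 and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:                 # start of scan: copy the rest verbatim
            out += jpeg[i:]
            return bytes(out)
        seglen = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xEB:                 # keep every segment except APP11
            out += jpeg[i:i + 2 + seglen]
        i += 2 + seglen
    out += jpeg[i:]
    return bytes(out)
```

This fragility is exactly why Content Credentials pairs the metadata with fingerprinting, and why inference-based detectors that need no metadata at all are a useful second layer.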

Some companies don’t seem to be trying very hard to support the few tools we have anyway. Colman said he believes that the means for warning everyday people about deepfake content are “going to get worse before they get better,” but that we should see tangible improvements within the next couple of years.

While Adobe is championing Content Credentials as part of the ultimate solution to address deepfakes, it knows the system isn't enough. Parsons admitted as much in a CAI post last year, saying the system isn't a silver bullet.

“We’re seeing criticism circulating that relying solely on Content Credentials’ secure metadata, or solely on invisible watermarking to label generative AI content, may not be sufficient to prevent the spread of misinformation,” Parsons wrote. “To be clear, we agree.”

And where a reactive system clearly isn’t working, Adobe is also throwing its weight behind legislation and regulatory efforts to find a proactive solution. The company proposed that Congress establish a new Federal Anti-Impersonation Right (the FAIR Act) in 2023 that would protect creators from having their work or likeness replicated by AI tools, and backed the Preventing Abuse of Digital Replicas Act (PADRA) last year. Similar efforts, like the “No Fakes Act” that aims to protect people from unauthorized AI impersonations of their faces or voices, have also garnered support from platforms like YouTube.

“We’re in good conversations with a bipartisan coalition of senators and congresspeople who actually recognize that deepfakes are an everyone problem, and they’re actually working on building legislation that is proactive, not reactive,” Colman said. “We’ve relied too long on the good graces of tech to self-police themselves.”

By Jess Weatherbed
[Notigroup Newsroom in collaboration with other media outlets, with information from the following sources]

Tags: AI, OpenAI, Report
© 2025 - noti.group - All rights reserved - noti.group runs on 100% green energy.
