    Sora is showing us how broken deepfake detection is

    By PineapplesUpdate | October 28, 2025 | 10 min read

    OpenAI’s new deepfake machine, Sora, has proven that artificial intelligence is dangerously good at faking reality. The AI-generated video platform, powered by OpenAI’s new Sora 2 model, has churned out detailed (and often offensive or harmful) videos of famous people like Martin Luther King Jr., Michael Jackson, and Bryan Cranston, as well as copyrighted characters like SpongeBob and Pikachu. Users of the app who voluntarily shared their likenesses have found themselves subjected to racial slurs or turned into fuel for fetish accounts.

    On Sora, there is a clear understanding that not everything you see and hear is real. But like any social content, videos created on Sora are meant to be shared, and once they escape the app’s surreal quarantine zone, viewers get little help in determining that what they’re watching isn’t real.

    Convincing imitations of reality that escape the app absolutely do run the risk of misleading audiences. That failure is a demonstration of how badly AI labeling technology has fallen short, including a system that OpenAI itself helps oversee: C2PA certification, one of the best systems we have for distinguishing real images and videos from AI fakes.

    C2PA certification is better known as “Content Credentials,” a term promoted by Adobe, which has led the initiative. It is a system for attaching invisible but verifiable metadata to images, video, and audio at the time of creation or editing, recording details about how and when the content was made or manipulated.
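    To illustrate the general idea (this is not the actual C2PA manifest format, which uses signed JUMBF structures and X.509 certificate chains), here is a minimal Python sketch of metadata that is cryptographically bound to the content it describes: any edit to the bytes invalidates the recorded hash.

```python
import hashlib
from datetime import datetime, timezone

def make_manifest(content: bytes, tool: str, action: str) -> dict:
    """Build a simplified provenance manifest bound to the content bytes."""
    return {
        "claim_generator": tool,
        "assertions": [{"action": action,
                        "when": datetime.now(timezone.utc).isoformat()}],
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash recorded in the manifest."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

video = b"\x00\x01fake-video-bytes"
manifest = make_manifest(video, "ExampleTool", "created")
print(verify_manifest(video, manifest))            # True: content untouched
print(verify_manifest(video + b"edit", manifest))  # False: content was altered
```

    In the real system the claim is additionally signed, so a forger cannot simply recompute the hash after tampering; the sketch only shows why binding metadata to a content hash makes edits detectable.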

    OpenAI is a steering committee member of the Coalition for Content Provenance and Authenticity (C2PA), which developed the open specification under the leadership of Adobe’s Content Authenticity Initiative (CAI). C2PA information is, in fact, embedded in every Sora clip, though you’d probably never know it unless you’re the kind of person who reads the brief footnotes on a handful of OpenAI blog posts.

    An example of AI labels on YouTube content.

    This is the label that should appear on AI-generated or manipulated videos uploaded to YouTube Shorts, but it only applies to content related to sensitive topics.
    Image: YouTube

    C2PA only works when it is adopted at every step of the creation and posting process, including making the result clearly visible to the person viewing it. On paper, it has been adopted by Adobe, OpenAI, Google, YouTube, Meta, TikTok, Amazon, Cloudflare, and even government offices. But few of these platforms use it to clearly flag deepfake content for their users. Where Instagram, TikTok, and YouTube make the effort, the result is barely visible labels or brief summaries that are easy to overlook and provide little context even if you do notice them. On TikTok and YouTube, I have never encountered such labels while browsing, even on videos that are obviously AI-generated, presumably because the uploader stripped the metadata or declined to disclose the content’s origins.

    Meta initially added a small “Created by AI” tag to images on Facebook and Instagram last year, but later changed the tag to say “AI Info” after photographers complained that work they edited using Photoshop, which automatically applies Content Credentials, was being mislabeled. And despite being able to scan uploaded content for metadata, most online platforms don’t even do that.

    The creators of C2PA insist that they are getting closer to widespread adoption. “We’re seeing meaningful progress across the industry in the adoption of Content Credentials, and we’re encouraged by the active collaboration underway to make transparency more visible online,” Andy Parsons, senior director of content authenticity at Adobe, told The Verge. “As generative AI and deepfakes become more advanced, people need clear information about how content is created.”

    Yet four years later, that progress is still invisible. I have covered the CAI since joining The Verge three years ago, and even I didn’t realize for several weeks that every video made with Sora and Sora 2 has Content Credentials embedded in it. There are no visual markers to indicate this, and in every instance I’ve seen of these videos being reposted to other platforms like X, Instagram, and TikTok, I have yet to find any labels identifying them as AI-generated, let alone a full account of their creation.

    One example, flagged by the AI detection platform Copyleaks, is a viral AI-generated video on TikTok showing CCTV-style footage of a man holding a child who appears to have fallen from an apartment window. The video has nearly two million views and appears to have the Sora watermark blurred out. TikTok has not clearly marked whether the video is AI-generated, and thousands of commenters are questioning whether the footage is real or fake.

    If users want to check images and videos for C2PA metadata, the burden falls almost entirely on them. They must save a supported file and upload it to the CAI or Adobe web app, or download and run a browser extension that marks any online asset carrying the metadata with a “CR” icon. Similar provenance-based identification standards, such as Google’s invisible SynthID watermark, are no easier to use.
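    For the curious, here is roughly what such a check involves under the hood: in JPEG files, C2PA manifests are carried in APP11 marker segments as JUMBF data. The sketch below is a crude presence heuristic only, written against a synthetic byte layout; it walks segment headers looking for a “c2pa” label and performs no signature validation whatsoever.

```python
def jpeg_segments(data: bytes):
    """Yield (marker, payload) pairs from a JPEG byte stream.

    Illustration only: assumes marker segments sit back to back after SOI and
    stops at the EOI marker or entropy-coded data.
    """
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes itself
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def has_c2pa_hint(data: bytes) -> bool:
    """Crude heuristic: does any APP11 (FF EB) segment mention 'c2pa'?"""
    return any(marker == 0xEB and b"c2pa" in payload
               for marker, payload in jpeg_segments(data))

# Synthetic file: SOI + one APP11 segment carrying a fake manifest + EOI.
payload = b"JP\x00c2pa-manifest-bytes"
app11 = b"\xff\xeb" + (len(payload) + 2).to_bytes(2, "big") + payload
fake_jpeg = b"\xff\xd8" + app11 + b"\xff\xd9"

print(has_c2pa_hint(fake_jpeg))            # True
print(has_c2pa_hint(b"\xff\xd8\xff\xd9"))  # False
```

    A real verifier must also validate the signature chain inside the manifest; mere presence of a “c2pa” label proves nothing about who made the claim.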

    “The average person shouldn’t have to worry about detecting deepfakes. It should be up to the platforms and the trust and safety teams,” Ben Coleman, co-founder and CEO of the AI detection company Reality Defender, told The Verge. “People should know whether the content they are consuming is using generative AI.”

    People are already using Sora 2 to create believable videos of fake bomb scares, children on battlefields, and graphic scenes of violence and racism. In one clip reviewed by The Guardian, a Black protester wearing a gas mask, helmet, and goggles is shown shouting “You will not replace us,” a slogan used by white supremacists; the prompt used to create that video was simply “Charlottesville rally.” OpenAI attempts to identify Sora’s output with watermarks that appear throughout its videos, but those marks are very easy to remove.

    TikTok, Amazon, and Google did not respond to The Verge’s questions about C2PA support. Meta told The Verge it is continuing to implement C2PA and to evaluate its labeling approach as AI evolves. OpenAI simply directed us to its short blog posts and help center articles about C2PA support. Like OpenAI, Meta has an entire platform for its AI slop, complete with dedicated feeds for social and video content, and both companies are pushing AI-generated videos onto social media.

    X, which has faced its own controversies over nude celebrity deepfakes, pointed us to its policy banning misleading AI-generated media, but provided no information about how it enforces the policy beyond relying on user reports and Community Notes. X was notably a founding member of the CAI back when it was still known as Twitter, but it distanced itself from the initiative without explanation after Elon Musk purchased the platform.

    Parsons says that “Adobe is committed to helping drive mass adoption, supporting global policy efforts, and encouraging greater transparency in the content ecosystem.” But the honor system that C2PA has relied on so far isn’t working. And OpenAI’s position on C2PA seems hypocritical: it is building a tool that actively promotes deepfakes of real people while providing only half-baked protections against their abuse. Reality Defender reported that it completely bypassed Sora 2’s identity protection measures less than 24 hours after the app launched, allowing it to generate celebrity deepfakes at will. OpenAI appears to be using its C2PA membership as token cover while largely ignoring the commitments that come with it.

    The disappointing thing is that, as difficult as AI verification is, there is merit in content credentials. Embedded attribution metadata can help artists and photographers be reliably credited for their work, for example, even if someone takes a screenshot of it and reposts it on other platforms. There are also supplemental tools that can make the system better. Inference-based systems like Reality Defender’s, which is also a member of the C2PA initiative, rate the likelihood that something was generated or edited using AI by scanning for subtle signals of synthetic generation. Such a system can rarely deliver a verdict with 100 percent confidence, but it improves over time and does not rely on reading watermarks or metadata to detect deepfakes.

    “C2PA is a good solution, but it is not a good solution by itself,” Coleman said. “It needs to work in conjunction with other tools, so that if one thing doesn’t catch it, another can.”
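    That layering idea can be sketched as a simple decision rule: trust surviving provenance metadata first, and fall back on an inference model’s score when the metadata is gone. The thresholds and labels below are hypothetical, purely for illustration.

```python
def deepfake_verdict(has_provenance_label: bool, inference_score: float) -> str:
    """Layer two imperfect signals into one verdict.

    has_provenance_label: True if C2PA-style metadata marking the content as
    AI-generated survived upload. inference_score: a model's estimate in [0, 1]
    that the content is synthetic. Thresholds here are invented for the sketch.
    """
    if has_provenance_label:
        return "labeled AI-generated"    # metadata survived; no guessing needed
    if inference_score >= 0.8:
        return "likely AI-generated"     # metadata stripped, model confident
    if inference_score >= 0.5:
        return "possibly AI-generated"
    return "no AI signal detected"       # absence of evidence, not proof

# A clip reposted with its metadata stripped, but with a confident model score:
print(deepfake_verdict(False, 0.93))  # likely AI-generated
```

    The point of the fallback ordering is that the two failure modes differ: metadata is trivial to strip but hard to forge convincingly, while inference scores survive stripping but are probabilistic.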

    Metadata can also be easily stripped. Adobe research scientist John Collomosse openly admitted this on the CAI blog last year, saying it is common for social media and content platforms to do so. Content Credentials uses image fingerprinting technology to counter this, but any technology can be defeated, and it is ultimately unclear whether a truly effective technical solution exists.
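    The stripping problem is mechanically trivial, which is part of why it is so common: any re-encoding pipeline that drops a JPEG’s APPn metadata segments discards the Content Credentials along with them. A naive sketch over synthetic bytes (no entropy-coded image data is handled here):

```python
def strip_app_segments(data: bytes) -> bytes:
    """Drop every APPn metadata segment (FF E0 through FF EF) from a JPEG.

    Illustration only: walks marker segments from the start and copies the
    remainder of the stream through untouched.
    """
    out = bytearray(data[:2])  # keep the SOI marker
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if not (0xE0 <= marker <= 0xEF):  # keep everything except APPn
            out += data[i:i + 2 + length]
        i += 2 + length
    out += data[i:]  # remainder (image data, EOI)
    return bytes(out)

# SOI + APP11 carrying a fake manifest + EOI, reduced to bare SOI + EOI:
payload = b"JP\x00c2pa-manifest-bytes"
app11 = b"\xff\xeb" + (len(payload) + 2).to_bytes(2, "big") + payload
print(strip_app_segments(b"\xff\xd8" + app11 + b"\xff\xd9"))  # b'\xff\xd8\xff\xd9'
```

    This is why fingerprinting matters as a backstop: once the segments are gone, only the pixels themselves can link a file back to its provenance record.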

    And some companies are simply not doing much to support the tools we do have. Coleman said he believes the means of warning everyday people about deepfake content are “going to get worse before they get better,” but that we should see solid improvements over the next few years.

    While Adobe promotes Content Credentials as part of the eventual solution to deepfakes, it knows the system is not enough on its own. For one, Parsons admitted as much directly in a CAI post last year, saying the system is not a silver bullet.

    “We are seeing criticism that relying solely on secure metadata of content credentials, or relying solely on invisible watermarking to label generative AI content, may not be enough to stop the spread of misinformation,” Parsons wrote. “To be clear, we agree.”

    And where a reactive system clearly isn’t working, Adobe is also throwing its weight behind legislative and regulatory efforts to find a proactive solution. The company proposed in 2023 that Congress establish a new federal anti-impersonation right (the FAIR Act) to protect creators from having their work or likeness copied by AI tools, and last year backed the Preventing Abuse of Digital Replicas Act (PADRA). Similar efforts, such as the NO FAKES Act, which aims to protect people from unauthorized AI cloning of their face or voice, have also received support from platforms like YouTube.

    “We’re having good conversations with a bipartisan coalition of senators and congressmen who really believe that deepfakes are everyone’s problem, and they’re really working on creating legislation that is proactive, not reactive,” Coleman said. “We have relied for too long on the tech industry to regulate itself.”

    Jess Weatherbed
