This is Stepback, a weekly newsletter that unpacks one essential story from the tech world. For more on the legal status of AI, follow Adi Robertson. Stepback is delivered to subscribers' inboxes at 8AM ET. Opt in for Stepback here.
The song was called "Heart on My Sleeve," and if you didn't know better, you might guess you were listening to Drake. If you did know better, you were hearing the opening bells of a new legal and cultural battle: a fight over how AI services should be able to use people's faces and voices, and how platforms should respond.
In 2023, the AI-generated fake-Drake track "Heart on My Sleeve" was a novelty, but the problems it posed were obvious. Its close imitation of a major artist alarmed musicians. Streaming services pulled it on a copyright technicality. But its maker wasn't directly copying anything, just imitating it very closely. Attention therefore quickly turned to the separate field of likeness law: an area once synonymous with celebrities suing over unauthorized endorsements and parodies, and, as audio and video deepfakes proliferated, one of the few tools available to regulate them.
Unlike copyright, which is governed by the Digital Millennium Copyright Act and several international treaties, there is no federal law around likeness. It's a patchwork of different state laws, none of which were originally designed with AI in mind. But the last few years have seen growing efforts to change this. In 2024, Tennessee Governor Bill Lee and California Governor Gavin Newsom, both of whose states rely heavily on their media industries, signed bills expanding protections against unauthorized reproductions of entertainers.
But the law has predictably moved more slowly than the technology. Last month, OpenAI launched Sora, an AI video generation platform specifically aimed at capturing and remixing the likenesses of real people. It opened the floodgates to often startlingly realistic deepfakes, including of people who hadn't consented to their creation. OpenAI and other companies are responding by implementing their own likeness policies, which, in the absence of anything else, could become new rules of the road for the internet.
OpenAI has denied that it launched Sora carelessly, with CEO Sam Altman claiming that, if anything, it was "too restrictive" with guardrails. Yet the service still generated plenty of complaints. It launched with only minimal restrictions on the likenesses of historical figures, then reversed course after the estate of Martin Luther King Jr. complained about "offensive" portrayals of the slain civil rights leader spreading racism or committing crimes. It nominally banned the unauthorized use of living people's likenesses, but users found ways to insert celebrities like Bryan Cranston into Sora videos, in one case taking a selfie with Michael Jackson. Complaints from SAG-AFTRA followed, prompting OpenAI to strengthen its guardrails in unspecified ways.
Even some people who did authorize Sora cameos (its term for videos using a person's likeness) were unsettled by the results, which for women included all kinds of fetishized output. Altman said he hadn't realized people might have "in-between" feelings about authorized likenesses, like not wanting them to publicly say "offensive things or things that they find deeply problematic."
OpenAI is addressing Sora's problems with changes like its revised historical figures policy, but Sora isn't the only AI video service, and things are, in general, getting pretty weird. AI slop has become a staple for President Donald Trump's administration and some other politicians, including crude or blatantly racist depictions of specific political enemies: Trump responded to last week's No Kings protests with a video that showed him dumping dirt on a man resembling liberal influencer Harry Sisson, while New York City mayoral candidate Andrew Cuomo posted (and promptly deleted) a "Criminals for Zohran Mamdani" video that showed his Democratic opponent eating a handful of rice. And as Kat Tenbarge wrote at Spitfire News earlier this month, AI videos are also becoming ammunition in influencer drama.
Legal action over unauthorized videos remains a near-constant threat, as celebrities like Scarlett Johansson have protested the use of their likenesses. But unlike allegations of AI copyright infringement, which have spawned numerous high-profile lawsuits and near-constant deliberation inside regulatory agencies, few likeness incidents have risen to that level, perhaps partly because the legal landscape is still in flux.
When SAG-AFTRA thanked OpenAI for strengthening Sora's guardrails, it used the occasion to promote the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, a years-old effort to codify protections against "unauthorized digital reproductions." The NO FAKES Act, which has also gained support from YouTube, would introduce nationwide rules governing the use of "computer-generated, highly realistic electronic representations" of the voice or visual likeness of a living or dead person. That includes liability for online services that knowingly allow unauthorized digital reproductions.
The NO FAKES Act has drawn sharp criticism from online free speech groups. The EFF dubbed it a "new censorship infrastructure" mandate that would force platforms to filter content so comprehensively that it would almost inevitably lead to mistaken takedowns and online "heckler's vetoes." The organization warned that while the bill includes carve-outs for parody, satire, and commentary, which should be allowed even without authorization, those would be "cold comfort to those who cannot afford to litigate the question."
Opponents of the NO FAKES Act may take solace in how little legislation Congress passes these days; we're currently living through the second-longest federal government shutdown in history. There's also a distinct push to block state AI regulation, which could nullify new likeness laws. But in practice, likeness rules are still arriving. Earlier this week, YouTube announced it would let Partner Program creators find unauthorized uploads using their likeness and request their removal. The move expands on existing policies that, among other things, let music industry partners remove content that "imitates an artist's unique singing or rapping voice."
And throughout all this, social norms are still evolving. We're entering a world where you can easily make a video of almost anyone doing almost anything. But when should you? In many cases, that question remains unsettled.
- Much of the recent conversation has been about AI videos of people doing weird or silly things, but historically, research indicates the overwhelming majority of deepfakes have been sexual images of women, often made without their consent. There's a whole separate conversation to be had about things beyond Sora, like the output of AI "nudify" services, and the legal issues around nonconsensual sexual imagery are similarly distinct.
- Beyond the basic legal question of whether a likeness is unauthorized, there are further questions, like when a video may be defamatory (if it's sufficiently realistic) or harassing (if it's part of a larger pattern of stalking and threats), which can make individual situations even more complex.
- Social platforms are almost always shielded from liability by Section 230, which says they can't be treated as the publisher or speaker of third-party content. As more and more services take proactive steps to help users create content, how far Section 230 will protect the resulting images and videos seems like an interesting question.
- Despite long-standing fears that AI will make it virtually impossible to distinguish illusion from reality, it's still often simple to detect whether a video was AI-generated using context and "tells" (ranging from specific editing tics to outright watermarks). The problem is that many people don't look closely enough, or don't care whether it's fake.
- Sarah Jeong's warning about blatantly manipulated photos is even more relevant now than it was when she published it in 2024.
- The New York Times has a broad look at Trump's special interest in AI-generated content.
- Max Read's analysis of Sora as a social platform, and whether it will "work."

