Sora 2 Does A Copyright Somersault Upon Launch

OpenAI’s rollout of its Sora 2 video app drew backlash from the entertainment industry last week because it let users generate videos containing copyrighted content and post them across the internet, and a massive wave of users promptly did just that.

Sora 2 launched with a questionable third-party rights model that invited intellectual property owners to opt out of the app, effectively permitting users to access and manipulate copyrighted material, voices and likenesses until a rightsholder asked them to stop. Then, within 72 hours of launch, OpenAI CEO Sam Altman released a statement saying he wanted to give rightsholders “more granular control” over their intellectual property, and switched the program to an opt-in model.

But why now and not at launch? Was it not foreseeable that users would create infringing videos with this technology, or that rightsholders would prefer an opt-in model? By what legal analysis did OpenAI conclude that its opt-out model was defensible? The launch of the Sora 2 app appears to be an example of OpenAI’s copyright policy: infringe first, apologize later.

Users Bear the Burden of Liability

OpenAI’s Terms of Use apply to any of its “associated software applications and websites,” including the new Sora 2 app. Those terms provide that users may not “use our services in a way that infringes, misappropriates or violates anyone’s rights.” Users are also responsible for ensuring they have “all rights, licenses, and permissions needed” for any content they input. This legal framework attempts to shift the burden from OpenAI to its users, helping OpenAI establish plausible deniability.

OpenAI trains its services on user-posted content and profits from subscriptions, yet disclaims responsibility for what users create. Unless users affirmatively opt out, OpenAI uses their content to train its models, reflecting the same consent-by-default posture OpenAI takes toward intellectual property.

The Sora 2 Launch

Upon Sora 2’s release on September 30, users could freely generate videos of copyrighted characters interacting in any way imaginable. Characters from major franchises, including SpongeBob SquarePants, South Park and Scooby-Doo, were fair game until rightsholders discovered the infringement and affirmatively opted out. While individual users’ likenesses received stringent protections (users could revoke consent after registering and uploading their profile to the app), the likenesses of celebrities such as Jake Paul and Mark Cuban, and fictional characters such as Bugs Bunny, received no baseline protection. User-generated content was also watermarked with Sora’s logo, which allowed a tidal wave of Sora-watermarked AI slop to be uploaded to every corner of the internet, effectively advertising the infringing uses of OpenAI’s new service.

Within three days, following widespread criticism, OpenAI suddenly reversed course on October 3. In a blog post, Altman announced Sora’s new opt-in model with “additional controls” granting rightsholders “the ability to specify how their characters can be used (including not at all).” The change was implemented relatively quickly, with an overinclusive framework that flags even vague prompts marginally associated with “third-party content.” In a quick test, vague descriptions elicited violation flags.

Altman’s newly proposed business model for the Sora 2 app is to “somehow make money” and “try sharing some of this revenue with rightsholders” who opt in. If the strict and effective guardrails now in place could be implemented so quickly, why was this not the model to begin with?

Why the Flip-Flop?

Was it pressure from Hollywood? A sudden reminder that the major studios have sued Midjourney for essentially the same conduct? The $1.5 billion settlement Anthropic just reached with book authors? Or was it simply a way to generate initial interest and buzz among investors with a promise of things to come? One commentator’s theory is that OpenAI “used copyrighted characters to drive engagement and media coverage.” Or was it nothing more than a billionaire’s risky “let’s see what happens” learning opportunity?

Legal and Ethical Considerations

Even after Altman’s October 3 statement, OpenAI has not clarified whether the new opt-in policy also keeps that same intellectual property out of its training data. Users may no longer be able to generate protected content, but that in no way means Sora 2 was not already trained on, and thus influenced by, that copyrighted material. A rightsholder who declines to opt in merely blocks the output of its intellectual property; it has no control over how that property will shape the creation of future content.

By placing the burden on users and daring rightsholders to object to the infringement, OpenAI has created a self-reinforcing cycle: drive engagement through infringement, build value for the new service, use that value to fund settlements, and license with companies after the fact.

Conclusion

OpenAI’s shift from an opt-out to an opt-in model might seem like a victory for copyright owners, but it merely demonstrates a reactive policy: act first, justify later. The future of this technology remains unclear, and it will be interesting to see whether, and which, rightsholders eventually opt in, and what users will generate with their intellectual property. Stay tuned.

Thanks to Phoenix Silkensen, JD Candidate and Sora 2 user, who co-authored this article.

Source: https://www.forbes.com/sites/legalentertainment/2025/10/17/sora-2-does-a-copyright-somersault-upon-launch/