OpenAI Explores Steps With SAG-AFTRA and Major Agencies to Curb Sora Deepfakes and Enhance Identity Protections

  • Open policy framework with opt‑in consent, fast takedowns, and clear impersonation reporting.

  • Collaborative governance with talent agencies and guilds to standardize rights management and response times.

  • Support for the NO FAKES Act and ongoing policy refinements to limit misuse and protect livelihoods.

OpenAI's Sora policy updates empower rights holders to control AI-generated voices and likenesses, reduce misuse, and protect performer identity through opt-in consent reviews.

What is the OpenAI Sora policy on deepfakes?

The OpenAI Sora deepfake policy defines when and how AI-generated voices and likenesses may be used, prioritizing explicit consent, clear takedown processes, and rights-management rules from the outset.

How does Sora handle consent for voice and visual likeness?

The policy requires explicit opt‑in consent for using a person’s voice or visual likeness in Sora outputs. Rights holders can customize allowances, set expiration for permissions, and trigger rapid removals if impersonations occur. This approach is a collaborative effort with SAG‑AFTRA, UTA, CAA, and other industry bodies to tighten governance and accountability.
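To make those mechanics concrete, here is a minimal, purely hypothetical sketch of how an opt-in consent record with scoped allowances, expiration, and takedown revocation might be modeled. The class, field names, and logic below are illustrative assumptions for this article, not OpenAI's actual Sora implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical consent record; all names and fields are illustrative
# assumptions, not OpenAI's actual Sora rights-management system.
@dataclass
class LikenessConsent:
    rights_holder: str                                    # person or estate granting consent
    allowed_uses: set[str] = field(default_factory=set)   # e.g. {"voice", "visual"}
    expires_at: datetime | None = None                    # permissions can be time-limited
    revoked: bool = False                                 # set True on a takedown request

    def permits(self, use: str, now: datetime | None = None) -> bool:
        """True only if the use was explicitly opted in, not revoked,
        and not past its expiration; anything else is denied."""
        now = now or datetime.now(timezone.utc)
        if self.revoked:
            return False
        if self.expires_at is not None and now >= self.expires_at:
            return False
        return use in self.allowed_uses  # opt-in: absent means denied


# Usage: deny by default; only explicit, unexpired grants pass.
consent = LikenessConsent(
    rights_holder="Example Performer",
    allowed_uses={"voice"},
    expires_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
assert consent.permits("voice")
assert not consent.permits("visual")   # never opted in
consent.revoked = True                 # rapid removal after an impersonation report
assert not consent.permits("voice")
```

The design choice worth noting is the deny-by-default posture: a use is blocked unless it appears in an explicit grant, which mirrors the opt-in principle the policy describes.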

Frequently Asked Questions

What is the NO FAKES Act and how does it relate to AI voices?

The NO FAKES Act is a U.S. bill aimed at preventing unauthorized AI-generated replicas of a person's voice or likeness. OpenAI has voiced support for the principle, anticipating stricter controls on the generation and distribution of impersonations. The act would complement these policy updates by providing a legislative framework for enforcement and rights protection.

How can performers protect their voice and likeness in AI models?

Performers should work with their agencies to establish opt‑in controls, monitor for misuse, request fast takedown procedures, and regularly update consent terms. Engaging with guilds, reporting incidents promptly, and understanding how their portrayals can be used are essential steps in safeguarding identity.

Key Takeaways

  • Consent-first governance and rapid response are central to the updated Sora policy.
  • Industry collaboration with major agencies strengthens protections and standardizes rights management.
  • Endorsement of the NO FAKES Act aligns policy with performer protections and broader regulatory clarity.

Conclusion

The OpenAI Sora policy update, coupled with active collaboration with SAG-AFTRA, UTA, CAA, and other talent organizations, marks a decisive step toward safeguarding performers from misappropriation of their voices and likenesses. By enforcing opt‑in consent, streamlining complaint workflows, and aligning with the NO FAKES Act, the industry signals a shared commitment to responsible AI use. Continued policy refinement and governance will be essential as AI tools evolve and as actors, agents, and lawmakers work to balance innovation with rights protection.

Author: COINOTAG

Publication date: October 2025 | Updated: October 2025

Sources

SAG-AFTRA; United Talent Agency (UTA); Creative Artists Agency (CAA); Association of Talent Agents; Martin Luther King Jr. estate; Zelda Williams; OpenAI official statements; NO FAKES Act

Source: https://en.coinotag.com/openai-explores-steps-with-sag-aftra-and-major-agencies-to-curb-sora-deepfakes-and-enhance-identity-protections/