OpenAI Faces Legal and Ethical Turmoil Over Sora 2 AI Video Tool
OpenAI's new Sora 2 AI video tool sparks battles over copyright, ethics, and regulatory subpoenas amid industry and internal pushback.
- OpenAI's Sora 2 AI video tool enables creating videos that place real people in AI-generated environments, provoking a Hollywood backlash over likeness rights.
- Hollywood unions and agencies demand control and compensation, rejecting OpenAI's opt-out model for likeness use.
- Internal conflicts at OpenAI surface over ethical responsibilities and mission direction amid the tool's copyright issues.
- OpenAI has served subpoenas on AI regulation advocates, prompting accusations of intimidation tied to its legal battle with Elon Musk.
- OpenAI pledges to improve rights controls and compensation models while navigating complex legal and regulatory challenges.
Key details
OpenAI's recent launch of Sora 2, an advanced AI video generation tool, has ignited a clash involving copyright disputes, ethical debates, and regulatory pressure. Sora 2 lets users create videos featuring real people seamlessly integrated into AI-generated environments, complete with sound and dialogue. While OpenAI CEO Sam Altman showcased the tool's capabilities with synthetic celebrity representations, Hollywood entities have voiced strong opposition. The Motion Picture Association, led by Charles Rivkin, along with major agencies such as WME and unions such as SAG-AFTRA, has demanded control and fair compensation for likeness rights, criticizing OpenAI's opt-out model as insufficient and legally questionable. WME confirmed that all of its clients would opt out of likeness use on the platform, underscoring the entertainment industry's deep concern over unauthorized AI replications of actors' images and voices.
At the heart of the controversy is OpenAI's intellectual property strategy, which intersects with complex copyright laws that many see as insufficiently addressed by the company's current policies. Legal experts suggest Hollywood's resistance may serve as negotiation leverage for licensing agreements with OpenAI. Internally, OpenAI grapples with ethical concerns as well. Chris Lehane, VP of global policy, described Sora as a democratizing tool reminiscent of the printing press but acknowledged the need to shift from an opt-out to an opt-in rights model amid mounting calls for stronger protections and ongoing fair use debates. OpenAI's energy footprint from massive data centers, along with internal dissent voiced by mission alignment head Josh Achiam, who warned about the risk of OpenAI morphing into a "frightening power," highlights the challenge of balancing innovation with responsibility.
Further complicating matters, OpenAI has served subpoenas on AI regulation advocates who oppose its business practices, notably targeting groups that supported California's SB 53 AI transparency law. Nathan Calvin of Encode received a subpoena demanding records of his communications with lawmakers and others, which he framed as an intimidation tactic that falsely linked him to Elon Musk, a key figure in OpenAI's ongoing legal battle. OpenAI says the subpoenas relate to evidence preservation in its litigation with Musk. The Midas Project, critical of OpenAI's for-profit pivot, also received a subpoena amid regulatory debates over the company's valuation and direction. These legal maneuvers have drawn internal criticism of the optics and strategy behind OpenAI's regulatory approach.
OpenAI finds itself at a crossroads where rapid AI innovation collides with longstanding intellectual property rights, emergent ethical considerations, and intensifying regulatory scrutiny. The company's response includes promises to improve controls for rights holders and explore compensation models for creators amid a heated multimedia IP landscape. Meanwhile, the unfolding legal disputes and internal tensions suggest OpenAI's quest to balance technological advancement with ethical and societal responsibilities remains fraught and unresolved as of October 11, 2025.