OpenAI Updates Sora Video App With Enhanced Intellectual Property Controls Amid Popularity and Disinformation Concerns
OpenAI's Sora app tops Apple's charts but faces IP and disinformation challenges as OpenAI shifts to an opt-in model for content control and explores monetization for rightsholders.
- OpenAI's Sora app becomes top free app on Apple's App Store soon after launch.
- OpenAI shifts from an "opt-out" to an "opt-in" IP model, granting rightsholders more control over character use.
- Sora generates videos featuring recognizable characters, sparking Hollywood backlash and potential lawsuits.
- The app raises disinformation concerns due to its ability to create hyper-realistic fake videos.
- OpenAI explores revenue-sharing to monetize character-generated videos and commits to iterating on safety measures.
Key details
OpenAI's video generation app, Sora, which quickly reached the top of Apple's App Store for free apps, is now undergoing significant changes to its intellectual property (IP) controls in response to concerns from the entertainment industry. Initially launched as an invite-only app on iOS, Sora enables users to create and remix short-form AI-generated videos using OpenAI's advanced Sora 2 model, known for generating highly realistic video and audio content (86136).
The app's rapid popularity has been accompanied by controversy, notably for generating videos featuring recognizable characters from franchises like "South Park" and "Dune," sparking outrage among Hollywood executives who foresee potential legal challenges (86130). In response, OpenAI CEO Sam Altman announced a shift from an "opt-out" to an "opt-in" model for content generation concerning rightsholders' characters, giving them more granular control over how their intellectual property is used within the app. Additionally, OpenAI is exploring a revenue-sharing model that would allow rightsholders to monetize such user-generated character videos (86130).
Alongside IP issues, Sora has raised alarms over its potential to facilitate disinformation. Users have already created hyper-realistic videos depicting fabricated events such as ballot fraud, immigration arrests, and bomb explosions, underscoring how easily convincing fake content can be produced. Experts warn that this capability represents a significant leap in disinformation risks in the digital age (86137).
Altman acknowledged these risks but emphasized OpenAI's commitment to iterating on safety measures and responsible product development. The company has incorporated controls allowing users to manage their own likeness permissions, aiming to balance innovation with accountability (86136).
Sora's increasing influence underlines the double-edged nature of generative AI tools: they empower creativity and fan interaction while amplifying challenges around IP rights and digital misinformation. OpenAI's responses, including granting rightsholders more control and considering monetization options, reflect an evolving approach to navigating these complex issues in the rapidly changing AI video landscape (86130).