According to Engadget, OpenAI’s Sora app has launched on Android through the Google Play Store, while the iOS version remains limited to select markets and requires an invitation. The text-to-video generative AI tool reportedly passed 1 million downloads in under five days despite those restrictions. The app has already drawn controversy, with users creating disrespectful Martin Luther King Jr. clips and the Japanese government issuing censure over anime copycats. OpenAI also faces a lawsuit from Cameo alleging copyright infringement, filed the day before the company released its own “cameo” feature for inserting existing entities into AI videos. Some form of persona licensing appears to be part of OpenAI’s eventual monetization strategy for Sora.
<h2 id="legal-trouble">The Legal Storm Is Building</h2>
Here’s the thing about hitting 1 million downloads in five days: that kind of explosive growth attracts attention, and not the good kind. OpenAI is playing with fire. The Martin Luther King Jr. clips and anime copycats are just the visible tip of the iceberg. And launching a feature called “cameo” the day after being sued by Cameo? That’s either incredibly tone-deaf or deliberately provocative.
Think about it: you get sued for copyright infringement one day, then release a feature with the exact same name the next? That doesn’t feel like an accident. It feels like OpenAI is testing boundaries, seeing how much it can get away with before regulators really step in. But here’s the real question: at what point does the legal risk outweigh the growth potential?
<h2 id="monetization-problem">The Monetization Problem</h2>
So OpenAI wants to license personas eventually. That’s the big monetization plan? Good luck with that. We’ve seen how messy rights management gets in the music industry, and that’s with established legal frameworks. Now imagine trying to license digital personas while the technology itself is churning out unauthorized versions of real people and characters daily.
And let’s be honest: the “cameo” feature sounds like an attempt to have it both ways. They want users to insert existing entities now and sort out the rights later. That’s building the plane while flying it, and the turbulence is already getting severe.
<h2 id="content-moderation">The Content Moderation Nightmare</h2>
Look, text-to-video is inherently harder to moderate than text or even images. You can’t just filter for keywords when someone can generate a convincing video of pretty much anything. The MLK situation is probably just the first of many controversies waiting to happen.
What happens when this scales to 10 million users? Or 100 million? The Japanese government’s complaint shows this isn’t just a US problem: every country has cultural icons and IP it wants protected. OpenAI is going to need an army of moderators and lawyers, and even that might not be enough. They’ve created a content moderation problem that makes social media look simple by comparison.
