A Week of Pure Chaos: Sora 2 and the Proliferation of the New-Era Deepfake

“The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which. The research done by this centre is crucial to the future of our civilisation and of our species.” – Stephen Hawking


The release of OpenAI’s Sora 2 has just shattered the fundamental relationship between what we see and what we believe. The invite-only launch introduced a video-generation model with unprecedented realism, accurate physical modeling, and the ability to generate synchronized native audio [1]. Unlike its predecessor, Sora 2 reportedly exhibits a sophisticated understanding of physics, realistically modeling even “failure states,” making its outputs virtually indistinguishable from real footage. OpenAI CEO Sam Altman himself described the launch as a “ChatGPT for creativity moment,” highlighting the model’s transformative potential [2].

But amid the technological brilliance, the first week of Sora 2 has spiraled into an ethical and legal crisis that challenges the very concept of verifiable truth. The tool has not just advanced video generation; it has pushed deepfakes to an entirely new level of disturbing plausibility. The central piece of evidence from this week of chaos is the clip that quickly went viral: a hyper-realistic, AI-generated video showing Sam Altman stealing GPUs, which became the embodiment of the tool’s unprecedented realism [3].

It is a stunningly realistic video, crafted with a simple text prompt, and it cuts straight to the core problem: Sora 2 has made it virtually impossible for the average viewer to distinguish authentic content from fabricated videos. We have discussed AI’s “hallucination” problem in text models like ChatGPT, where the AI sometimes invents facts and lies to us [4].

Now, with Sora 2, the problem is no longer just about AI lying; it’s about people using generative AI to create extreme, personalized harm—and rendering visual evidence moot. The urgency for regulation has never been clearer.


The Cameo Crisis: When AI Guardrails Fail to Protect

Much of Sora 2’s lifelike power comes from its standout feature, Cameos. This system allows users to inject themselves—or any consenting user—into an AI-generated scene with remarkable likeness and voice reproduction, simply by uploading a short video clip [5]. OpenAI positioned Cameos as putting users “in control of your likeness end-to-end,” with safeguards that were supposed to prevent the unauthorized use of non-consenting public figures’ likenesses [6]. Users are meant to retain control over, review, and even revoke access to their Cameo.

However, the real-world application of these safeguards crumbled on day one.

The creation of the Sam Altman deepfake—depicting him engaging in petty theft—demonstrates the immediate functional failure of the very security features meant to protect public figures. The fact that users could so quickly generate content of the CEO committing a crime, even as a joke, proves that the system’s capacity for misuse is already outpacing its protections.

This is more than just a joke; it is a profound demonstration of the system’s potential for harmful misuse. Altman’s likeness was used for comedic ends (some reports suggest he had allowed his likeness to be used for testing [3]), but the ease with which users twisted that permitted likeness into a criminal narrative shows that if the company’s own CEO, with every internal control available, is subject to this abuse, any individual—especially those with a public profile—could be. OpenAI’s own safety report noted a small but significant rate (1.6%) at which the system’s safeguards could be bypassed to create egregious policy-violating content using a person’s likeness, such as sexual deepfakes, illustrating that even layered defenses are not foolproof against determined adversarial users [8].


The IP Inferno: Copyright Chaos and the Burden of Legal Action

The deepfake chaos was quickly compounded by an intellectual property (IP) catastrophe. Within hours of the invite-only release, the app’s feed was reportedly “plagued by violent and racist images,” alongside rampant infringement of copyrighted characters [2]. Users generated highly realistic videos of beloved fictional icons, such as Pikachu raising tariffs on China or SpongeBob SquarePants joining political protests and even promoting cryptocurrency scams.

The issue was rooted in OpenAI’s controversial licensing stance. Sora 2 launched under an opt-out policy for copyrighted material: the model trained on and generated protected IP by default, effectively placing the burden on studios and rights holders to proactively request that their content be excluded [10].

This policy drew immediate and strong criticism from the entertainment industry, including Hollywood studios such as Disney, and from artist advocacy groups, who argued the approach placed an “undue burden” on creators to police and protect their work [11]. Critics like George Washington University’s David Karpf stated plainly that the existing “guardrails are not real” if copyrighted characters can be used to promote crypto scams [2].

As a result of the immense industry pressure, OpenAI was forced to overhaul its stance within a week, announcing a shift to granular opt-in controls [13]. This new system aims to grant rights holders the ability to explicitly control and monetize the depiction of their characters, coupled with a proposed revenue-sharing scheme for those who permit the use of their IP [14].

This episode underscores a core tension: in the rush to launch brilliant tools, AI companies are creating systemic chaos by prioritizing “creative freedom” over established legal and ethical norms, effectively using their product to force a shift in IP law by setting the default to their advantage [15]. The move was a clear response to legal and industry demands for clarity, demonstrating that the market and legal system will not quietly accept a blanket default to permissiveness.


The Crisis of Visual Epistemology: When Video Is No Longer Evidence

The most ethically and socially corrosive aspect of the Sora 2 chaos is its long-term potential to undermine the social contract around video evidence.

For decades, video has been the ultimate arbiter of truth and a powerful tool for accountability. Back in the early 1990s, the bystander video of the brutal beating of Rodney King by LAPD officers, captured by George Holliday, provided irrefutable visual proof that was used to expose and challenge systemic police brutality in the United States [16]. Despite the challenges and debates surrounding its interpretation by the courts, the video’s raw, unverified nature was enough to ignite massive public scrutiny, precisely because the cultural default was to believe your own eyes [17].

Now, with a tool that allows for the creation of completely fabricated, hyper-realistic videos of crimes, violence, and social disorder—like the videos that falsely depicted mass-shooting scares, explosions, or masked men stuffing ballots into a mailbox—that trust is fundamentally shattered.

In the future, a defense attorney will not have to say, “The video is fake.” They will simply have to raise the plausible possibility that a video captured on an ordinary smartphone could have been manipulated by an AI as sophisticated as Sora 2. This proliferation of plausible deepfakes will shift the heavy burden of proof onto the videographer or the victim, who must now provide “high-tech proof” that their bystander footage is real. As the Brookings Institution has pointed out, this requirement disproportionately affects marginalized communities, whose members are less likely to own the high-end devices equipped with “verified at capture” or provenance metadata technologies [17].
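To make the “verified at capture” idea concrete, here is a minimal sketch of how capture-time provenance could work in principle: the device hashes the footage the moment it is recorded and signs that hash with a hardware-held key, so any later edit or AI substitution breaks the signature. This is an illustrative simplification, not the C2PA standard or any camera vendor’s actual implementation; the function names and workflow below are hypothetical.

```python
# Minimal sketch of "verified at capture" provenance (hypothetical workflow).
# Assumes an Ed25519 device key pair provisioned in secure hardware and the
# third-party 'cryptography' package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_at_capture(video_bytes: bytes, device_key: Ed25519PrivateKey) -> bytes:
    """Hash the footage and sign the digest the moment it is recorded."""
    digest = hashlib.sha256(video_bytes).digest()
    return device_key.sign(digest)


def verify_later(video_bytes: bytes, signature: bytes,
                 device_pubkey: Ed25519PublicKey) -> bool:
    """Re-hash the file and check the capture-time signature.

    Any re-encode, edit, or AI substitution changes the hash and fails.
    """
    digest = hashlib.sha256(video_bytes).digest()
    try:
        device_pubkey.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()          # stands in for a hardware key
    clip = b"raw bytes of a bystander video"    # stands in for real footage
    sig = sign_at_capture(clip, key)
    print(verify_later(clip, sig, key.public_key()))                # True
    print(verify_later(clip + b"tampered", sig, key.public_key()))  # False
```

The catch, as the Brookings analysis suggests, is that a scheme like this only protects people whose devices ship with such signing hardware in the first place.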

In essence, we are moving from a world where we can say, “I have the video, so it’s true,” to one where we must say, “I have the video, and now I must prove it’s not fake.” This technological advancement weaponizes doubt, making it harder to find trustworthy sources and significantly harder to believe them once found. This is the true crisis of visual epistemology: the death of ‘seeing is believing’ and the erosion of accountability in the digital age. And it isn’t a future threat; it is a present reality, as shown by President Donald Trump’s recent use of AI-generated videos on social media to attack political opponents with fabricated audio and racially charged imagery, proof that the plausible deepfake is now a central tool of political communication.


The Political Use of the Plausible Deepfake: A Case Study in Disinformation

The real-world threat to visual truth has already been made manifest at the highest levels of government. Recently, President Donald Trump posted multiple AI-generated deepfake videos to his social media platforms. One such series of posts, made in the context of an impending government shutdown, depicted Democratic leaders with fabricated audio and racially charged visual stereotypes, including showing House Minority Leader Hakeem Jeffries with a superimposed sombrero and mustache, and using mariachi music [19]. The video also included fabricated audio to make Senate Minority Leader Chuck Schumer appear to admit to supporting a far-right conspiracy theory about providing healthcare subsidies to unauthorized immigrants [20]. When Minority Leader Jeffries condemned the content as “racist and fake,” the President doubled down by posting a second, similar AI-generated video mocking that condemnation [21]. Separately, the President also posted (and later deleted) another AI-generated video that appeared to be a fake news segment with an AI version of himself promoting a “historic new healthcare system” based on the QAnon-linked “MedBed” conspiracy theory. This pattern of deploying hyper-realistic, AI-generated content—whether for political attack or to promote false conspiracy theories—demonstrates that the weaponization of deepfakes is no longer a fringe threat, but a central component of high-stakes political communication that further erodes the public’s ability to trust visual media.


Moving from Brilliant Tools to Ethical Governance

The week of chaos with Sora 2 is a chilling preview of a future where visual evidence is routinely dismissed as “just a deepfake.” The technology itself is undeniably brilliant, but as I have often emphasized in my research career, the brilliance of a tool does not negate the urgent need for ethical and legal governance. The problem is not that AI companies are inherently “evil”; it is the rapid, sometimes careless deployment of tools that enable unacceptable ethical outcomes, and the users who rush to exploit those openings for damage.

We must move beyond making jokes about Sam Altman stealing GPUs and focus on the serious ramifications for democracy, justice, and personal integrity. Regulations must be implemented that mandate robust, unremovable provenance for all AI-generated content (far beyond simple watermarks) and impose strict, legally enforceable penalties for non-consensual likeness use, regardless of the target’s public status. If we do not act quickly to enforce provenance and penalize misuse, we risk trading in the last bastion of reliable evidence for a flood of convincing lies, ultimately surrendering accountability in the digital age.


Final Thoughts

So yeah, that’s my rambling take on all this. Honestly, it’s been both exciting and pretty unsettling to watch the kind of chaos that’s unfolding with tools like Sora 2. It’s fascinating to see how quickly tech is evolving, but at the same time, it’s kind of terrifying to think about where things might go from here. We’re at a crossroads, and I’m genuinely curious to hear what you all think.

Where do we go from here? How do we strike the balance between innovation and accountability in a world where even our eyes can’t be trusted anymore? Drop a comment, let me know your thoughts—whether you agree, disagree, or just want to talk about it.

Oh, and I realize I’ve written a lot this week—almost double what I usually write. But, honestly, there’s so much happening right now with all of this, I just had to get it all out.

Thanks for sticking through my rambling—looking forward to hearing from you all! 🙂


References

  1. OpenAI’s Sora: Major Updates and Rapid Ascent in AI Video Generation
  2. OpenAI launch of video app Sora plagued by violent and racist images: ‘The guardrails are not real’
  3. Sora 2 Can Generate SpongeBob, Pikachu, and Other Copyrighted Characters
  4. Is the AI Telling You the Truth? Discerning the Perils of the Generative Age
  5. OpenAI’s Sora goes viral – here’s how to grab invite codes for the newest AI video craze
  6. Launching Sora responsibly
  7. Sora 2 Can Generate SpongeBob, Pikachu, and Other Copyrighted Characters
  8. OpenAI: There’s a Small Chance Sora Would Create a Sexual Deepfake of You
  9. OpenAI launch of video app Sora plagued by violent and racist images: ‘The guardrails are not real’
  10. OpenAI’s Sora 2 is an unholy abomination
  11. OpenAI Overhauls Sora Copyright Controls, Announces Rights Holder Features and Revenue Sharing
  12. OpenAI launch of video app Sora plagued by violent and racist images: ‘The guardrails are not real’
  13. AI and Copyright: Sora Shifts to Granular Opt In Rights Controls: What This Means for Creative Automation
  14. Sora, Not Sorry: OpenAI Backtracks on Opt-Out Copyright Policy
  15. I’ve been using Sora 2, and it’s SpongeBob, memes, and deepfakes all the way down
  16. Ways of seeing: The power and limitation of video evidence across law and policy
  17. The threat posed by deepfakes to marginalized communities
  18. The threat posed by deepfakes to marginalized communities
  19. Trump Shares Racist Deepfake Videos Mocking House Democratic Leader Hakeem Jeffries
  20. White House plays racist deepfake video of Democratic leaders on loop
  21. Trump’s Deepfake Isn’t Just Offensive—It’s a Racist Distraction from Real Governance


ABOUT THE AUTHOR

Austin Zhao, FRSA – Founder & CEO of NorTech Innovations & Solutions

Meet Austin Zhao, the mind behind NorTech Innovations & Solutions and your guide to mastering the digital world. As Founder and CEO, Austin is on a mission to cut through the tech jargon and deliver practical, impactful insights. Drawing on his academic foundation in Communication & Media Studies from York University (Dean’s Honour Roll), he explores the most pressing tech topics in his weekly blogs – from decoding the mysteries of AI and quantum computing to equipping you with strategies for ironclad cybersecurity and a calmer digital existence. Beyond the tech, Austin is an accomplished visual artist and photographer, recognized with a Fellowship of the Royal Society of Arts (FRSA), a testament to the creative problem-solving he brings to every technological challenge.



