Social media is flooded with AI-generated content. Text and images created by AI are spreading like wildfire, and they’re becoming harder and harder to spot. Companies are adopting these technologies en masse because they’re cheap and readily available. The positive result is the occasional hilarious viral news story about an AI misadventure. The negative is the continuing proliferation of fake news and misinformation. The credulous are becoming more and more misguided, whilst the incredulous can’t trust anything they see or read. Then in comes OpenAI’s Sora, and there goes our ability to trust video content. We’ve recently seen the complete impotence of world governments when it comes to regulating potentially dangerous new technologies. They are incapable of taking any meaningful action against the threats posed by these new AI-generated videos.

What is Sora?

OpenAI is an AI research organisation that hilariously claims to develop “safe and beneficial” AI technology, and Sora is their latest creation. From a text prompt, Sora can create realistic minute-long videos. Whilst still flawed, they’re quite believable if the viewer isn’t looking too closely. If you’re unfamiliar with the promotional footage from Sora, I suggest you check out this Washington Post article. In it you can see examples of Sora-generated footage, as well as some of the issues each piece of footage has. AI-manipulated footage was already becoming commonplace; for example, adding and removing objects from video footage, or filling in gaps in footage. Sora’s videos, however, are 100% AI-generated. They’re not perfect, but their potential is clear. A time will come when we won’t be able to tell they’re fake at all.

Why is Sora’s existence a problem?

Sora, and other AI video-generating technologies, will exacerbate the problems we’re already facing with AI:

  • Deep-fakes are going to become easier to make and harder to spot. It will be possible to create videos of anyone doing absolutely anything. This won’t just be a problem for celebrities and politicians. There aren’t many people who don’t have images of themselves online, so every one of us is vulnerable to deep-fakes. Once fake videos of you committing criminal/sexual/embarrassing acts are on social media, they’re not going away.
  • Related to deep-fakes, misinformation and fake news are only going to become more common. Many people already happily believe a fake image or piece of text is real, even when it is clearly AI-generated. Video will be even more believable. We’ll see more threats to democracy, peace and equality. Everything that holds society together is already crumbling at the hands of fake news; Sora is not going to help.
  • Entire creative industries are under threat. AI-generated art is already ruining the careers of artists, graphic designers, illustrators and more. Why employ copywriters, authors, scriptwriters or songwriters when AI can do all that too? Photographers, filmmakers, documentarians: all replaceable. So much time, energy and money has gone into replicating, replacing and ruining human creativity. For what?

Here are some examples of problems caused by AI, and how they’re going to be made worse by Sora:

Upcoming Elections

In January, thousands of residents of the state of New Hampshire in the US received a robocall from a voice that sounded like, and claimed to be, Joe Biden, encouraging them not to vote. This was the result of AI voice-cloning software employed by a Texan telemarketing company. In response, the FCC quickly ruled that AI-generated voices in robocalls are illegal. Of course, the campaign had little success; people are used to ignoring robocalls, a traditional tool of the scammer. But the result would have been different had the format been video, and the spread wider had it been posted to social media. Many more people would have been fooled. With the 2024 US presidential election coming up, we have many more political deep-fakes and much more misinformation coming our way.

Deep-fake Pornography

Around a month ago, X (previously Twitter) was flooded with AI-generated deep-fake pornographic images of Taylor Swift, created from a prompt on 4chan. Eventually the images were taken down and ‘Taylor Swift’ was temporarily blocked as a search term on the platform, but not before the images had racked up millions of views. In fact, X was so slow to act that it was left to her legions of fans to mass-report the images and encourage others not to view them. X’s policies do not allow non-consensual nudity, but in practice that policy has little meaning or effect. This is going to happen again and again, especially with the introduction of AI video, and not only to celebrities but to regular people who don’t have the leverage to do anything about it.

Sora will do nothing but harm the state of social media.

As we have seen time and again, fake news, misinformation and harmful material spread like wildfire on social media. Social media algorithms are designed to spread engaging content as far and as fast as possible, whether or not it’s true. Sora is going to be used maliciously to create inflammatory and dangerous content; I know this for certain. Social media is already a dangerous place to be, and it’s only going to get worse. As a species, we lack the critical thinking and education required to navigate the new era of social media that AI is creating. So we need to leave. We need to stop using it. For our own sakes, and for those of everyone around us.

Polly Cumming

Polly Cumming is a British literary graduate keen to write about human existence at this moment in time. She’s thrilled to see some positive change in the world of social media.