
OpenAI’s new AI tool generating surreal videos from text prompts sparks concerns

‘We are working with experts in areas like misinformation, hateful content, and bias, who are testing Sora,’ OpenAI says

Vishwam Sankaran
Friday 16 February 2024 05:34 GMT

OpenAI has unveiled a new tool to make ultra-realistic artificial intelligence-generated videos from text inputs, sparking concerns about such AI systems being misused to manipulate voters ahead of elections.

The AI tool, named Sora, can be used to create videos of up to 60 seconds with “highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions,” the ChatGPT company said in a blog post on Thursday.

OpenAI shared multiple sample videos that were made using the AI tool, which looked surreal.

One example video shows two people, seemingly a couple, walking away from the “camera” through a snowy Tokyo street.

The very lifelike video was generated by the AI tool from a detailed text prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”

Another video made using the tool and shared by OpenAI chief executive Sam Altman shows ultrarealistic woolly mammoths treading through a snowy landscape, with snowcapped mountains in the distance.

The ChatGPT company says Sora understands how objects “exist in the physical world” and can “accurately interpret props and generate compelling characters that express vibrant emotions”.

The AI tool’s announcement has sparked concerns among many social media users, particularly about the potential public release of Sora in an election year in the US.

Experts have already raised numerous concerns about the misuse of such AI technology, including the use of deepfake videos and chatbots to spread political misinformation ahead of elections.

“My biggest concern is how this content could be used to trick, manipulate, phish, and confuse the general public,” ethical hacker Rachel Tobac, a member of the technical advisory council of the US government’s Cybersecurity and Infrastructure Security Agency (CISA), posted on X.

AI scams on the rise

Even though OpenAI acknowledged risks associated with the widespread use of the tool, stating it was “taking several important safety steps ahead of making Sora available in OpenAI’s products”, Ms Tobac said she was “still concerned”.

Citing examples of how the tool may be misused, she said adversaries may use the AI tool to build a video that appears to show a vaccine side effect that doesn’t exist.

In the context of elections, she said such a tool could be misused to show “unimaginably long lines in bad weather” to convince people it is not worth heading out to vote.

OpenAI said its teams were implementing rules to limit potentially harmful uses of Sora, such as depictions of extreme violence, celebrity likenesses, or hateful imagery.

“We are working with red teamers – domain experts in areas like misinformation, hateful content, and bias – who are adversarially testing the model,” the ChatGPT creator said.

But Ms Tobac fears adversaries may find ways to skirt the rules.

“Take my example above, prompting this AI tool for ‘a video of a very long line of people waiting in a torrential downpour outside a building’ isn’t in violation of these policies — the danger is in how it’s used,” she explained.

“If that AI-generated video of an impossibly long line of people in torrential downpour is used by an adversary to post on social media on Election Day, now it could be used to convince certain folks to stay home and avoid the polls and line/weather,” the hacker explained.

She called on OpenAI to discuss how it could partner with social media channels to auto-recognize and label AI-generated videos shared on platforms, along with developing guidelines for labeling such content.

OpenAI did not immediately respond to The Independent’s request for comment.

“This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet. Crafting a new false narrative can now be done at dramatic scale, and much more frequently – it’s like having AI agents contributing to disinformation,” Gordon Crovitz, co-chief executive of NewsGuard, a misinformation tracking company, told The New York Times.
