
OpenAI value triples in nine months as it releases transformative ‘Sora’ video tool

In less than a decade, the AI firm has risen to become the world’s third most valuable private company

Anthony Cuthbertson
Monday 19 February 2024 11:33 GMT
A screenshot of a video generated from a text prompt using OpenAI’s Sora AI tool (OpenAI)

OpenAI has tripled in value in just nine months after securing a new deal with venture capital firm Thrive Capital, according to reports.

The ChatGPT creator, which was founded in 2015 as a non-profit, now ranks as the world’s third most valuable private company behind Elon Musk’s SpaceX and TikTok parent ByteDance.

The latest deal, first reported by The New York Times, values OpenAI at around $80 billion – up from $27 billion last year.

Reports of the new valuation come after OpenAI released a new artificial intelligence tool called Sora, which creates videos from a simple text prompt.

Sora has prompted both praise and concern since it was unveiled last week, owing to the highly realistic videos it is capable of creating.


AI experts have warned that it could be used to spread disinformation, while some fear it could lead to the mass automation of entire creative industries.

“You guys are going to end so many careers for people,” one user wrote on an OpenAI community forum following Sora’s release.

“Photographers, artists, animators, filmmakers, and possibly even actors. Being in these industries is hard already, and now with this people might not have jobs anymore.”

The product, which has not yet been released to the public, follows successful rollouts of OpenAI’s other leading generative AI tools, including the text-based chatbot ChatGPT and the image generator Dall-E.

OpenAI has frequently addressed safety concerns about its products, noting with the unveiling of Sora that it could potentially be misused.

“We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology,” the company said.

“Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.”
