Stolen ChatGPT accounts for sale on the dark web

Security researchers discover 200,000 OpenAI logins on illicit markets

Anthony Cuthbertson
Thursday 20 July 2023 15:43 BST
The app for OpenAI’s ChatGPT on a smartphone screen in Oslo, on 12 July, 2023 (Getty Images)


Hundreds of thousands of stolen login credentials for ChatGPT are being listed for sale on dark web markets, security researchers have warned.

Cyber security firm Flare discovered over 200,000 OpenAI logins on the dark web – a section of the internet unreachable through conventional web browsers – offering criminals a way to access users’ accounts or simply use the premium version of the AI tool for free.

The Independent has reached out to OpenAI for further information and comment. The AI firm previously defended its security practices after a smaller batch of credentials was discovered online.

“OpenAI maintains industry best practices for authenticating and authorising users to services including ChatGPT,” a spokesperson said last month. “We encourage our users to use strong passwords and install only verified and trusted software to personal computers.”

The listings come amid a surge in interest in generative artificial intelligence from malicious actors, with discussions about ChatGPT and other AI chatbots flooding criminal forums.

Research published in March found that the number of new posts about ChatGPT on the dark web grew seven-fold between January and February this year.

Security firm NordVPN described the exploitation of ChatGPT as “the dark web’s hottest topic”, with cyber criminals seeking to “weaponise” the technology.

Among the topics under discussion were how to create malware with ChatGPT and ways to hack the AI tool to make it carry out cyber attacks.

Earlier this month, researchers discovered a ChatGPT-style AI tool with “no ethical boundaries or limitations” called WormGPT.

The AI tool WormGPT features similar functionality to ChatGPT, without any of the restrictions (iStock/The Independent)

It was described as ChatGPT’s “evil twin”, allowing hackers to perform attacks at a never-before-seen scale.

“ChatGPT has carried out certain measures to limit nefarious use of its application but it was inevitable that a competitor platform would soon take advantage of using technology for illicit gain,” Jake Moore, an advisor at the cyber security firm ESET, told The Independent.

“AI chat tools create a powerful tool but we are wandering into the next phase which casts a dark cloud over the technology as a whole.”
