Hundreds of fake ChatGPT apps have flooded app stores in recent weeks, as hackers seek to take advantage of the viral AI chatbot’s popularity.
Google’s Play Store has been inundated with unofficial ChatGPT apps, some with millions of downloads, while privacy researcher Alex Kleber noted that an “alarming” number of scam apps had appeared within the macOS App Store.
“Most of these apps are nothing but cheap imitations or outright scams that fail to deliver on their promises,” he wrote in a report published this week.
“These scams not only deceive users but also tarnish the reputation of legitimate developers and hinder the growth of the app ecosystem on the MacOS platform.”
The trend has forced OpenAI to seek to fast-track a trademark for the GPT acronym, which stands for Generative Pre-trained Transformer, having initially filed a trademark application in December.
The US Patent and Trademark Office refused the request to speed up the process, meaning OpenAI will have to continue to wait for its trademark application to be approved. A spokesperson for OpenAI did not immediately respond to a request for comment.
Beyond the malicious apps plaguing app stores, more legitimate clones have also appeared or been announced that borrow the GPT name.
Earlier this month, Elon Musk said that he was working on a ChatGPT alternative called TruthGPT, which he claimed would serve as a “maximum truth-seeking AI”.
Mr Musk, who co-founded OpenAI before it transitioned from a non-profit to a for-profit, said the AI would try to “understand the nature of the universe” and offer the “best path to safety” for humanity.
The tech billionaire was among thousands of signatories to an open letter from the Future of Life Institute calling for a pause on the development of all AI systems more powerful than OpenAI’s GPT-4, and for research to be redirected towards developing safety protocols for artificial intelligence.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict or reliably control,” the letter stated.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”