AI worm that infects computers and reads emails created by researchers

Morris II represents new breed of ‘zero-click malware’, researchers warn

Anthony Cuthbertson
Monday 04 March 2024 14:47 GMT
The UK National Cyber Security Centre (NCSC) said in January that the commercial cyber intrusion sector is doubling in size every ten years
The UK National Cyber Security Centre (NCSC) said in January that the commercial cyber intrusion sector is doubling in size every ten years (PA Media)

Security researchers have developed a self-replicating AI worm that can infiltrate people’s emails in order to spread malware and steal data.

Dubbed Morris II, after the first ever computer worm from 1988, the computer worm was created by an international team from the US and Israel in an effort to highlight the risks associated with generative artificial intelligence (GenAI).

The worm is designed to target AI-powered apps that use popular tools like OpenAI’s ChatGPT and Google’s Gemini. It has already been demonstrated against GenAI-powered email assistants to steal personal data and launch spamming campaigns.

The researchers warned that the worm represented a new breed of “zero-click malware”, as the victim does not have to click on anything to trigger the malicious activity or even propagate it. Instead, it is carried out by the automatic action of the generative AI tool.

“The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload),” the researchers wrote.

“Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem.”

The research was detailed in a study, titled ‘ComPromptMized: Unleashing zero-click worms that target GenAI-powered applications’.

Since the launch of ChatGPT in 2022, security researchers have noted the potential for hackers and cyber criminals to use some element of generative AI in order to carry out attacks.

The technology’s ability to realistically imitate human-generated text means non-native speakers could use it to generate convincing fraudulent emails and texts.

Cyber security firm CrowdStrike warned in its annual Global Threat Report, published last month, that its researchers had observed nation-state actors and hactivists experimenting with tools like ChatGPT.

“Generative AI [can] democratise attacks and lower the barrier of entry for more sophisticated operations,” a company representative wrote in an email to The Independent. “Generative AI will likely be used for cyber activities in 2024 as the technology continues to gain popularity.”

Join our commenting forum

Join thought-provoking conversations, follow other Independent readers and see their replies


Thank you for registering

Please refresh the page or navigate to another page on the site to be automatically logged inPlease refresh your browser to be logged in