A leading science fiction magazine has been forced to pause submissions after being inundated with AI-generated short stories.
Clarkesworld Magazine announced the suspension of short story submissions after its editor warned about the “concerning trend” of generative AI systems like ChatGPT, which create content through simple text prompts.
“We are not considering stories written, co-written, or assisted by AI at this time,” state the submission guidelines for the magazine, which pays 12 cents a word for any story accepted.
Editor Neil Clarke said the technology posed an existential threat to the entire short-story ecosystem by blocking new writers and plagiarising existing authors.
“If the field can’t find a way to address this situation, things will begin to break,” he wrote in a blog post earlier this month.
“I’ve reached out to several editors and the situation I’m experiencing is by no means unique... It’s not just going to go away on its own and I don’t have a solution. I’m tinkering with some, but this isn’t a game of whack-a-mole that anyone can ‘win’. The best we can hope for is to bail enough water to stay afloat.”
Mr Clarke noted that the spike in plagiarism and suspicious submissions began around November, when OpenAI released its ChatGPT chatbot.
He did not elaborate on what made him suspect the submissions were not created by a human, but said that there were some “very obvious patterns” that he did not want to share in case people tried to get around them.
Anyone suspected of submitting AI-generated content received an automatic ban, with the number of bans rising 38 per cent in February.
The surge in AI-generated submissions to literary magazines comes as hundreds of AI-written books are being listed for sale on Amazon.
Close to 300 books written or co-written by AI – ranging from self-help to children’s fiction – are currently being sold through the online retailer, though there may be thousands more that do not openly admit to being authored by artificial intelligence.
There are several tools for detecting AI-generated text, including one built by ChatGPT creator OpenAI, but the evolving and complicated nature of chatbots means they are not always reliable.
OpenAI said its system is “not fully reliable” and only identifies around a quarter of AI-written text. It also occasionally labels human text incorrectly.