Artificial intelligence researchers have discovered neurons within an AI system of a kind previously seen only in the human brain.
The discovery was made using a general-purpose vision system called CLIP, which is trained on large and varied datasets to recognise objects and people in abstract contexts, such as cartoons or statues.
“We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually,” OpenAI explained in a blog post.
“Our discovery of multimodal neurons in CLIP gives us a clue as to what may be a common mechanism of both synthetic and natural vision systems: abstraction.”
Multimodal neurons were first identified in the human brain in 2005, when scientists found that a single neuron could respond to a common concept regardless of how that concept was delivered through the senses.
Rather than millions of neurons working together to identify a picture of a celebrity, for example, a single neuron can respond selectively to that person.
This suggests that the human brain may devote an individual neuron to each family member, friend and celebrity a person knows, with that neuron responding to photographs, drawings and even that person’s written name, but not to other names.
Similar to their biological counterparts, OpenAI researchers found artificial neurons “that respond to emotions, animals, and famous people”.
One such neuron, which they named the ‘Spiderman neuron’, bore a “remarkable resemblance” to the multimodal neurons first outlined in the 2005 study.
The neuron “responds to an image of a spider, an image of the text ‘spider’, and the comic book character Spiderman, either in costume or illustrated,” the researchers wrote.
Neural networks hold huge potential in the development of advanced artificial intelligence systems, having already powered breakthroughs in areas like facial recognition, digital assistants and self-driving vehicles.
They are composed of artificial neurons or nodes that take inspiration from the architecture of biological neural systems in order to process data.
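The idea of an artificial neuron can be sketched in a few lines of code. This is a simplified, illustrative example only, not code from CLIP or any OpenAI system: a single unit computes a weighted sum of its inputs, adds a bias, and passes the result through a nonlinearity, firing strongly when the inputs match the pattern its weights encode. The weights below are made up for illustration.

```python
def relu(x: float) -> float:
    """Rectified linear unit: a common nonlinearity in neural networks."""
    return max(0.0, x)

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """A single artificial neuron: weighted sum of inputs, plus bias,
    passed through a nonlinearity."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return relu(weighted_sum)

# Hypothetical weights: this unit "prefers" inputs that are high in the
# first and third positions and low in the second.
weights = [0.9, -0.4, 0.7]
bias = -0.1

print(neuron([1.0, 0.0, 1.0], weights, bias))  # matching pattern: strong activation (1.5)
print(neuron([0.0, 1.0, 0.0], weights, bias))  # non-matching pattern: suppressed to 0.0
```

Real networks stack millions of such units into layers and learn the weights from data; the multimodal neurons OpenAI describes are individual units inside such a network whose learned weights happen to respond to one concept across many forms.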
The drawback of such powerful technology is that it can be difficult to know why it makes certain decisions or how it comes to particular conclusions.
This can lead to unwanted outcomes, such as models forming sexist or racist associations with certain categories because of biases in the vast datasets they are trained on.
OpenAI’s model was trained on a curated subset of the internet, yet still inherits certain biases and associations that could prove harmful if it were to be used within commercial applications.
“We have observed, for example, a ‘Middle East’ neuron with an association with terrorism; and an ‘immigration’ neuron that responds to Latin America,” OpenAI stated.
“We have even found a neuron that fires for both dark-skinned people and gorillas, mirroring earlier photo tagging incidents on other models we consider unacceptable.”
The tools the researchers developed to understand such neural networks could help others to preempt potential problems that might arise in the future.