ChatGPT and other chatbots respond to emotions, study says

Systems will perform better if they are given emotional prompts, researchers discover

Andrew Griffin
Monday 06 November 2023 22:38 GMT

Chatbots such as ChatGPT respond to the emotions of their users, according to a new study.

The systems will actually respond better if users give them emotional prompts.

The researchers, who included representatives from Microsoft, note that large language models such as ChatGPT are widely regarded as a step towards artificial general intelligence, or a system that could learn at the same level as a human.

But they said that one of the key things holding them back is their lack of emotional intelligence. “Understanding and responding to emotional cues gives humans a distinct advantage in problem-solving,” the researchers note in a paper that has been posted online.

To test whether those models are able to understand emotional stimuli, the researchers examined how a variety of systems performed on tasks of emotional intelligence. They used ChatGPT and GPT-4 as well as other systems such as Meta’s Llama 2.

They fed the models phrases that stressed how important the task was, such as telling them that the task mattered for the user’s career or that they should take pride in their work. They also gave them other prompts intended to make them question themselves, such as asking whether they were sure about their answers.

The researchers refer to those phrases as “EmotionPrompts”, which were built on the basis of a number of psychological theories. Some encouraged “self-monitoring” by asking the model about its own confidence, for instance, while others drew on social cognitive theory with encouragements such as “stay determined”.
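In practice, the technique is straightforward: the emotional phrase is simply appended to an otherwise ordinary task prompt. The short Python sketch below illustrates the idea; the phrases are paraphrases of the examples described above, so the exact wording used in the study may differ.

    # A minimal sketch of the idea behind "EmotionPrompts": an emotional
    # stimulus is appended to an otherwise ordinary task prompt.
    # The phrases below are illustrative paraphrases, not the study's exact text.
    EMOTION_PROMPTS = [
        "This is very important to my career.",          # stresses the task's importance
        "Are you sure? Take pride in your work.",        # "self-monitoring" cue
        "Stay determined and work towards your goals.",  # social cognitive theory
    ]

    def add_emotion_prompt(task_prompt: str, stimulus: str) -> str:
        # Combine the plain task prompt with an emotional stimulus.
        return f"{task_prompt} {stimulus}"

    # Example: turn a plain instruction into an emotionally prompted one.
    base = "Summarise the following paragraph in one sentence."
    print(add_emotion_prompt(base, EMOTION_PROMPTS[0]))
    # -> Summarise the following paragraph in one sentence. This is very important to my career.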

Those prompts worked, the researchers found. Using them significantly boosted the systems’ results on generative tasks: they were on average 10.9 per cent better across measures of performance, truthfulness and responsibility, the authors write.

The paper concludes that much remains mysterious about why the emotional prompts work, and the authors say more research should be done to understand how psychology interacts with large language models.

They also note that large language models and humans respond differently to emotional stimuli, since studies do not suggest that humans reason or think better simply because they are given more emotional encouragement. “The mystery behind such divergence is still unclear, and we leave it for future work to figure out the actual difference between human and LLMs’ emotional intelligence,” the researchers conclude.

A paper describing the findings, ‘Large Language Models Understand and Can be Enhanced by Emotional Stimuli’, is published on arXiv.
