Study: Can humans feel compassion for robots?

Under certain circumstances, robots can make people feel sorry for them. However, this form of emotional manipulation is not always desirable.

Robot Pepper being observed by a human. (Image: MikeDotta/Shutterstock.com)


Scientists at Radboud University in the Netherlands have investigated whether humans can feel compassion for a robot. Yes, they can, and according to the study, a robot can also be designed to exploit that compassion.

A robot feels no pain. Everyone knows that, yet humans are emotional beings who can be emotionally manipulated by robots. All it takes is for the robot to emit whimpering sounds, make sad eyes, and let its arms tremble.

"If a robot can pretend to feel emotional pain, people feel more guilty if they treat the robot badly," explains Marieke Wieringa, PhD student at Radboud University.

In her as-yet-unpublished doctoral thesis, Wieringa and her colleagues investigated how people react to violence against robots. They carried out various tests for this purpose. For example, they showed test subjects videos in which robots were either mistreated or treated well.


Some participants in the study were also asked to shake a robot themselves. Sometimes the robot made pitiful noises and showed other reactions commonly associated with pain; sometimes it did not.

The experiments showed that a robot emitting pitiful noises also elicited more pity: test subjects were then no longer willing to shake it again. Conversely, when the robot showed no emotions, participants had no qualms about shaking it once more.

The researchers extended this test by giving the test subjects the choice of either completing a boring task or shaking a robot. The longer they shook it, the longer they were able to delay completing the boring task.

"Most people had no problem shaking a dumb robot, but as soon as the robot started making miserable noises, they chose to do the boring task," says Wieringa.

Wieringa sees a danger in emotionally responsive robots: people interacting with them could be manipulated. She draws a parallel with the Tamagotchis that appeared at the end of the 1990s. These virtual pets had to be cared for and fed to avoid virtual death. Robotics companies could exploit people's compassion in a similar way, for example by charging money to "feed" a robot. Wieringa and her colleagues therefore consider it appropriate to introduce state regulation of whether and when a robot or chatbot is allowed to fake emotions.

However, Wieringa considers a complete ban on simulated emotions in robots inappropriate. In certain applications, such as therapy, it could be useful for a robot to show emotions so that the people undergoing therapy can process certain experiences better.

The study also showed that people are more reluctant to use violence against robots and feel compassion when the robot shows emotions. In such cases, an emotional reaction from the robot could be useful.

One possible application could be delivery robots. In the USA, they are repeatedly attacked by people who kick them, knock them over, or otherwise disable them because they feel bothered by them. A few whimpering sounds from the robot might curb such behavior.

(olb)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.