What is it about?

We tested whether people are influenced by robots during a simple decision‑making task. Participants labeled images while two Furhat robots gave their own answers. One robot spoke in a way that suggested it understood human values (“value‑aware”); the other only described the images visually. We measured:

- whether people changed their answers because of the robots’ opinions,
- how long they took to make their final choice, and
- where they looked during the task (their gaze).

We also asked participants after the experiment whether they had noticed a difference between the two robots.

Why is it important?

Robots are becoming social partners in education, care, and daily life. Understanding how they influence people helps us design safer, more transparent interactions. Our results show that:

- People notice when a robot appears more value‑aware.
- People sometimes conform when both robots disagree (about 25% of the time).
- They take longer to give their final answer when both robots disagree (a trend that was not statistically significant).

This highlights both a risk (robots could pressure or mislead users) and a potential benefit (robots could encourage reflection in situations such as scam detection).

Perspectives

Writing this paper was rewarding because it explored an emerging topic: how “value awareness” affects trust in robots. It also strengthened collaborations across institutions and inspired new ideas for designing robots that support human decision‑making without manipulating it.

Giulia Pusceddu
Italian Institute of Technology

Read the Original

This page is a summary of: “If they disagree, will you conform?”, Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, December 2025, John Benjamins.
DOI: 10.1075/is.25030.pus.
