What is it about?

Researchers across the world are trying to incorporate human values into AI agents, especially agents that are in some way involved in (helping humans with) decision making. This work shows that the value similarity between an AI agent and a human is positively related to how much that human trusts the agent.

Why is it important?

We argue that there is a research gap in understanding the role of values in the trust a human places in an AI agent. An agent with values similar to the human's will be trusted more, which can be very important in any risk-taking scenario. In summary, the results of this study can help designers of explanation- and feedback-giving AI agents to create agents that reflect human values, which is especially important in trust-critical situations.

Perspectives

This is the first article I have written as part of my Ph.D. thesis. The idea for this work came from the Indian epic Mahabharata, in which two different armies jointly fought for a capital kingdom because they shared similar values. Despite the growing research attention on trust in AI agents, a lot is still unknown about people's perceptions of trust in AI agents. Therefore, I wish to know what makes people (appropriately) trust or distrust AI, and this work is part of answering that question through the lens of shared values. Finally, it was a great pleasure and a learning experience to write this article with my co-authors (supervisors).

Siddharth Mehrotra
Technische Universiteit Delft

Read the Original

This page is a summary of: More Similar Values, More Trust? - the Effect of Value Similarity on Trust in Human-Agent Interaction, July 2021, ACM (Association for Computing Machinery),
DOI: 10.1145/3461702.3462576.