What is it about?

Chatbots are increasingly able to pose as humans. However, this no longer holds once their identity is explicitly disclosed to users, a practice that will soon become a legal obligation for many service providers. Previous studies hint at a chatbot disclosure dilemma: disclosing the non-human identity of chatbots comes at the cost of negative user responses. Because these responses are commonly attributed to reduced trust in algorithms, this research examines how the detrimental impact of chatbot disclosure on trust can be buffered. Drawing on computer-mediated communication theory, the authors demonstrate that the chatbot disclosure dilemma can be resolved if disclosure is paired with a presentation of the chatbot's capabilities.

Why is it important?

The study's results show that while disclosing chatbot identity does reduce trust, pairing the disclosure with information about the chatbot's expertise or weaknesses mitigates this detrimental effect.

Perspectives

"Not disclosing chatbot identity is unethical, but transparently disclosing can destroy customer trust. We inform managers how to solve this dilemma."

Dr Maik Hammerschmidt
Georg-August-Universität Göttingen

Read the Original

This page is a summary of: Resolving the Chatbot Disclosure Dilemma: Leveraging Selective Self-Presentation to Mitigate the Negative Effect of Chatbot Disclosure, January 2021, HICSS Conference Office, DOI: 10.24251/hicss.2021.355.
You can read the full text via the DOI above.

Contributors

The following have contributed to this page