What is it about?
We foresee a future in which voice assistants (like Siri or Alexa) can recognize when people look confused or upset during an interaction. This study examines how voice assistants can acknowledge and repair their own mistakes, and how people respond when they do.
Why is it important?
Voice assistants are becoming increasingly human-like, and this study offers a glimpse of what they may evolve into. We find that self-repair improves interactions, but we also surface participants' concerns about the "creepiness" of such human-like behavior.
Perspectives
I hope this article accentuates the tension created by the human-likeness of machines: the competing values it raises around usability and desirability, and the ethical concerns it generates by blurring the line between what is authentic and what is artificial.
Andrea Cuadra
Read the Original
This page is a summary of: My Bad! Repairing Intelligent Voice Assistant Errors Improves Interaction, Proceedings of the ACM on Human-Computer Interaction, April 2021, ACM (Association for Computing Machinery), DOI: 10.1145/3449101.