What is it about?
This study looks at how to automatically detect when a robot makes a mistake during a conversation with a human, and how to recognise when the human tries to correct it. We analyse facial expressions, voice patterns, and speech content to build a system that can spot these breakdown and repair moments in real time during human–robot dialogue.
Why is it important?
Robots and AI assistants are becoming more common in daily life, but they still frequently misunderstand users or respond in awkward ways. Detecting these breakdowns quickly is essential for keeping conversations smooth, trustworthy, and safe. Our work shows a practical way to recognise these errors early using lightweight machine-learning methods, which can help future robots respond more naturally, reduce frustration, and improve the overall quality of human–robot interaction.
Read the Original
This page is a summary of: Beyond Technical Failures: Multimodal Time-Series Modelling for Detecting Social Breakdowns and User Repair Attempts in Human-Robot Interaction, October 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3746027.3762074.