What is it about?

This work presents a web-based system that helps people feel more connected when watching videos online. The system uses a webcam to recognise simple facial expressions, such as happiness, surprise, sadness, or a neutral expression, and turns them into visual feedback during video playback. Instead of asking viewers to type comments or click reaction buttons, the system shares emotional reactions automatically, in a lightweight and anonymous way. We designed two visual styles: Emotion Bubble, which displays emotions unobtrusively at the side of the video, and Emotion Danmaku, which overlays emoji reactions that scroll across the screen in the style of danmaku comments. Early user feedback suggests that these designs can make watching videos feel more social and enjoyable, while also raising important questions about distraction, privacy, and user control.
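The summary above describes a pipeline of webcam capture, in-browser facial expression recognition, and an on-screen reaction overlay. The paper's code is not reproduced here, but a minimal, hypothetical TypeScript sketch of the Emotion Danmaku style is shown below, assuming the open-source face-api.js library for expression recognition; the emoji mapping, one-second sampling rate, 0.6 confidence threshold, and the helper names startEmotionFeedback and spawnDanmaku are all illustrative choices, not the authors' implementation.

```typescript
// Minimal sketch of the webcam -> expression -> danmaku pipeline described
// above. Assumes the open-source face-api.js library and its pretrained
// models served from /models; the paper's actual implementation may differ.
import * as faceapi from 'face-api.js';

// Illustrative emoji mapping for the four expressions named in the summary.
const EMOJI: Record<string, string> = {
  happy: '😄',
  surprised: '😲',
  sad: '😢',
  neutral: '😐',
};

// `overlay` is assumed to be a position:relative, overflow:hidden <div>
// stacked on top of the shared video player.
async function startEmotionFeedback(overlay: HTMLDivElement): Promise<void> {
  // One-time model loading: face detector + expression classifier.
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  // The webcam feed is used only for local inference; raw frames are
  // never uploaded, only the recognised reaction would be shared.
  const cam = document.createElement('video');
  cam.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await cam.play();

  // Sample the viewer's expression once per second (an assumed rate).
  setInterval(async () => {
    const result = await faceapi
      .detectSingleFace(cam, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions();
    if (!result) return; // no face currently in frame

    // Pick the most probable of the expressions the summary mentions.
    const [label, score] = Object.entries(result.expressions)
      .filter(([name]) => name in EMOJI)
      .sort((a, b) => b[1] - a[1])[0];
    if (score > 0.6) spawnDanmaku(overlay, EMOJI[label]); // assumed threshold
  }, 1000);
}

// "Emotion Danmaku" style: an emoji scrolls right-to-left over the video.
function spawnDanmaku(overlay: HTMLDivElement, emoji: string): void {
  const el = document.createElement('span');
  el.textContent = emoji;
  el.style.cssText = `
    position: absolute;
    left: 100%;
    top: ${10 + Math.random() * 70}%;
    font-size: 2rem;
    transition: transform 6s linear;
  `;
  overlay.appendChild(el);
  void el.offsetWidth; // flush styles so the transition actually animates
  el.style.transform = `translateX(-${overlay.clientWidth + 100}px)`;
  setTimeout(() => el.remove(), 6500); // clean up after the slide finishes
}
```

One design point worth noting: running recognition locally in the browser, as in this sketch, keeps raw webcam frames on the viewer's device, so only the recognised reaction would ever be shared, which speaks directly to the privacy concerns the summary raises.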

Why is it important?

Most online co-watching tools depend on chat messages, likes, or manual emoji reactions. Our work is different because it explores how people’s natural facial expressions can be used to create shared emotional feedback while they watch videos. This is timely because more people now watch media remotely or alone, yet still want a sense of connection with others. The findings may help designers build future video platforms that feel more social and expressive without becoming more distracting.

Perspectives

I started this work because I was interested in why watching videos online can still feel lonely, even when people are technically connected. Small shared reactions, such as laughing at the same moment, are an important part of watching together, but they are often missing online. This project allowed me to explore how facial expression recognition could be used in a more human-centred way: not just to detect emotions, but to help create a sense of shared experience. I hope this work encourages people to think about how AI can support social connection in everyday media use.

Yusen Zhang
University of Glasgow

Read the Original

This page is a summary of: Enhancing Co-Watching with Real-Time Emotion Feedback via Facial Expression Recognition, November 2025, BCS Learning and Development Limited, DOI: 10.14236/ewic/bcshci2025.62.
You can read the full text at https://doi.org/10.14236/ewic/bcshci2025.62.
