What is it about?
Ever wish you could see exactly what your colleague is looking at when discussing a complex 3D protein structure? GazeMolVR makes that possible. This tool puts two scientists (or students, or anyone!) together in a shared virtual reality space, where you can explore, point, and talk about molecules in real time. The magic twist: GazeMolVR shows each person’s eye-gaze as visual cues—like trails, arrows, or spotlights—so you always know what part of the molecule your collaborator is focusing on. We tested different ways to show these gaze cues and found that some work better for certain molecule styles (cartoon, ball-and-stick, surface). Our studies show that sharing eye-gaze makes remote scientific discussions faster, clearer, and a lot more like being in the same room.
Why is it important?
When you’re working with complex 3D data, “showing” is way better than “telling.” GazeMolVR bridges the gap between remote collaboration and face-to-face teamwork by letting you see what your partner is looking at, right inside the VR environment. This boosts understanding, cuts down on confusion, and makes online science meetings way more productive—especially when you can’t just point at a screen together. It’s a leap forward for virtual teamwork in research and education.
Perspectives
The future of molecular science is interactive, immersive, and truly collaborative. Tools like GazeMolVR mean researchers and students can work together across the globe as if they’re standing side-by-side, sharing focus and ideas in real time. Expect even more intuitive ways to communicate and explore together in VR, opening up new possibilities for discovery, learning, and scientific teamwork.
Dr Marc Baaden
CNRS
Read the Original
This page is a summary of: GazeMolVR: Sharing Eye-Gaze Cues in a Collaborative VR Environment for Molecular Visualization, December 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3701571.3701599.