What is it about?
Smart cars have gained many new functions in recent years. This work investigates the use of pointing and gaze to select objects (e.g., famous landmarks and buildings) outside the vehicle while driving and to ask the smart car for more information about them (possibly retrieved online through popular search engines). It also shows that the selection algorithm can be personalized to fit each driver's unique way of pointing and gazing at the objects they want to select.
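To make the idea concrete, here is a minimal sketch of how pointing and gaze could be fused to pick an outside object. This is an illustration only, not the paper's actual model: the function names, the weighted-angular-error scoring, and the candidate positions are all assumptions made for this example. The per-driver weights hint at where personalization could enter.

```python
import numpy as np

def angular_error(direction, target_vector):
    """Angle (radians) between a unit pointing/gaze direction and the
    vector from the driver to a candidate object."""
    target_unit = target_vector / np.linalg.norm(target_vector)
    cosine = np.clip(np.dot(direction, target_unit), -1.0, 1.0)
    return np.arccos(cosine)

def select_object(point_dir, gaze_dir, candidates, w_point=0.5, w_gaze=0.5):
    """Score each candidate by a weighted sum of pointing and gaze angular
    errors and return the index of the best match. The weights could be
    tuned per driver, e.g. for drivers who rely more on gaze than on hand."""
    scores = [
        w_point * angular_error(point_dir, c) + w_gaze * angular_error(gaze_dir, c)
        for c in candidates
    ]
    return int(np.argmin(scores))

# Toy example: two landmarks ahead of the car; the driver points and looks
# roughly toward the first one.
landmarks = np.array([[10.0, 2.0, 0.0], [10.0, -5.0, 0.0]])
pointing = np.array([1.0, 0.2, 0.0]); pointing /= np.linalg.norm(pointing)
gaze = np.array([1.0, 0.15, 0.0]); gaze /= np.linalg.norm(gaze)
print(select_object(pointing, gaze, landmarks))  # -> 0
```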
Why is it important?
Our findings open the way for seamless interaction between the car, the driver, and the surrounding environment. They show possible ways of implementing this feature inside a vehicle and help us understand users' needs and individual differences.
Perspectives
This is our second publication on the topic of multimodal gesture recognition while driving; it explores a machine-learning-based fusion approach instead of a rule-based one.
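To illustrate that distinction (again, not the paper's actual model), the sketch below replaces hand-tuned fusion weights with a per-driver classifier that learns from labeled examples how to combine pointing and gaze cues. The features, training data, and threshold are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic per-driver training data: each row holds the pointing and gaze
# angular errors (radians) toward a candidate object; the label says whether
# that candidate was the object the driver actually meant.
n = 200
errors = rng.uniform(0.0, 0.6, size=(n, 2))
# Assume this particular driver's gaze is more reliable than their pointing.
labels = (0.3 * errors[:, 0] + 0.7 * errors[:, 1] < 0.2).astype(int)

# The learned model takes the place of fixed, rule-based fusion weights:
# its coefficients are fitted to this driver's own referencing behavior.
model = LogisticRegression().fit(errors, labels)

def referenced_candidate(candidate_errors):
    """Pick the candidate with the highest predicted probability of being
    the intended object, given its (pointing, gaze) angular errors."""
    probs = model.predict_proba(np.asarray(candidate_errors))[:, 1]
    return int(np.argmax(probs))

# Candidate 0 has small pointing and gaze errors, so it wins.
print(referenced_candidate([[0.05, 0.04], [0.4, 0.5]]))  # -> 0
```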
Amr Gomaa
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
Read the Original
This page is a summary of: ML-PersRef: A Machine Learning-based Personalized Multimodal Fusion Approach for Referencing Outside Objects From a Moving Vehicle, October 2021, ACM (Association for Computing Machinery).
DOI: 10.1145/3462244.3479910