What is it about?

This research is about protecting the "eyes" of self-driving cars. These cars often use a single camera to judge distances (a task called monocular depth estimation). We show how this critical system can be tricked by physical "hacks" on the road, such as altered markings or stuck-on patches. Our work introduces a new safety checklist (called AdvFMEA) to proactively find, measure, and rank these risks before they cause accidents. We tested this method on different types of vision models in simulated and real-world driving settings to see which are most vulnerable.


Why is it important?

This work is crucial because it moves safety testing for autonomous vehicles beyond checking for ordinary errors. We provide a systematic way to quantify the real-world danger posed by intentional, adversarial attacks on a car's perception system. Our findings reveal a critical trade-off: some of the most advanced AI models are easily fooled by these visual tricks, while simpler models are more prone to routine distance errors. The framework we developed helps engineers choose safer vision systems and set safety limits, directly supporting international safety standards (such as ISO 8800:2024) to build more resilient and trustworthy self-driving cars.

Perspectives

A key insight from this research is that integrating advanced AI into established engineered systems can create both upstream and downstream disruptions. However, I view these not as setbacks, but as necessary challenges to be systematically identified and resolved. This work has reinforced the importance of continuous documentation and knowledge transfer between scientific research and real-world engineering. Keeping this connection strong through ongoing study is essential to build safer, more reliable autonomous systems in the long run.

Dr. Sanjay Singh
Manipal Institute of Technology, Manipal

Read the Original

This page is a summary of: AdvFMEA: Adversarial-Aware Failure Mode and Effects Analysis for Safety-Critical Monocular Depth Estimation in Autonomous Vehicles, IEEE Access, January 2025, Institute of Electrical & Electronics Engineers (IEEE),
DOI: 10.1109/access.2025.3638855.
