What is it about?

Autonomous vehicles rely on cameras and other sensors to understand their surroundings, much as humans do while driving. The vehicle reacts to the output (data) from these sensors; this ability is known as Object Event Detection and Recognition (OEDR). During the OEDR cycle, an autonomous vehicle can be made to react inappropriately, and in harmful ways. An adversarial attack can cause such scenarios by adding imperceptible distortions to the incoming data so that the models misinterpret it. We explain how easily this data can be corrupted in the case of images from the camera, and we further illustrate how these harmful alterations can possibly be detected.
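
To make this concrete, below is a minimal sketch of how an imperceptible distortion can be added to a camera image. It uses a gradient-sign perturbation (in the style of the Fast Gradient Sign Method) against a generic pretrained classifier; the model, image, and parameter names are assumptions for illustration, not the exact attack formulated in the paper.

# Minimal sketch: gradient-sign perturbation of an input image (FGSM-style).
# Assumes PyTorch; `model` is any image classifier and `image` is an input
# tensor of shape (1, 3, H, W) with pixel values in [0, 1].
# Illustration only, not the paper's exact attack.
import torch
import torch.nn.functional as F

def perturb_image(model, image, true_label, epsilon=0.01):
    model.eval()
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the correct label.
    logits = model(image)
    loss = F.cross_entropy(logits, torch.tensor([true_label]))

    # Gradient of the loss with respect to the input pixels.
    loss.backward()

    # Nudge each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()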

Why is it important?

Deep learning models learn patterns from data, and their performance depends on how well they identify the underlying patterns. Deep learning architectures are also what power OEDR. To mimic this setup, we used simple convolutional neural networks (CNNs) as classifiers for image data relevant to the autonomous-vehicle scenario. Using information from these CNNs, we crafted an attack that adds malicious patterns to the images. Explainability is the process of 'explaining' the predictions of deep learning models by asking the question 'why'. Using explainability, we assess the impact of the formulated attack on the CNN; further, this process can provide early warning signs of an incoming attack, because the failure to predict correctly is reflected in the model's behavior.
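
As an illustration of the general idea, the sketch below uses one common explainability technique, an input-gradient saliency map, which highlights the pixels that most influence the predicted class. Comparing the saliency of a clean image with that of a perturbed one is one way such behavioral changes can surface; the function names are hypothetical, and this is not necessarily the exact explainability method used in the paper.

# Minimal sketch: input-gradient saliency map as a simple explainability probe.
# Assumes PyTorch; `model` and `image` are as in the earlier sketch.
# Illustration only, not the paper's exact explainability pipeline.
import torch

def saliency_map(model, image):
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    predicted = logits.argmax(dim=1).item()

    # Gradient of the predicted class score with respect to the input pixels.
    logits[0, predicted].backward()

    # Pixel-wise importance: maximum absolute gradient across color channels.
    return image.grad.abs().max(dim=1).values.squeeze(0)

def saliency_shift(model, clean_image, adversarial_image):
    # A large change in where the model "looks" can serve as an early warning sign.
    clean_sal = saliency_map(model, clean_image)
    adv_sal = saliency_map(model, adversarial_image)
    return (clean_sal - adv_sal).abs().mean().item()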

Perspectives

As the level of automation in today's vehicles increases, it is essential that this shift is accompanied by safety. Making these vehicles perceive their surroundings securely, so that no harm comes to the user or the surroundings, is a pressing challenge. Our aim in conducting this study was to understand the behavior of deep learning perception models, specifically when they are exposed to adversarial attacks. This research can further provide a basis for explanation-based detection and mitigation methods.

Dr. Sanjay Singh
Manipal Institute of Technology, Manipal

Read the Original

This page is a summary of: Explainability of Image Classifiers for Targeted Adversarial Attack, November 2022, Institute of Electrical & Electronics Engineers (IEEE),
DOI: 10.1109/indicon56171.2022.10039871.
You can read the full text via the DOI above.

