What is it about?

In-vehicle networks, particularly those using the Controller Area Network (CAN) protocol, are vulnerable to cyberattacks that can disrupt the normal operation of a vehicle and cause dangerous accidents. Intrusion detection systems (IDS) designed to detect such attacks have emerged as a means of securing the in-vehicle network. To understand which IDS are most effective for CAN, this paper presents replicable experiments in which six CAN IDS are tested against attack samples from a publicly available dataset. By reporting all implementation details along with the results of 10 evaluation metrics, we demonstrate how different CAN IDS can be benchmarked under equivalent experimental settings, enabling a fair comparison of their detection capability and performance.

Why is it important?

In-vehicle networks are crucial to the operation of a vehicle, which is a safety-critical system. It is therefore important for an in-vehicle IDS not only to detect a large variety of cyberattacks, but also to detect them accurately and as soon as they occur. The IDS must also not falsely identify normal in-vehicle network traffic as an attack, which would make it unreliable. Given the large number of CAN IDS available and the variety of ways in which they can be evaluated against these criteria, uniform, repeatable evaluation methods are needed so that the performance of new CAN IDS can be fairly compared with that of existing ones. To enable reproducible experiments, we used the Real ORNL Automotive Dynamometer (ROAD) CAN intrusion dataset, a publicly available dataset containing realistic attack samples. Some of the attacks (such as the targeted ID and masquerade attacks) are particularly difficult to detect, and some (the masquerade attacks) have not previously been used for CAN IDS benchmarking. To further enhance the comparability of evaluation results, we report a total of 10 evaluation metrics that quantify detection capability (e.g. accuracy, F1-score, Matthews Correlation Coefficient) and performance (training and testing times). Our selection of metrics, particularly balanced accuracy, informedness, markedness, and the Matthews Correlation Coefficient, addresses the challenge of class imbalance in CAN intrusion datasets. In addition, we provide an indication of the real-time performance of each IDS by reporting the time taken to train and test the detectors.
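To illustrate why these metrics suit imbalanced data, the sketch below computes them from a binary confusion matrix. This is not the paper's evaluation code; the function name and the example counts (many normal frames, few attack frames) are illustrative assumptions.

```python
# Illustrative sketch: class-imbalance-aware metrics from a binary
# confusion matrix (attack = positive class). Counts are made up.

def imbalance_aware_metrics(tp, fp, tn, fn):
    """Return balanced accuracy, informedness, markedness, and MCC."""
    tpr = tp / (tp + fn)  # true positive rate (recall/sensitivity)
    tnr = tn / (tn + fp)  # true negative rate (specificity)
    ppv = tp / (tp + fp)  # positive predictive value (precision)
    npv = tn / (tn + fn)  # negative predictive value

    balanced_accuracy = (tpr + tnr) / 2
    informedness = tpr + tnr - 1  # Youden's J
    markedness = ppv + npv - 1
    # Matthews Correlation Coefficient, computed directly; it equals the
    # (signed) geometric mean of informedness and markedness.
    mcc = (tp * tn - fp * fn) / (
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    ) ** 0.5
    return balanced_accuracy, informedness, markedness, mcc

# Imbalanced example: 100 attack frames among 10,050 total frames.
ba, j, mk, mcc = imbalance_aware_metrics(tp=90, fp=50, tn=9900, fn=10)
```

With these counts, plain accuracy is about 0.994 even though half the alerts are false positives, whereas MCC (about 0.76) and markedness (about 0.64) expose the weak precision: that asymmetry is why such metrics are preferred on imbalanced CAN traffic.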

Read the Original

This page is a summary of: Comparative Evaluation of Anomaly-Based Controller Area Network IDS, February 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3587828.3587861.
