What is it about?

Poisoning attacks are among the most immediate threats to the training process of machine learning models. The core idea of a poisoning attack is to introduce malicious data into the training dataset of a target model in order to corrupt its training. The security challenges posed by poisoning attacks have prompted many researchers to develop countermeasures. Existing countermeasures are largely attack-specific: they defend against only a handful of known attack methods, and once an adversary becomes aware of such a countermeasure, it is often easy to bypass. Several factors have put defenders at this disadvantage. For example, countermeasures are often built on isolated observations rather than a global understanding of attack methods and learning algorithms. A comprehensive and in-depth survey is therefore needed to counter poisoning attacks more effectively.
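As a minimal illustration of the core idea (not taken from the survey itself), the sketch below simulates a simple label-flipping poisoning attack on a toy scikit-learn classifier; the dataset, model, and poisoning rate are illustrative assumptions.

```python
# Illustrative sketch of a label-flipping poisoning attack
# (toy example; dataset, model, and poisoning rate are assumptions,
# not taken from the survey).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy binary classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# The attacker flips the labels of a fraction of the training set.
poison_rate = 0.3
n_poison = int(poison_rate * len(y_train))
poison_idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]  # flip 0 <-> 1

# Train on clean vs. poisoned labels and compare test accuracy.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Even this crude attack typically degrades test accuracy noticeably; the attacks surveyed in the paper are far subtler and harder to detect.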


Why is it important?

A comprehensive understanding of poisoning attacks will help guide academia and industry in developing more robust machine learning methods.

Perspectives

This survey offers a comprehensive and up-to-date overview of poisoning attacks and countermeasures in both centralized and federated learning. It provides a unified perspective from which to view existing poisoning attacks across different learning architectures. We categorize poisoning attacks along two dimensions: the goal of the attack and the poisoning technique, and we analyze the differences and connections among attacks based on this taxonomy. This analysis allows us to highlight potential vulnerabilities of conventional machine learning, deep learning, and federated learning. Based on our observations, poisoning attacks are becoming more efficient, stealthier, and more robust. Countermeasures, on the other hand, remain at a disadvantage in this battle: existing defenses still cannot effectively withstand known subtle poisoning attacks, let alone unknown threats.

Zhiyi Tian
University of Technology Sydney

Read the Original

This page is a summary of: A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning, ACM Computing Surveys, December 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3551636.
