What is it about?

The paper is a comprehensive study of privacy attacks against machine learning systems reported in the scientific literature between 2015 and 2022. It provides a taxonomy of privacy attacks on machine learning, a discussion of the probable causes of privacy leaks in machine learning systems, an in-depth presentation of how the attacks are implemented, and an overview of the defensive measures that have been tested against them.

Why is it important?

It provides a comprehensive analysis of privacy attacks against machine learning, a growing concern as machine learning is increasingly deployed in real-world applications. The paper proposes a taxonomy of attacks and a threat model that categorizes attacks according to the adversary's knowledge and the assets under attack. It also presents an overview of the most commonly proposed defenses. This information can be useful to researchers, practitioners, and policymakers who want to understand and address the privacy risks associated with machine learning.

Perspectives

Writing this survey exposed me to different aspects of privacy-related research and allowed me to gain a wider view of the topic. I hope this article sheds some light on privacy leaks related to machine learning and helps people make informed decisions about deploying machine learning models in a safe and responsible manner.

Maria Rigaki
Czech Technical University in Prague

Read the Original

This page is a summary of: A Survey of Privacy Attacks in Machine Learning, ACM Computing Surveys, September 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3624010.
