What is it about?
Self-supervised learning (SSL) has recently been applied to sequential recommender systems to provide high-quality user representations. However, while SSL facilitates the learning process of recommender systems, it is not without security threats: carefully crafted inputs can poison the pre-trained models driven by SSL, thus reducing the effectiveness of the downstream recommendation model. This work shows that poisoning attacks against the pre-training stage threaten sequential recommender systems. Without any background knowledge of the model architecture or parameters, and without any API queries, our strategy demonstrates the feasibility of poisoning attacks on mainstream SSL-based recommendation schemes and on commonly used datasets. By injecting only a tiny number of fake users, we get the target item recommended to real users thousands of times more often than before, demonstrating that SSL opens a new attack surface in recommender systems. We further show that our attack is challenging for recommendation platforms to detect and defend against. Our work highlights the weakness of self-supervised recommender systems and shows the necessity for researchers to be aware of this security threat. Our source code is available at https://github.com/CongGroup/Poisoning-SSL-based-RS.
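To make the threat model concrete, below is a minimal, hypothetical sketch of the general idea behind this kind of data-poisoning attack: the attacker injects fake user profiles whose interaction sequences pair the target item with genuinely popular items, so that SSL pre-training (e.g., masked item prediction or contrastive augmentation over sequences) absorbs spurious co-occurrence signals. This is an illustration under our own assumptions, not the paper's actual algorithm; the names (`make_fake_sequence`, `poison_dataset`, `TARGET_ITEM`, `INJECTION_RATIO`) and the sequence-construction heuristic are invented for this sketch. The paper's real method is in the linked repository.

```python
# Hypothetical illustration of profile-injection poisoning against a
# sequential recommender's training data. NOT the authors' attack.
import random

TARGET_ITEM = 42          # item the attacker wants promoted (assumption)
N_ITEMS = 1000            # catalogue size (assumption)
INJECTION_RATIO = 0.01    # "tiny amount" of fake users, e.g. 1%

def make_fake_sequence(popular_items, seq_len=20):
    """Build one fake user's interaction sequence that interleaves the
    target item among popular items, creating co-occurrence patterns
    the SSL pre-training objective will pick up."""
    n_targets = seq_len // 4
    seq = random.choices(popular_items, k=seq_len - n_targets)
    for _ in range(n_targets):
        seq.insert(random.randrange(len(seq) + 1), TARGET_ITEM)
    return seq

def poison_dataset(real_sequences, popular_items):
    """Append fake user profiles to the real interaction data; the
    platform then pre-trains its SSL-based recommender on the result."""
    n_fake = max(1, int(len(real_sequences) * INJECTION_RATIO))
    fakes = [make_fake_sequence(popular_items) for _ in range(n_fake)]
    return real_sequences + fakes

if __name__ == "__main__":
    random.seed(0)
    # Toy "real" data: 500 users with random length-20 histories.
    real = [random.choices(range(N_ITEMS), k=20) for _ in range(500)]
    popular = list(range(10))  # pretend items 0-9 are the most popular
    poisoned = poison_dataset(real, popular)
    print(f"{len(poisoned) - len(real)} fake users injected "
          f"out of {len(poisoned)} total sequences")
```

Note that the attack requires no access to the model itself, only the ability to register accounts and log interactions, which matches the black-box setting described above.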
Featured Image: Photo by AJ on Unsplash
Read the Original
This page is a summary of: Poisoning Self-supervised Learning Based Sequential Recommendations, July 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3539618.3591751.