What is it about?
Among adversarial attacks against sequential recommenders, model extraction attacks are a promising approach: they aim to construct a surrogate model that enables downstream attacks such as data poisoning and profile pollution. Existing research on model extraction neglects the role of few-shot data in improving the performance of the surrogate model. When an adversary has access to only a few raw data samples, how to make the most of this limited data to construct a surrogate recommender that more closely approximates the victim recommender remains an open issue. In this paper, we propose FewMEA, a framework that uses only a minimal amount of raw data (less than 10%) yet significantly improves the similarity of model outputs. In FewMEA, we introduce a novel method for generating synthetic data that aligns more closely with the distribution of the raw data. We also design a novel loss function that effectively reduces the discrepancy between the surrogate model and the victim model by focusing on differences in their recommendation lists. Experiments on three datasets demonstrate that FewMEA significantly improves output similarity, achieving an average improvement of 18.62% over state-of-the-art model extraction frameworks against sequential recommenders.
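To make the list-focused idea concrete, here is a minimal PyTorch sketch, not the actual FewMEA implementation: a toy surrogate is trained to reproduce the victim's top-k recommendation lists, with the victim's higher-ranked items weighted more heavily. The model architecture, names such as `SurrogateRecommender` and `list_alignment_loss`, and all hyperparameters are our own illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateRecommender(nn.Module):
    """Toy GRU-based sequential recommender standing in for the surrogate (hypothetical)."""
    def __init__(self, num_items, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(num_items, hidden, padding_idx=0)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_items)

    def forward(self, seqs):
        h, _ = self.gru(self.emb(seqs))
        return self.out(h[:, -1, :])  # next-item scores over the full catalog

def list_alignment_loss(surrogate_logits, victim_topk, temperature=1.0):
    """Illustrative list-level distillation loss (an assumption, not FewMEA's exact loss):
    maximize the surrogate's log-probability of the victim's top-k items, giving more
    weight to items the victim ranked higher."""
    k = victim_topk.size(1)
    log_probs = F.log_softmax(surrogate_logits, dim=-1)
    picked = log_probs.gather(1, victim_topk)  # (batch, k) log-probs of victim's top-k items
    # Rank-based soft targets: position 0 (the victim's top item) gets the largest weight.
    ranks = torch.arange(k, device=picked.device, dtype=torch.float)
    weights = torch.softmax(-ranks / temperature, dim=0)  # (k,), broadcasts over the batch
    return -(weights * picked).sum(dim=1).mean()

# --- toy usage (all shapes and values are illustrative) ---
num_items = 1000
surrogate = SurrogateRecommender(num_items)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Few-shot plus synthetic interaction sequences; in an attack, the victim's top-10
# lists would come from querying the deployed recommender with these sequences.
seqs = torch.randint(1, num_items, (32, 20))
victim_topk = torch.randint(1, num_items, (32, 10))

opt.zero_grad()
loss = list_alignment_loss(surrogate(seqs), victim_topk)
loss.backward()
opt.step()
```

The rank weighting is one simple way to penalize differences between the two models' recommendation lists; FewMEA's actual loss and its synthetic-data generator are described in the full paper.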
Featured image: photo by Fahim Muntashir on Unsplash.
Read the Original
This page is a summary of: FewMEA: Few-shot Model Extraction Attack against Sequential Recommenders, June 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3731715.3733340.