What is it about?
Artificial intelligence is moving quickly, and large language models are now much better at understanding and producing text. Because they are trained on huge amounts of data, they can power tools like chatbots and virtual assistants. But along with their strengths, these models also create privacy and security risks at different stages of their life cycle. These risks are not the same as those seen in older, smaller models, and they have become an important concern for both researchers and companies. In this work, we look at the main risks that appear in four stages: pre-training, fine-tuning, deployment, and the use of LLMs as agents. We also explore possible solutions for each stage. On top of that, we compare existing attacks and defenses, looking at how effective they are, their strengths and weaknesses, and where they can be applied. By examining both attacks and defenses, this survey offers clear directions for future research and helps ensure that large language models can be used more safely and widely.
Featured image: Photo by Growtika on Unsplash
Why is it important?
Large language models are now widely used in many industries, but their life cycle still contains vulnerabilities that can cause serious privacy and security risks, even threatening public safety and breaking laws. To better understand these risks, this survey introduces a new way of classifying them, analyzing their goals, causes, and how they are carried out, while also summarizing possible countermeasures. We look at four key stages in the life cycle—pre-training, fine-tuning, deployment, and the use of LLMs as agents—and explain how the capabilities and roles of attackers and defenders differ at each stage. For every stage, we describe how the privacy and security risks in LLMs differ from those in traditional language models, pointing out the threats that are unique to LLMs as well as those they share with other models. We also review existing research, showing what kinds of attacks are possible, what goals they pursue, and what defenses may work. To deal with these risks, we collect and evaluate a range of countermeasures, highlighting their assumptions, strengths, and limitations. Finally, we explore other important topics for LLM safety, such as machine unlearning and watermarking, offering researchers potential directions for future work.
Perspectives
This article looks at the security and privacy risks that large language models face throughout their life cycle. By breaking things down stage by stage, we show where these risks come from and why they happen, which makes it easier to think about how to defend against them. Our hope is that this work gives the community useful ideas for keeping LLMs safe so that more people and industries can benefit from them. Above all, I hope you find this article thought-provoking.
Shang Wang
University of Technology Sydney
Read the Original
This page is a summary of: Unique Security and Privacy Threats of Large Language Models: A Comprehensive Survey, ACM Computing Surveys, September 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3764113.
You can read the full text via the DOI above.