A Comprehensive Survey of LLM Threats and Mitigation Strategies

This paper presents a comprehensive technical review of privacy and security threats to LLMs and their defenses. A thorough taxonomy of threats, correlated with corresponding defenses, is methodically presented through interwoven charts, mappings, and explanations.

Abstract: Large Language Models (LLMs) have transformed natural language processing and are increasingly utilized in essential sectors such as healthcare, financial services, and education. However, their adoption has raised significant security and privacy concerns due to their vulnerability throughout different lifecycle phases to several attacks, including data poisoning, model inversion, prompt injection, and supply chain vulnerabilities. Although previous studies have investigated these risks, current surveys often lack sufficient comprehensiveness, technical rigor, or lifecycle consideration. To address this gap, we present a lifecycle-oriented, in-depth analysis of LLM security and privacy. We introduce a taxonomy of threats spanning data-centric, model-centric, and deployment- and interaction-centric vectors, each mapped to corresponding defense mechanisms. We also address cross-cutting risks, including hallucinations, algorithmic bias, and vulnerable plugin interfaces. In addition, we evaluate the efficacy and limitations of existing defenses, incorporate new security tools and assessment frameworks, and situate our findings within the broader literature through extensive comparative analysis. Finally, we identify open research challenges and outline future directions for developing secure, robust, and regulation-compliant LLM systems. This survey serves as a foundational reference for researchers, developers, and policymakers in understanding the evolving adversarial landscape of LLMs.

Security and privacy in LLMs: A comprehensive survey of threats and mitigation strategies

Aymen Dia Eddine Berini, Norziana Jamil, Ala-Eddine Benrazek, Abderrahmane Lakas, Leila Ismail, Mohamed Amine Ferrag, Kwok-Yan Lam, “Security and privacy in LLMs: A comprehensive survey of threats and mitigation strategies”,

Information Fusion, Volume 132, 2026, 104241, ISSN 1566-2535,

https://doi.org/10.1016/j.inffus.2026.104241.

(https://www.sciencedirect.com/science/article/pii/S156625352600120X)

________________________________

Disclaimer: This blog post is provided for informational purposes only and does not constitute legal advice. The linked article is the work of its respective author(s) and publication, with full attribution provided. BAYPOINT LAW is not affiliated with the author(s) or publication; it is shared solely as a matter of professional interest.
