R.I.P.
๐ป
Ghosted
Attacks on Third-Party APIs of Large Language Models
April 24, 2024 ยท Entered Twilight ยท ๐ arXiv.org
Repo contents: README.md, Techniques, attack.py, config.py, evaluation.py, llms.py, main.py, prompt.py, question_set_weather.json, requirements.txt
Authors
Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas Donald Lane
arXiv ID
2404.16891
Category
cs.CR: Cryptography & Security
Cross-listed
cs.AI,
cs.CL,
cs.CY
Citations
12
Venue
arXiv.org
Repository
https://github.com/vk0812/Third-Party-Attacks-on-LLMs
โญ 7
Last Checked
2 months ago
Abstract
Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs, but it also introduces risks, as these plugins developed by various third parties cannot be easily trusted. This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services. Applying our framework specifically to widely used LLMs, we identify real-world malicious attacks across various domains on third-party APIs that can imperceptibly modify LLM outputs. The paper discusses the unique challenges posed by third-party API integration and offers strategic possibilities to improve the security and safety of LLM ecosystems moving forward. Our code is released at https://github.com/vk0812/Third-Party-Attacks-on-LLMs.
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
๐ Similar Papers
In the same crypt โ Cryptography & Security
R.I.P.
๐ป
Ghosted
Membership Inference Attacks against Machine Learning Models
R.I.P.
๐ป
Ghosted
The Limitations of Deep Learning in Adversarial Settings
R.I.P.
๐ป
Ghosted
Practical Black-Box Attacks against Machine Learning
R.I.P.
๐ป
Ghosted
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
R.I.P.
๐ป
Ghosted