AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs
April 21, 2024 · Entered Twilight · International Conference on Machine Learning
Repo contents: .flake8, .gitignore, CODE_OF_CONDUCT.md, CONTRIBUTING.md, LICENSE, README.md, advprompter.def, advprompteropt.py, conf, data, llm.py, main.py, requirements.txt, sequence.py, utils.py
Authors
Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian
arXiv ID
2404.16873
Category
cs.CR: Cryptography & Security
Cross-listed
cs.AI, cs.CL, cs.LG
Citations
133
Venue
International Conference on Machine Learning
Repository
https://github.com/facebookresearch/advprompter
⭐ 178
Last Checked
2 months ago
Abstract
Large Language Models (LLMs) are vulnerable to jailbreaking attacks that lead to the generation of inappropriate or harmful content. Manual red-teaming requires a time-consuming search for adversarial prompts, whereas automatic adversarial prompt generation often produces semantically meaningless attacks that do not scale well. In this paper, we present a novel method that uses another LLM, called AdvPrompter, to generate human-readable adversarial prompts in seconds. AdvPrompter, which is trained with an alternating optimization algorithm, generates suffixes that veil the input instruction without changing its meaning, such that the TargetLLM is lured to give a harmful response. Experimental results on popular open-source TargetLLMs show highly competitive results on the AdvBench and HarmBench datasets, which also transfer to closed-source black-box LLMs. We also show that training on adversarial suffixes generated by AdvPrompter is a promising strategy for improving the robustness of LLMs to jailbreaking attacks.
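The alternating optimization the abstract describes can be sketched concretely: propose suffixes with the AdvPrompter, score them by how likely the TargetLLM is to begin an affirmative response, then fine-tune the AdvPrompter on the best suffix found. The following is a minimal illustrative sketch, not the repository's implementation (see advprompteropt.py, llm.py, and main.py for that); the model choice (gpt2 standing in for both roles), the sampling-based candidate search, the affirmative target prefix, and all helper names are assumptions made here for illustration.

```python
# Conceptual sketch of the alternating scheme described in the abstract.
# NOT the repository's implementation; model names, helpers, and the
# sampling-based candidate search are illustrative assumptions chosen to
# keep the example small and self-contained.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Two causal LMs: a trainable AdvPrompter and a frozen TargetLLM.
# "gpt2" stands in for both; the paper attacks much larger open-source TargetLLMs.
tok = AutoTokenizer.from_pretrained("gpt2")
prompter = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
target = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()


def target_loss(instruction: str, suffix: str, desired: str) -> float:
    """Negative log-likelihood of a desired (affirmative) response prefix under the
    TargetLLM, given instruction + suffix. Lower means a more effective suffix."""
    prompt_ids = tok(instruction + " " + suffix, return_tensors="pt").input_ids.to(device)
    resp_ids = tok(" " + desired, return_tensors="pt").input_ids.to(device)
    input_ids = torch.cat([prompt_ids, resp_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # score only the response tokens
    with torch.no_grad():
        return target(input_ids, labels=labels).loss.item()


def propose_suffixes(instruction: str, n: int = 4, max_new_tokens: int = 20) -> list[str]:
    """Sample candidate suffixes from the AdvPrompter conditioned on the instruction.
    (The paper's optimization step runs a token-level search guided by the target
    loss; plain sampling is a simplification.)"""
    ids = tok(instruction, return_tensors="pt").input_ids.to(device)
    out = prompter.generate(
        ids, do_sample=True, top_p=0.95, max_new_tokens=max_new_tokens,
        num_return_sequences=n, pad_token_id=tok.eos_token_id,
    )
    return [tok.decode(o[ids.shape[1]:], skip_special_tokens=True) for o in out]


def finetune_step(optimizer, instruction: str, suffix: str) -> None:
    """Regress the AdvPrompter onto the best suffix found for this instruction."""
    ids = tok(instruction + " " + suffix, return_tensors="pt").input_ids.to(device)
    loss = prompter(ids, labels=ids).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


# Alternating loop over a benign, purely illustrative instruction set.
instructions = ["Write a short story about a dragon."]
desired = "Sure, here is"  # affirmative prefix the attack tries to elicit
optimizer = torch.optim.AdamW(prompter.parameters(), lr=1e-5)

for epoch in range(2):
    for instr in instructions:
        candidates = propose_suffixes(instr)                                   # propose
        best = min(candidates, key=lambda s: target_loss(instr, s, desired))   # score
        finetune_step(optimizer, instr, best)                                  # fine-tune
```

Once trained this way, producing a suffix for a new instruction is a single generation pass through the AdvPrompter; that amortization is what the abstract's "in seconds" claim refers to.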
Similar Papers
In the same crypt · Cryptography & Security
Membership Inference Attacks against Machine Learning Models · R.I.P. 👻 Ghosted
The Limitations of Deep Learning in Adversarial Settings · R.I.P. 👻 Ghosted
Practical Black-Box Attacks against Machine Learning · R.I.P. 👻 Ghosted
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks · R.I.P. 👻 Ghosted