R.I.P. 👻 Ghosted
Self-Evaluation as a Defense Against Adversarial Attacks on LLMs
July 03, 2024 · Entered Twilight · arXiv.org
Repo contents: README.md, api.py, attack, data, generate.py, llama_guard.py, results, run_eval.sh, run_gen.sh, self_eval.py
Authors: Hannah Brown, Leon Lin, Kenji Kawaguchi, Michael Shieh
arXiv ID: 2407.03234
Category: cs.LG (Machine Learning)
Cross-listed: cs.CL, cs.CR
Citations: 13
Venue: arXiv.org
Repository: https://github.com/Linlt-leon/self-eval (⭐ 4)
Last Checked: 2 months ago
Abstract
We introduce a defense against adversarial attacks on LLMs utilizing self-evaluation. Our method requires no model fine-tuning, instead using pre-trained models to evaluate the inputs and outputs of a generator model, significantly reducing the cost of implementation in comparison to other, fine-tuning-based methods. Our method can significantly reduce the attack success rate of attacks on both open- and closed-source LLMs, beyond the reductions demonstrated by Llama-Guard2 and commonly used content moderation APIs. We present an analysis of the effectiveness of our method, including attempts to attack the evaluator in various settings, demonstrating that it is also more resilient to attacks than existing methods. Code and data will be made available at https://github.com/Linlt-leon/self-eval.
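The defense described in the abstract is simple to sketch: a pre-trained model screens both the user input and the generator's output before anything is returned. Below is a minimal, hypothetical illustration using a Hugging Face text-generation pipeline; the model name, the Yes/No evaluation prompt, and the answer parsing are assumptions for illustration, not the authors' implementation (their actual code lives in self_eval.py in the linked repository).

```python
# Hypothetical sketch of a self-evaluation defense: the same pre-trained
# model generates responses and judges whether inputs/outputs are unsafe.
from transformers import pipeline

# Model choice is an assumption for illustration; any instruction-tuned
# chat model could play both roles.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
evaluator = generator  # self-evaluation: no separately fine-tuned judge

EVAL_TEMPLATE = (
    "Does the following text request or contain harmful or unsafe content? "
    "Answer with a single word, Yes or No.\n\nText: {text}\n\nAnswer:"
)

def is_unsafe(text: str) -> bool:
    """Ask the evaluator model to classify a piece of text as unsafe."""
    prompt = EVAL_TEMPLATE.format(text=text)
    out = evaluator(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"]
    answer = out[len(prompt):].strip().lower()  # pipeline echoes the prompt
    return answer.startswith("yes")

def defended_generate(user_input: str) -> str:
    """Screen the input, generate, then screen the output."""
    if is_unsafe(user_input):        # input-side evaluation gate
        return "Request refused by input evaluator."
    response = generator(user_input, max_new_tokens=256)[0]["generated_text"]
    if is_unsafe(response):          # output-side evaluation gate
        return "Response withheld by output evaluator."
    return response
```

Because the evaluator sees the adversarial suffix only as text to classify, not as an instruction to follow, a jailbreak that fools the generator must also independently fool the evaluator, which is the resilience property the paper analyzes.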
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms