Self-Evaluation as a Defense Against Adversarial Attacks on LLMs

July 03, 2024 · Entered Twilight · 🏛 arXiv.org

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: README.md, api.py, attack, data, generate.py, llama_guard.py, results, run_eval.sh, run_gen.sh, self_eval.py

Authors: Hannah Brown, Leon Lin, Kenji Kawaguchi, Michael Shieh
arXiv ID: 2407.03234
Category: cs.LG (Machine Learning)
Cross-listed: cs.CL, cs.CR
Citations: 13
Venue: arXiv.org
Repository: https://github.com/Linlt-leon/self-eval (⭐ 4)
Last Checked: 2 months ago
Abstract
We introduce a defense against adversarial attacks on LLMs that uses self-evaluation. Our method requires no model fine-tuning; instead, it uses pre-trained models to evaluate the inputs and outputs of a generator model, significantly reducing the cost of implementation compared to other, fine-tuning-based methods. Our method can significantly reduce the attack success rate on both open- and closed-source LLMs, beyond the reductions demonstrated by Llama-Guard2 and commonly used content moderation APIs. We present an analysis of the effectiveness of our method, including attempts to attack the evaluator itself in various settings, demonstrating that it is also more resilient to attacks than existing methods. Code and data will be made available at https://github.com/Linlt-leon/self-eval.
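
How the defense works, per the abstract: a pre-trained evaluator model screens the generator's input and output, and the response is withheld if either is judged unsafe. Below is a minimal Python sketch of that idea; the prompt wording, the one-word verdict parsing, and the names defended_generate and is_unsafe are illustrative assumptions, not the actual interface of self_eval.py in the repository.

# Sketch of an input/output self-evaluation defense: a pre-trained evaluator
# model screens the (possibly adversarial) prompt and the generated response.
# The prompt text, parsing rule, and callables below are assumptions for
# illustration only, not the repository's API.

from typing import Callable

REFUSAL = "Sorry, I can't help with that."

# Hypothetical prompt asking the evaluator for a one-word safety verdict.
EVAL_PROMPT = (
    "You are a safety evaluator. Reply with exactly one word, "
    "'safe' or 'unsafe'.\n\nText to evaluate:\n{text}"
)

def is_unsafe(evaluate: Callable[[str], str], text: str) -> bool:
    """Ask the evaluator model for a verdict and parse its reply."""
    verdict = evaluate(EVAL_PROMPT.format(text=text))
    return "unsafe" in verdict.strip().lower()

def defended_generate(
    generate: Callable[[str], str],   # generator LLM: prompt -> response
    evaluate: Callable[[str], str],   # pre-trained evaluator LLM, no fine-tuning
    user_prompt: str,
) -> str:
    # 1) Screen the input before generation.
    if is_unsafe(evaluate, user_prompt):
        return REFUSAL
    # 2) Generate, then screen the output as well.
    response = generate(user_prompt)
    if is_unsafe(evaluate, response):
        return REFUSAL
    return response

In this sketch, generate and evaluate could both wrap calls to the same pre-trained chat model, which is what makes it a self-evaluation defense and why, as the abstract notes, no fine-tuning is required.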
