A General Retraining Framework for Scalable Adversarial Classification

April 09, 2016 · Declared Dead · 🏛 arXiv.org

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Bo Li, Yevgeniy Vorobeychik, Xinyun Chen
arXiv ID: 1604.02606
Category: cs.GT (Game Theory)
Cross-listed: cs.LG, stat.ML
Citations: 32
Venue: arXiv.org
Last Checked: 2 months ago
Abstract
Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the face of b) a broader class of adversarial models than any prior methods. We show that, under natural conditions, the retraining framework minimizes an upper bound on optimal adversarial risk, and show how to extend this result to account for approximations of evasion attacks. Extensive experimental evaluation demonstrates that our retraining methods are nearly indistinguishable from state-of-the-art algorithms for optimizing adversarial risk, but are more general and far more scalable. The experiments also confirm that without retraining, our adversarial framework dramatically reduces the effectiveness of learning. In contrast, retraining significantly boosts robustness to evasion attacks without significantly compromising overall accuracy.
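Since the authors never released code, here is a hedged sketch of the general idea the abstract describes: iteratively attack the current model with a budgeted evasion attack, add the evaded instances (with their true labels) back to the training set, and refit. Everything below is illustrative, not the paper's implementation: the learner (a plain gradient-descent logistic regression standing in for an "arbitrary" learning algorithm), the fixed-budget l2 attacker, the toy data, and all function names are assumptions.

```python
# Illustrative sketch of iterative adversarial retraining (NOT the authors'
# released code -- none exists). Learner, attacker, and data are toy choices.
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y, epochs=200, lr=0.1):
    """Logistic regression via gradient descent; a stand-in for an
    arbitrary learner, since the framework is learner-agnostic."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of log-loss w.r.t. logits
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def evade(X, y, w, budget=2.0):
    """Toy evasion attack: each malicious point (y=1) moves a fixed l2
    budget against the model's weight direction to look more benign."""
    d = w / (np.linalg.norm(w) + 1e-12)
    X_adv = X.copy()
    X_adv[y == 1] -= budget * d
    return X_adv

def retrain(X, y, rounds=5):
    """Retraining loop: attack the current model, append the evasions
    with their true (malicious) labels, refit, repeat."""
    X_aug, y_aug = X.copy(), y.copy()
    w, b = fit_linear(X_aug, y_aug)
    for _ in range(rounds):
        X_adv = evade(X, y, w)
        X_aug = np.vstack([X_aug, X_adv[y == 1]])
        y_aug = np.concatenate([y_aug, np.ones(int((y == 1).sum()))])
        w, b = fit_linear(X_aug, y_aug)
    return w, b

# Toy data: a benign cluster near (-1,-1), a malicious cluster near (+1,+1).
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(+1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def detect_rate(w, b):
    """Fraction of malicious points still flagged after they evade."""
    X_adv = evade(X, y, w)
    return float(((X_adv[y == 1] @ w + b) > 0).mean())

w0, b0 = fit_linear(X, y)   # undefended baseline
wr, br = retrain(X, y)      # retrained model

print(f"attacked detection, no retraining: {detect_rate(w0, b0):.2f}")
print(f"attacked detection, retrained:     {detect_rate(wr, br):.2f}")
```

On this toy setup the undefended model misses most evaded points, while the retrained model, having seen evasions during training, recovers much of the detection rate, which is the qualitative behavior the abstract claims.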
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Game Theory

R.I.P. 👻 Ghosted

Blockchain Mining Games

Aggelos Kiayias, Elias Koutsoupias, ... (+2 more)

cs.GT ๐Ÿ› EC ๐Ÿ“š 273 cites 9 years ago

Died the same way – 👻 Ghosted