A General Retraining Framework for Scalable Adversarial Classification
April 09, 2016 · Declared Dead · arXiv.org
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Bo Li, Yevgeniy Vorobeychik, Xinyun Chen
arXiv ID
1604.02606
Category
cs.GT: Game Theory
Cross-listed
cs.LG, stat.ML
Citations
32
Venue
arXiv.org
Last Checked
2 months ago
Abstract
Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the face of b) a broader class of adversarial models than any prior methods. We show that, under natural conditions, the retraining framework minimizes an upper bound on optimal adversarial risk, and show how to extend this result to account for approximations of evasion attacks. Extensive experimental evaluation demonstrates that our retraining methods are nearly indistinguishable from state-of-the-art algorithms for optimizing adversarial risk, but are more general and far more scalable. The experiments also confirm that without retraining, our adversarial framework dramatically reduces the effectiveness of learning. In contrast, retraining significantly boosts robustness to evasion attacks without significantly compromising overall accuracy.
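The retraining idea the abstract describes can be sketched as a simple loop: fit a model, generate evasion instances against the current model, add them to the training set, and refit. The sketch below is illustrative only; the logistic-regression learner, the gradient-step evasion attack, and all parameter values are assumptions for the demo, not the paper's exact construction.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=200):
    """Fit logistic regression (labels in {0, 1}) by batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y = 1)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

def evade(X_mal, w, eps):
    """Shift malicious points a distance eps against the score gradient."""
    return X_mal - eps * w / (np.linalg.norm(w) + 1e-12)

def retrain(X, y, rounds=5, eps=0.5):
    """Iteratively augment the training set with evasion instances and refit."""
    X_mal = X[y == 1]                      # original malicious instances
    X_aug, y_aug = X.copy(), y.copy()
    for _ in range(rounds):
        w, b = train_logreg(X_aug, y_aug)
        X_adv = evade(X_mal, w, eps)       # attack the current model
        X_aug = np.vstack([X_aug, X_adv])
        y_aug = np.concatenate([y_aug, np.ones(len(X_adv))])
    return train_logreg(X_aug, y_aug)

# Toy data: benign points near (-1, -1), malicious points near (1, 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = retrain(X, y)
```

Because each round's evasion instances target the current model, the augmented training set accumulates approximate adversary best responses; that is, roughly, the intuition behind the upper-bound-on-adversarial-risk result mentioned in the abstract.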
Similar Papers

In the same crypt – Game Theory (all R.I.P. 👻 Ghosted):
A Motivational Game-Theoretic Approach for Peer-to-Peer Energy Trading in the Smart Grid
Computing Resource Allocation in Three-Tier IoT Fog Networks: a Joint Optimization Approach Combining Stackelberg Game and Matching
Fast Convergence of Regularized Learning in Games
Computation Peer Offloading for Energy-Constrained Mobile Edge Computing in Small-Cell Networks
Blockchain Mining Games

Died the same way – 👻 Ghosted:
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System