AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

July 15, 2020 · Entered Twilight · 🏛 Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, attack.py, attack_greedy.py, attack_imagenet.py, classifier_loader.py, config.py, data.py, imagenet.py, model.py, opts.py, requirements.txt, resnet.py, train.py, vgg.py, wide_resnets.py

Authors: Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
arXiv ID: 2007.07435
Category: cs.LG: Machine Learning
Cross-listed: cs.CR, cs.CV, stat.ML
Citations: 70
Venue: Neural Information Processing Systems
Repository: https://github.com/hmdolatabadi/AdvFlow ⭐ 49
Last Checked: 2 months ago
Abstract
Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We show that the proposed method generates adversaries that closely follow the clean data distribution, a property that makes their detection less likely. Moreover, our experimental results show that the proposed approach performs competitively with existing attack methods on defended classifiers. The code is available at https://github.com/hmdolatabadi/AdvFlow.
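The abstract's core idea, modeling perturbations around a clean image with a normalizing flow and sampling candidates from that density, can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: `toy_flow`, `sample_adversarial_candidates`, and the `eps` budget are assumptions standing in for the trained flow and search loop in the AdvFlow repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_flow(z, scale=0.1):
    # Trivial invertible map standing in for a learned normalizing flow;
    # in AdvFlow the flow is trained so its samples resemble clean data.
    return scale * z

def sample_adversarial_candidates(x, n, eps=8 / 255):
    # Draw latent Gaussian noise, push it through the flow to obtain
    # perturbations, then project onto the L-infinity ball of radius eps
    # around x and back into the valid pixel range [0, 1].
    z = rng.standard_normal((n,) + x.shape)
    delta = np.clip(toy_flow(z), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)

x = rng.random((3, 32, 32))          # dummy "clean image" in [0, 1]
candidates = sample_adversarial_candidates(x, n=5)
print(candidates.shape)              # (5, 3, 32, 32)
```

In a black-box attack, each candidate would be scored by querying the target classifier, and the flow's latent distribution would be updated toward candidates that lower the classifier's confidence.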