Membership Inference Attacks against Machine Learning Models

October 18, 2016 · Declared Dead · 🏛 IEEE Symposium on Security and Privacy

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov
arXiv ID: 1610.05820
Category: cs.CR (Cryptography & Security)
Cross-listed: cs.LG, stat.ML
Citations: 4.9K
Venue: IEEE Symposium on Security and Privacy
Last Checked: 1 month ago
Abstract
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
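The shadow-model attack the abstract describes is simple enough to sketch. Below is a minimal, self-contained Python illustration using scikit-learn and synthetic data standing in for a black-box target; the names (attack_features, shadow, attack_model) and the data are assumptions for the sketch, not taken from the authors' released code.

```python
# Sketch of a shadow-model membership inference attack, per the abstract.
# Synthetic data stands in for records drawn from the same distribution
# as the target model's training data (a key assumption in the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def attack_features(model, X):
    """Per-record prediction vectors from a (black-box) model."""
    return model.predict_proba(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 1. Train several shadow models, each on its own random split; for each
#    we know which records it trained on ("in") and which it did not ("out").
attack_X, attack_y = [], []
for _ in range(5):
    X_in, X_out, y_in, _ = train_test_split(X, y, train_size=400)
    X_out = X_out[:400]  # balance members and non-members
    shadow = RandomForestClassifier(n_estimators=50).fit(X_in, y_in)
    attack_X.append(attack_features(shadow, X_in))
    attack_y.append(np.ones(len(X_in)))        # members
    attack_X.append(attack_features(shadow, X_out))
    attack_y.append(np.zeros(len(X_out)))      # non-members

# 2. Train the attack model: prediction vector -> member / non-member.
attack_model = LogisticRegression(max_iter=1000).fit(
    np.vstack(attack_X), np.concatenate(attack_y))

# 3. Query the black-box target on a candidate record and infer membership:
#    guess = attack_model.predict(attack_features(target, record[None, :]))
```

The paper itself trains a separate attack model per output class and feeds it the target's full prediction vector alongside the record's true label; the sketch collapses this into a single attack model for brevity.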
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Cryptography & Security

Died the same way — 👻 Ghosted