Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?

April 11, 2019 · Declared Dead · 🏛 Transactions of the Association for Computational Linguistics

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Sorami Hisamoto, Matt Post, Kevin Duh
arXiv ID: 1904.05506
Category: cs.LG: Machine Learning
Cross-listed: cs.CL, stat.ML
Citations: 124
Venue: Transactions of the Association for Computational Linguistics
Last Checked: 2 months ago
Abstract
Data privacy is an important issue for "machine learning as a service" providers. We focus on the problem of membership inference attacks: given a data sample and black-box access to a model's API, determine whether the sample existed in the model's training data. Our contribution is an investigation of this problem in the context of sequence-to-sequence models, which are important in applications such as machine translation and video captioning. We define the membership inference problem for sequence generation, provide an open dataset based on state-of-the-art machine translation models, and report initial results on whether these models leak private information against several kinds of membership inference attacks.
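The membership inference setting the abstract describes can be illustrated with a minimal sketch: the attacker holds a suspected (source, reference) training pair, queries the black-box translation API, and thresholds the similarity between the model's output and the reference, on the intuition that memorized training pairs are reproduced more closely. Everything below (function names, the toy API, the threshold value) is a hypothetical illustration, not the paper's actual attack.

```python
from difflib import SequenceMatcher

def membership_score(model_output: str, reference: str) -> float:
    """Token-level similarity between the model's translation and the
    candidate's reference; memorized training pairs tend to score higher."""
    return SequenceMatcher(None, model_output.split(), reference.split()).ratio()

def infer_membership(translate, sample, threshold=0.8):
    """Black-box threshold attack: query the API once, compare to the reference.

    `translate` is any callable mapping a source sentence to a translation;
    `sample` is a (source, reference) pair the attacker suspects was trained on.
    """
    src, ref = sample
    return membership_score(translate(src), ref) >= threshold

# Toy stand-in for a black-box MT API (hypothetical), which only "knows"
# sentences it has memorized.
def toy_translate(src: str) -> str:
    memorized = {"guten morgen": "good morning"}
    return memorized.get(src, "unknown")

print(infer_membership(toy_translate, ("guten morgen", "good morning")))  # True
print(infer_membership(toy_translate, ("gute nacht", "good night")))      # False
```

In practice the threshold would be calibrated on held-out data, and the paper's attacks are more elaborate than this single-query heuristic, but the input/output contract is the same: one black-box query per candidate sample, one binary membership decision.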
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning

Died the same way — 👻 Ghosted