DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM

June 28, 2019 · Declared Dead · 🏛 Mathematical and Scientific Machine Learning

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Bao Wang, Quanquan Gu, March Boedihardjo, Farzin Barekat, Stanley J. Osher
arXiv ID: 1906.12056
Category: cs.LG (Machine Learning)
Cross-listed: cs.CR, stat.ML
Citations: 29
Venue: Mathematical and Scientific Machine Learning
Repository: https://github.com/BaoWangMath/DP-LSSGD
Last Checked: 2 months ago
Abstract
Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than non-private ones. To mitigate this degradation, we propose DP Laplacian smoothing SGD (DP-LSSGD), which trains ML models with differential privacy (DP) guarantees. At the core of DP-LSSGD is Laplacian smoothing, which smooths out the Gaussian noise injected by the Gaussian mechanism. Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but in practice it makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better. The proposed algorithm is simple to implement, and its extra computational and memory overhead compared with DP-SGD is negligible. DP-LSSGD can be used to train a wide variety of ML models, including DNNs. The code is available at https://github.com/BaoWangMath/DP-LSSGD.
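Since the repository link is dead, here is a minimal, illustrative sketch of the idea the abstract describes: clip per-example gradients, add Gaussian noise (the standard Gaussian mechanism), then apply Laplacian smoothing to the noisy gradient before the parameter update. All names and parameters below (laplacian_smooth, dp_lssgd_step, sigma_s, noise_mult, the toy objective) are assumptions made for illustration, not taken from the authors' implementation, and this sketch does not include the paper's privacy accounting.

```python
import numpy as np

def laplacian_smooth(g, sigma_s):
    """Apply (I - sigma_s * L)^{-1} to a flat gradient vector g, where L is the
    1D discrete Laplacian with periodic boundary; circulant systems are solved
    in O(n log n) by a pointwise division in the Fourier domain."""
    n = g.shape[0]
    # First column of (I - sigma_s * L): [1 + 2*sigma_s, -sigma_s, 0, ..., 0, -sigma_s]
    c = np.zeros(n)
    c[0] = 1.0 + 2.0 * sigma_s
    c[1] = -sigma_s
    c[-1] = -sigma_s
    return np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(c)))

def dp_lssgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0,
                  sigma_s=1.0, rng=np.random.default_rng(0)):
    """One illustrative DP-LSSGD step: clip, average, add Gaussian noise, smooth, update."""
    # Per-example clipping to L2 norm <= clip (standard DP-SGD ingredient).
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in per_example_grads]
    g_bar = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise scale proportional to clip * noise_mult / batch size.
    noise = rng.normal(0.0, noise_mult * clip / len(per_example_grads), size=w.shape)
    # Laplacian smoothing of the noisy gradient is what distinguishes DP-LSSGD from DP-SGD.
    return w - lr * laplacian_smooth(g_bar + noise, sigma_s)

# Toy usage: "per-example" gradients w + x_i for quadratics f_i(w) = 0.5 * ||w + x_i||^2.
w = np.ones(64)
x = np.random.default_rng(1).normal(size=(8, 64))
for _ in range(100):
    w = dp_lssgd_step(w, [w + xi for xi in x])
print(np.linalg.norm(w))
```

Setting sigma_s to 0 reduces the step to plain DP-SGD, since (I - 0 * L)^{-1} is the identity; larger sigma_s smooths more aggressively.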
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt · Machine Learning

Died the same way · 💀 404 Not Found