Differentially Private Sharpness-Aware Training

June 09, 2023 · Entered Twilight · International Conference on Machine Learning

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: LICENSE, README.md, examples, main.py, opacus, requirements.txt, src

Authors: Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee
arXiv ID: 2306.05651
Category: cs.LG: Machine Learning
Cross-listed: cs.AI, cs.CR
Citations: 14
Venue: International Conference on Machine Learning
Repository: https://github.com/jinseongP/DPSAT ⭐ 9
Last Checked: 1 month ago
Abstract
Training deep learning models with differential privacy (DP) results in a degradation of performance. The training dynamics of models with DP differ significantly from those of standard training, yet the geometric properties of private learning remain largely unexplored. In this paper, we investigate sharpness, a key factor in achieving better generalization, in private learning. We show that flat minima can help reduce the negative effects of per-example gradient clipping and the addition of Gaussian noise. We then verify the effectiveness of Sharpness-Aware Minimization (SAM) for seeking flat minima in private learning. However, we also discover that SAM is detrimental to the privacy budget and computational time due to its two-step optimization. Thus, we propose a new sharpness-aware training method that mitigates the privacy-optimization trade-off. Our experimental results demonstrate that the proposed method improves the performance of deep learning models with DP both when training from scratch and when fine-tuning. Code is available at https://github.com/jinseongP/DPSAT.
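For readers unfamiliar with the two mechanisms the abstract refers to, here is a minimal PyTorch sketch of (a) DP-SGD-style per-example gradient clipping with Gaussian noise, and (b) SAM's two-step (ascent-then-descent) update, whose second gradient computation is what inflates both the privacy budget and wall-clock time. This is not the repository's code: all names and hyperparameters (clip_norm, noise_multiplier, rho) are illustrative assumptions.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    clip_norm, sum, add Gaussian noise scaled to the clip bound, average.
    A naive loop for clarity; libraries such as opacus vectorize this."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (float(norm) + 1e-12))
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = noise_multiplier * clip_norm * torch.randn_like(s)
            p.add_((s + noise) / len(xs), alpha=-lr)

def sam_step(model, loss_fn, xs, ys, lr=0.1, rho=0.05):
    """One SAM update: ascend to a worst-case point within an L2 ball
    of radius rho, then descend using the gradient computed there."""
    # Step 1: gradient at the current weights -> adversarial perturbation.
    model.zero_grad()
    loss_fn(model(xs), ys).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = float(torch.sqrt(sum(g.pow(2).sum() for g in grads))) + 1e-12
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(g, alpha=rho / norm)      # move to the perturbed point
    # Step 2: a *second* forward-backward pass at the perturbed weights --
    # this is the extra computation (and, under DP, the extra privacy
    # cost) that the abstract says SAM incurs.
    model.zero_grad()
    loss_fn(model(xs), ys).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(g, alpha=-rho / norm)     # restore the original weights
        for p in model.parameters():
            p.add_(p.grad, alpha=-lr)        # descend with the SAM gradient
```

Per the abstract, the paper's proposed sharpness-aware method is designed to mitigate exactly this trade-off, i.e. to avoid paying for the second private gradient computation; see the linked repository for the authors' actual implementation.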

📜 Similar Papers

In the same crypt · Machine Learning