Differentially Private Adapters for Parameter Efficient Acoustic Modeling

May 19, 2023 · Entered Twilight · 🏛 Interspeech

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: LICENSE.txt, MLSW, README.md, kws_streaming, models_data_v2_12_labels, parse_options.sh, pate, requirements.txt, scripts, utils

Authors: Chun-Wei Ho, Chao-Han Huck Yang, Sabato Marco Siniscalchi
arXiv ID: 2305.11360
Category: cs.SD (Sound); cross-listed: cs.CR, cs.LG, eess.AS
Citations: 1
Venue: Interspeech
Repository: https://github.com/Chun-wei-Ho/Private-Speech-Adapter ⭐ 9
Last Checked: 1 month ago
Abstract
In this work, we devise a parameter-efficient solution that brings differential privacy (DP) guarantees to the adaptation of a cross-lingual speech classifier. We investigate a new adaptation framework for DP-preserving speech modeling that avoids full model fine-tuning. First, we introduce a noisy teacher-student ensemble into a conventional adaptation scheme built on a frozen pre-trained acoustic model, attaining performance superior to DP-based stochastic gradient descent (DPSGD). Next, we insert residual adapters (RAs) between layers of the frozen pre-trained acoustic model. The RAs significantly reduce training cost and training time with only a negligible performance drop. Evaluated on the open-access Multilingual Spoken Words (MLSW) dataset, our solution reduces the number of trainable parameters by 97.5% using RAs, with only a 4% performance drop relative to fine-tuning the full cross-lingual speech classifier, while preserving DP guarantees.
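The parameter-efficiency claim comes from the bottleneck shape of a residual adapter: only a small down-projection and up-projection are trained while the surrounding pre-trained layers stay frozen. A minimal numpy sketch of that idea, assuming a standard down-project/ReLU/up-project adapter with a skip connection; the function name, dimensions, and zero initialization are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def residual_adapter(x, w_down, w_up):
    """Bottleneck residual adapter: project down, apply a nonlinearity,
    project back up, then add the skip connection. Only w_down and w_up
    would be trained; the frozen pre-trained layers around the adapter
    are untouched. (Illustrative sketch, not the authors' implementation.)"""
    h = np.maximum(0.0, x @ w_down)   # down-projection + ReLU
    return x + h @ w_up               # up-projection + residual skip

# Hypothetical sizes: hidden dim 512, bottleneck dim 16, so the adapter
# trains 2 * 512 * 16 weights instead of a full 512 x 512 layer.
rng = np.random.default_rng(0)
d, r = 512, 16
x = rng.standard_normal((1, d))
w_down = rng.standard_normal((d, r)) * 0.01
w_up = np.zeros((r, d))  # zero init: the adapter starts as an identity map
y = residual_adapter(x, w_down, w_up)
```

With the up-projection initialized to zero, the adapter initially passes activations through unchanged, so inserting it cannot degrade the frozen model before training begins.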
