TriNet: stabilizing self-supervised learning from complete or slow collapse on ASR

December 12, 2022 · Declared Dead · 🏛 IEEE International Conference on Acoustics, Speech, and Signal Processing

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Lixin Cao, Jun Wang, Ben Yang, Dan Su, Dong Yu
arXiv ID: 2301.00656
Category: eess.AS (Audio & Speech)
Cross-listed: cs.CL, cs.LG
Citations: 4
Venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
Repository: https://github.com/tencent-ailab/
Last Checked: 2 months ago
Abstract
Self-supervised learning (SSL) models face the challenges of abrupt informational collapse or slow dimensional collapse. We propose TriNet, which introduces a novel triple-branch architecture for preventing collapse and stabilizing pre-training. TriNet learns the SSL latent embedding space and incorporates it into a higher-level space for predicting pseudo target vectors generated by a frozen teacher. Our experimental results show that the proposed method notably stabilizes and accelerates pre-training and achieves a relative word error rate reduction (WERR) of 6.06% compared to the state-of-the-art (SOTA) Data2vec on a downstream benchmark ASR task. We will release our code at https://github.com/tencent-ailab/.
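Since the released code is a dead link, here is a minimal sketch of the kind of setup the abstract describes: a student with a latent-embedding branch and a higher-level predictor branch, trained to match pseudo target vectors from a frozen (EMA-updated) teacher. All names, dimensions, and the linear encoders are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
D_IN, D_LAT, D_HI = 8, 4, 4

# Student branches: an encoder into the SSL latent space, plus a
# predictor that lifts latents into a higher-level space.
W_enc = rng.normal(scale=0.1, size=(D_IN, D_LAT))
W_pred = rng.normal(scale=0.1, size=(D_LAT, D_HI))

# Frozen teacher: an EMA copy of the student encoder that emits
# pseudo target vectors; it receives no gradient updates.
W_teacher = W_enc.copy()
EMA_DECAY = 0.99

def training_step(x, lr=0.05):
    """One sketched step: the student predicts the teacher's pseudo
    targets; only the student weights get gradient updates."""
    global W_enc, W_pred, W_teacher
    z = x @ W_enc          # student latent embedding
    y_hat = z @ W_pred     # higher-level prediction
    y = x @ W_teacher      # pseudo targets (treated as constants)
    err = y_hat - y
    loss = float(np.mean(err ** 2))
    # Manual gradients of the MSE loss w.r.t. the student weights.
    g_pred = z.T @ err / err.size
    g_enc = x.T @ (err @ W_pred.T) / err.size
    W_pred -= lr * g_pred
    W_enc -= lr * g_enc
    # EMA update keeps the teacher a slow-moving average of the student.
    W_teacher = EMA_DECAY * W_teacher + (1 - EMA_DECAY) * W_enc
    return loss

losses = [training_step(rng.normal(size=(16, D_IN))) for _ in range(50)]
```

The slow-moving teacher is what supplies stable targets: because it lags the student, the prediction problem cannot trivially collapse to a constant output the way a self-predicting student could.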
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Audio & Speech

Died the same way — 💀 404 Not Found