Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve

October 20, 2022 · Entered Twilight · 🏛 Neural Information Processing Systems

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, LICENSE, README.md, configs, dataloaders.py, linear.py, non_linear.py, requirements.txt, run_glue_test.py, test_gpt2.py, train_bilingual_gpt2.py, utils.py, visuals

Authors: Giannis Daras, Negin Raoof, Zoi Gkalitsiou, Alexandros G. Dimakis
arXiv ID: 2210.11618
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.CL
Citations: 4
Venue: Neural Information Processing Systems
Repository: https://github.com/giannisdaras/multilingual_robustness
⭐ Stars: 10
Last Checked: 2 months ago
Abstract
We find a surprising connection between multitask learning and robustness to neuron failures. Our experiments show that bilingual language models retain higher performance under various neuron perturbations, such as random deletions, magnitude pruning, and weight noise, compared to equivalent monolingual ones. We provide a theoretical justification for this robustness by mathematically analyzing linear representation learning and showing that multitasking creates more robust representations. Our analysis connects robustness to spectral properties of the learned representation and proves that multitasking leads to higher robustness for diverse task vectors. We open-source our code and models: https://github.com/giannisdaras/multilingual_robustness
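The three perturbations named in the abstract are simple weight-level operations. Below is a minimal sketch of what they could look like on a Hugging Face GPT-2 checkpoint; the function names, the 30% pruning fraction, and the choice to perturb only MLP weights are illustrative assumptions, not the authors' exact protocol (for that, see test_gpt2.py in the repository).

```python
# Hedged sketch: three neuron-perturbation operators from the abstract,
# applied to a GPT-2 model. Details are assumptions, not the paper's setup.
import torch
from transformers import GPT2LMHeadModel

def random_deletion(weight: torch.Tensor, frac: float) -> torch.Tensor:
    """Zero out a uniformly random fraction `frac` of the weights."""
    mask = torch.rand_like(weight) >= frac
    return weight * mask

def magnitude_prune(weight: torch.Tensor, frac: float) -> torch.Tensor:
    """Zero out the `frac` fraction of weights with smallest magnitude."""
    k = int(frac * weight.numel())
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

def weight_noise(weight: torch.Tensor, std_scale: float) -> torch.Tensor:
    """Add i.i.d. Gaussian noise scaled by the layer's own weight std."""
    return weight + std_scale * weight.std() * torch.randn_like(weight)

model = GPT2LMHeadModel.from_pretrained("gpt2")
with torch.no_grad():
    for name, param in model.named_parameters():
        # Assumption for illustration: perturb only 2-D MLP weight matrices.
        if "mlp" in name and param.dim() == 2:
            param.copy_(magnitude_prune(param, frac=0.3))
# The perturbed model would then be evaluated (e.g., perplexity on held-out
# text) and compared between bilingual and monolingual checkpoints.
```

The same loop with `random_deletion` or `weight_noise` swapped in covers the other two perturbation types, so robustness curves can be traced by sweeping the fraction or noise scale.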
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning