Attention is not Explanation

February 26, 2019 · Entered Twilight · 🏛 North American Chapter of the Association for Computational Linguistics

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, Attn_Classification.ipynb, Attn_Classification_1.ipynb, ExperimentsBC.py, ExperimentsQA.py, GRes.svg, Generate Adv Examples for Latex.ipynb, LICENSE, LRes.svg, README.md, Trainers, Untitled.ipynb, __init__.py, common_code, configurations.py, docs, graph_outputs, model, preprocess, random_state_readmission_attention.txt, random_state_readmission_attention_log.txt, requirements.txt, train_and_run_experiments_bc.py, train_and_run_experiments_ehr.py, train_and_run_experiments_qa.py

Authors: Sarthak Jain, Byron C. Wallace
arXiv ID: 1902.10186
Category: cs.CL (Computation & Language)
Cross-listed: cs.AI
Citations: 1.6K
Venue: North American Chapter of the Association for Computational Linguistics
Repository: https://github.com/successar/AttentionExplanation ⭐ 323
Last Checked: 2 months ago
Abstract
Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work, we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful "explanations" for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code for all experiments is available at https://github.com/successar/AttentionExplanation.
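
The abstract points to two concrete diagnostics: (1) correlating attention weights with gradient-based measures of feature importance, and (2) constructing counterfactual attention distributions to see whether the prediction changes. The sketch below illustrates both checks in miniature; it is not the repository's code. The toy additive-attention model, its dimensions, the random input, and the choice of Kendall's tau as the rank-correlation measure are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of the paper's two diagnostics:
# (1) rank-correlate attention weights with gradient-based importance,
# (2) permute the attention distribution and measure the output change.
import torch
import torch.nn as nn
from scipy.stats import kendalltau

torch.manual_seed(0)

class AttnClassifier(nn.Module):
    """Toy additive-attention binary classifier (illustrative, untrained)."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)   # per-token attention score
        self.out = nn.Linear(dim, 1)     # binary logit from context vector

    def forward(self, tokens):
        h = self.emb(tokens)                                # (seq, dim)
        attn = torch.softmax(self.score(h).squeeze(-1), 0)  # (seq,)
        ctx = (attn.unsqueeze(-1) * h).sum(0)               # weighted sum
        return torch.sigmoid(self.out(ctx)), attn, h

model = AttnClassifier()
tokens = torch.randint(0, 100, (20,))    # one random 20-token "document"

# Diagnostic 1: gradient-based importance, here |d prediction / d embedding|
# summed over embedding dimensions, compared to attention by Kendall's tau.
pred, attn, h = model(tokens)
h.retain_grad()                           # keep gradients on a non-leaf tensor
pred.backward()
grad_importance = h.grad.abs().sum(-1)
tau, _ = kendalltau(attn.detach().numpy(), grad_importance.numpy())
print(f"Kendall tau(attention, gradient importance): {tau:.3f}")

# Diagnostic 2: counterfactual attention -- randomly permute the learned
# weights and check how much the prediction actually moves.
with torch.no_grad():
    perm = attn[torch.randperm(attn.numel())]
    pred_perm = torch.sigmoid(model.out((perm.unsqueeze(-1) * h).sum(0)))
    print(f"|prediction change| under permuted attention: "
          f"{(pred - pred_perm).abs().item():.4f}")
```

On a trained model over a real dataset, a weak tau together with a near-zero prediction change under permuted attention is the pattern the paper reports; the untrained toy model here only demonstrates the mechanics of the two measurements.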
Community shame: Not yet rated

📜 Similar Papers

In the same crypt: Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago