The Architectural Bottleneck Principle

November 11, 2022 · Declared Dead · 🏛 Conference on Empirical Methods in Natural Language Processing

📜 CAUSE OF DEATH: Death by README
Repo has only a README

Repo contents: .gitignore, LICENSE, README.md

Authors: Tiago Pimentel, Josef Valvoda, Niklas Stoehr, Ryan Cotterell
arXiv ID: 2211.06420
Category: cs.CL (Computation & Language)
Cross-listed: cs.LG
Citations: 5
Venue: Conference on Empirical Methods in Natural Language Processing
Repository: https://github.com/rycolab/attentional-probe (⭐ 3)
Last checked: 1 month ago
Abstract
In this paper, we seek to measure how much information a component in a neural network could extract from the representations fed into it. Our work stands in contrast to prior probing work, most of which investigates how much information a model's representations contain. This shift in perspective leads us to propose a new principle for probing, the architectural bottleneck principle: In order to estimate how much information a given component could extract, a probe should look exactly like the component. Relying on this principle, we estimate how much syntactic information is available to transformers through our attentional probe, a probe that exactly resembles a transformer's self-attention head. Experimentally, we find that, in three models (BERT, ALBERT, and RoBERTa), a sentence's syntax tree is mostly extractable by our probe, suggesting these models have access to syntactic information while composing their contextual representations. Whether this information is actually used by these models, however, remains an open question.
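Since the repository itself is empty, here is a minimal sketch of what an "attentional probe" in the abstract's sense could look like: a probe parametrized exactly like a single self-attention head's score function (query and key projections), trained on frozen contextual representations to attend from each token to its syntactic head. This is an illustration of the paper's principle, not the authors' implementation; the class name, head dimension, and cross-entropy head-prediction objective are all assumptions.

```python
# Hypothetical sketch of an attentional probe: a probe shaped exactly like
# one transformer self-attention head, per the architectural bottleneck
# principle. Illustrative only; not the authors' released code.
import torch
import torch.nn as nn

class AttentionalProbe(nn.Module):
    def __init__(self, hidden_dim: int, head_dim: int = 64):
        super().__init__()
        # Same parametrization as a self-attention head's scoring function.
        self.query = nn.Linear(hidden_dim, head_dim)
        self.key = nn.Linear(hidden_dim, head_dim)
        self.scale = head_dim ** -0.5

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        # reps: frozen contextual representations, shape (seq_len, hidden_dim).
        q, k = self.query(reps), self.key(reps)
        # Unnormalized attention scores: row i holds token i's score for
        # every candidate syntactic head j.
        return (q @ k.T) * self.scale  # shape (seq_len, seq_len)

# Illustrative training step: treat each row of attention scores as a
# classifier over that token's syntactic head and minimize cross-entropy.
hidden_dim, seq_len = 768, 10                       # e.g. BERT-base hidden size
probe = AttentionalProbe(hidden_dim)
reps = torch.randn(seq_len, hidden_dim)             # stand-in for frozen BERT outputs
gold_heads = torch.randint(0, seq_len, (seq_len,))  # stand-in gold dependency heads
loss = nn.functional.cross_entropy(probe(reps), gold_heads)
loss.backward()
```

Under this reading, how well the probe recovers the gold tree estimates how much syntactic information a real attention head could extract from the same representations, which is exactly the quantity the abstract says the paper measures.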
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computation & Language

🌅 🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago

Died the same way – 📜 Death by README