Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?
August 28, 2018 · Entered Twilight · Conference on Empirical Methods in Natural Language Processing
"Last commit was 7.0 years ago (β₯5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: README.md, contextual_decomposition, data, images, morpho_tagging
Authors
Fréderic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, Thomas Demeester
arXiv ID
1808.09551
Category
cs.CL: Computation & Language
Cross-listed
cs.AI,
cs.LG
Citations
18
Venue
Conference on Empirical Methods in Natural Language Processing
Repository
https://github.com/FredericGodin/ContextualDecomposition-NLP
★ 13
Last Checked
2 months ago
Abstract
Character-level features are currently used in different neural network-based natural language processing algorithms. However, little is known about the character-level patterns those models learn. Moreover, models are often compared only quantitatively while a qualitative analysis is missing. In this paper, we investigate which character-level patterns neural networks learn and if those patterns coincide with manually-defined word segmentations and annotations. To that end, we extend the contextual decomposition technique (Murdoch et al. 2018) to convolutional neural networks, which allows us to compare convolutional neural networks and bidirectional long short-term memory networks. We evaluate and compare these models for the task of morphological tagging on three morphologically different languages and show that these models implicitly discover understandable linguistic rules. Our implementation can be found at https://github.com/FredericGodin/ContextualDecomposition-NLP.
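The core idea behind contextual decomposition is to pick a group of characters (for example a suffix), split every pre-activation into a "relevant" part driven by those characters and an "irrelevant" part driven by everything else, and linearize the nonlinearity so the split carries through to the output. The sketch below is a minimal NumPy illustration of that split for a single 1D convolutional filter over character embeddings. It is not the authors' released implementation: the function names (`linearize_relu`, `cd_conv1d_filter`) are hypothetical, the bias is assigned to the irrelevant part as a simplification, and the ReLU split follows the Shapley-style averaging described by Murdoch et al. (2018).

```python
import numpy as np

def linearize_relu(rel, irr):
    """Split relu(rel + irr) into relevant / irrelevant parts using a
    Shapley-style average over the two insertion orders (following the
    linearization idea of Murdoch et al. 2018)."""
    relu = lambda x: np.maximum(x, 0.0)
    rel_out = 0.5 * (relu(rel) + (relu(rel + irr) - relu(irr)))
    irr_out = relu(rel + irr) - rel_out  # parts sum back to relu(rel + irr)
    return rel_out, irr_out

def cd_conv1d_filter(x, w, b, relevant_positions):
    """Contextual decomposition of one 1D conv filter over a character sequence.

    x : (seq_len, emb_dim) character embeddings
    w : (width, emb_dim)   filter weights
    b : scalar bias
    relevant_positions : set of character indices whose contribution we isolate
                         (e.g. the indices of a suffix).

    Returns (rel, irr): per-window relevant / irrelevant activations whose sum
    equals the ordinary feature map.
    """
    width = w.shape[0]
    n_windows = x.shape[0] - width + 1
    rel = np.zeros(n_windows)
    irr = np.zeros(n_windows)
    for t in range(n_windows):
        # Split the pre-activation by which input characters feed this window.
        z_rel = sum(w[i] @ x[t + i] for i in range(width)
                    if (t + i) in relevant_positions)
        z_irr = sum(w[i] @ x[t + i] for i in range(width)
                    if (t + i) not in relevant_positions)
        # Simplification: the bias is grouped with the irrelevant part.
        rel[t], irr[t] = linearize_relu(z_rel, z_irr + b)
    return rel, irr
```

Aggregating `rel` over the feature map (and over all filters feeding the tagger) gives a score for the chosen character group, which is roughly the quantity the paper compares against manually defined segmentations to check whether the learned patterns line up with linguistic rules.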
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · Ghosted
Language Models are Few-Shot Learners · R.I.P. · Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · Ghosted