Towards Building a Multilingual Sememe Knowledge Base: Predicting Sememes for BabelNet Synsets
December 04, 2019 · Entered Twilight · AAAI Conference on Artificial Intelligence
"Last commit was 5.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: BabelSememe, Ensemble, LICENSE, README.md, SPBS-RR, SPBS-SR, data-all, data-noun
Authors
Fanchao Qi, Liang Chang, Maosong Sun, Sicong Ouyang, Zhiyuan Liu
arXiv ID
1912.01795
Category
cs.CL: Computation & Language
Cross-listed
cs.AI
Citations
19
Venue
AAAI Conference on Artificial Intelligence
Repository
https://github.com/thunlp/BabelNet-Sememe-Prediction
⭐ 20
Last Checked
2 months ago
Abstract
A sememe is defined as the minimum semantic unit of human languages. Sememe knowledge bases (KBs), which contain words annotated with sememes, have been successfully applied to many NLP tasks. However, existing sememe KBs are built on only a few languages, which hinders their widespread utilization. To address the issue, we propose to build a unified sememe KB for multiple languages based on BabelNet, a multilingual encyclopedic dictionary. We first build a dataset serving as the seed of the multilingual sememe KB. It contains manually annotated sememes for over 15 thousand synsets (the entries of BabelNet). Then, we present a novel task of automatic sememe prediction for synsets, aiming to expand the seed dataset into a usable KB. We also propose two simple and effective models, which exploit different kinds of synset information. Finally, we conduct quantitative and qualitative analyses to explore important factors and difficulties in the task. All the source code and data of this work can be obtained at https://github.com/thunlp/BabelNet-Sememe-Prediction.
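The abstract mentions two simple models that exploit different kinds of synset information; the repository's SPBS-SR directory suggests one of them works from semantic representations of synsets. The sketch below is a hypothetical illustration of the general idea behind similarity-based sememe prediction, not the authors' implementation: it scores candidate sememes for an unannotated synset by aggregating the sememes of its nearest annotated synsets in an embedding space. The embeddings, sememe labels, and toy data are placeholders; in practice one would use BabelNet synset embeddings and the BabelSememe annotations.

```python
# Minimal, illustrative sketch of similarity-based sememe prediction.
# NOT the authors' SPBS code; all data below are hypothetical placeholders.
import numpy as np

def predict_sememes(target_vec, synset_vecs, synset_sememes, top_k=3):
    """Score candidate sememes for a target synset by aggregating the
    sememes of its most similar annotated synsets (cosine similarity)."""
    norms = np.linalg.norm(synset_vecs, axis=1) * np.linalg.norm(target_vec)
    sims = synset_vecs @ target_vec / np.clip(norms, 1e-12, None)

    scores = {}
    for idx in np.argsort(-sims)[:top_k]:        # k nearest annotated synsets
        for sememe in synset_sememes[idx]:
            scores[sememe] = scores.get(sememe, 0.0) + float(sims[idx])
    return sorted(scores.items(), key=lambda x: -x[1])

# Toy example: 3 annotated synsets with 4-dimensional embeddings.
vecs = np.array([[0.9, 0.1, 0.0, 0.0],    # e.g. "apple" (fruit sense)
                 [0.8, 0.2, 0.1, 0.0],    # e.g. "pear"
                 [0.0, 0.1, 0.9, 0.3]])   # e.g. "computer"
sememes = [["fruit", "food"], ["fruit", "food"], ["machine", "artifact"]]
target = np.array([0.85, 0.15, 0.05, 0.0])  # an unannotated fruit-like synset

print(predict_sememes(target, vecs, sememes))
# Sememes of the nearest fruit synsets rank highest, e.g. "fruit", "food".
```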
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · Ghosted
Language Models are Few-Shot Learners · R.I.P. · Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · Ghosted