Incorporating Chinese Characters of Words for Lexical Sememe Prediction

June 17, 2018 · Entered Twilight · 🏛 Annual Meeting of the Association for Computational Linguistics

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 7.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, CSP.sh, Ensemble_model_CSP.py, Ensemble_model_external.py, Ensemble_model_internal.py, LICENSE, README.md, SPCSE.sh, SPCSE_prediction.py, SPCSE_train.py, SPWCF.sh, SPWCF_prediction.py, Sememe_PMI_Matrix_Generator.py, data_generator.sh, hownet.txt, hownet_corpus_data_picker.py, pickle_version_change.py, scorer.py, test_data_generator.py, work.sh

Authors: Huiming Jin, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, Leyu Lin
arXiv ID: 1806.06349
Category: cs.CL: Computation & Language
Cross-listed: cs.AI, cs.LG
Citations: 28
Venue: Annual Meeting of the Association for Computational Linguistics
Repository: https://github.com/thunlp/Character-enhanced-Sememe-Prediction ⭐ 24
Last Checked: 1 month ago
Abstract
Sememes are minimum semantic units of concepts in human languages, such that each word sense is composed of one or multiple sememes. Words are usually manually annotated with their sememes by linguists, and form linguistic common-sense knowledge bases widely used in various NLP tasks. Recently, the lexical sememe prediction task has been introduced. It consists of automatically recommending sememes for words, which is expected to improve annotation efficiency and consistency. However, existing methods of lexical sememe prediction typically rely on the external context of words to represent the meaning, which usually fails to deal with low-frequency and out-of-vocabulary words. To address this issue for Chinese, we propose a novel framework to take advantage of both internal character information and external context information of words. We experiment on HowNet, a Chinese sememe knowledge base, and demonstrate that our framework outperforms state-of-the-art baselines by a large margin, and maintains a robust performance even for low-frequency words.
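The abstract's core idea is combining internal (character-level) and external (context-level) word information to recommend sememes. Below is a minimal, hypothetical sketch of that idea, not the paper's actual model: it scores each candidate sememe by dot-product similarity against a word vector formed as a weighted mix of averaged character embeddings and a context embedding. All names, embeddings, and the mixing weight `alpha` are illustrative assumptions; the real framework learns these representations from HowNet annotations and a large corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 hypothetical candidate sememes, a 2-character word, dim-8 vectors.
sememes = ["human", "occupation", "study"]
dim = 8
sememe_emb = rng.normal(size=(len(sememes), dim))  # one vector per sememe
char_emb = rng.normal(size=(2, dim))               # internal: character vectors
context_emb = rng.normal(size=(dim,))              # external: context vector

def predict_sememes(char_emb, context_emb, sememe_emb, alpha=0.5, top_k=2):
    """Rank sememes by similarity to a mix of internal and external signals."""
    internal = char_emb.mean(axis=0)               # average character vectors
    word = alpha * internal + (1 - alpha) * context_emb
    scores = sememe_emb @ word                     # dot-product similarity
    order = np.argsort(-scores)[:top_k]            # highest-scoring first
    return [(sememes[i], float(scores[i])) for i in order]

print(predict_sememes(char_emb, context_emb, sememe_emb))
```

For a low-frequency word, `context_emb` would be unreliable, which is why a character-based internal term can keep predictions robust; the paper's ensemble models (see `Ensemble_model_*.py` in the repository) combine both signals in a learned rather than fixed-weight way.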
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computation & Language

🌅 🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago