TransLIST: A Transformer-Based Linguistically Informed Sanskrit Tokenizer

October 21, 2022 · Entered Twilight · 🏛 Conference on Empirical Methods in Natural Language Processing

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: Hackathon_data, LICENSE.md, LREC-Data, README.md, SIGHUM_embeds, V0, __pycache__, constrained_inference.py, devconvert, embedding_gen.py, fastNLP_module.py, fastnlp-copy, gpu_utils.py, hack_shr_lat_gen.py, lang_model_sanskrit.pkl, load_data.py, lrec_maker.py, ngram_lat_gen.py, paths.py, readme.txt, requirements.sh, set_hack_ngram.sh, set_hack_shr.sh, set_sighum_ngram.sh, set_sighum_shr.sh, setup.sh, skt, sktWS, tlat0.yml, utils.py

Authors: Jivnesh Sandhan, Rathin Singha, Narein Rao, Suvendu Samanta, Laxmidhar Behera, Pawan Goyal
arXiv ID: 2210.11753
Category: cs.CL: Computation & Language
Citations: 15
Venue: Conference on Empirical Methods in Natural Language Processing
Repository: https://github.com/rsingha108/TransLIST ⭐ 7
Last Checked: 2 months ago
Abstract
Sanskrit Word Segmentation (SWS) is essential in making digitized texts available and in deploying downstream tasks. It is, however, non-trivial because of the sandhi phenomenon that modifies the characters at the word boundaries, and needs special treatment. Existing lexicon-driven approaches for SWS make use of Sanskrit Heritage Reader, a lexicon-driven shallow parser, to generate the complete candidate solution space, over which various methods are applied to produce the most valid solution. However, these approaches fail while encountering out-of-vocabulary tokens. On the other hand, purely engineering methods for SWS have made use of recent advances in deep learning, but cannot make use of the latent word information when it is available. To mitigate the shortcomings of both families of approaches, we propose the Transformer-based Linguistically Informed Sanskrit Tokenizer (TransLIST), consisting of (1) a module that encodes the character input along with latent word information, which takes into account the sandhi phenomenon specific to SWS and is apt to work with partial or no candidate solutions, (2) a novel soft-masked attention to prioritize potential candidate words and (3) a novel path ranking algorithm to rectify the corrupted predictions. Experiments on the benchmark datasets for SWS show that TransLIST outperforms the current state-of-the-art system by an average 7.2 points absolute gain in terms of the perfect match (PM) metric. The codebase and datasets are publicly available at https://github.com/rsingha108/TransLIST.
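To make the "soft-masked attention" idea concrete: instead of hard-masking attention to candidate words only, a soft mask adds a bias to the attention logits of positions belonging to latent candidate words, so they are prioritized without excluding everything else. The sketch below is a minimal, illustrative toy in plain Python; the function name, the additive-bias formulation, and the `alpha` strength parameter are assumptions for illustration, not the paper's actual implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def soft_masked_attention(q, k, v, candidate_mask, alpha=1.0):
    """Toy scaled dot-product attention with an additive soft mask.

    q, k, v: lists of n vectors (lists of floats).
    candidate_mask[i][j] = 1.0 when key position j belongs to a latent
    candidate word relevant to query position i, else 0.0. A bias of
    `alpha` is ADDED to candidate logits (soft masking), rather than
    setting non-candidates to -inf (hard masking). `alpha` is a
    hypothetical knob controlling how strongly candidates are favored.
    """
    d = len(q[0])
    outputs, all_weights = [], []
    for i in range(len(q)):
        # Raw scaled dot-product scores, plus the soft-mask bias.
        scores = [
            sum(qi * kj for qi, kj in zip(q[i], k[j])) / math.sqrt(d)
            + alpha * candidate_mask[i][j]
            for j in range(len(k))
        ]
        w = softmax(scores)
        # Weighted sum of value vectors for this query position.
        outputs.append([sum(w[j] * v[j][t] for j in range(len(v)))
                        for t in range(len(v[0]))])
        all_weights.append(w)
    return outputs, all_weights
```

Because the bias is additive rather than absolute, a position outside the candidate set can still receive attention if its content score is high, which is what lets this style of model cope with partial or missing candidate solutions.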
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago