Old Age
Document-level Relation Extraction with Cross-sentence Reasoning Graph
March 07, 2023 · Entered Twilight · Pacific-Asia Conference on Knowledge Discovery and Data Mining
Repo contents: .gitignore, LICENSE, configs, data_processing, requirements.txt, scripts, src
Authors
Hongfei Liu, Zhao Kang, Lizong Zhang, Ling Tian, Fujun Hua
arXiv ID
2303.03912
Category
cs.CL: Computation & Language
Cross-listed
cs.AI, cs.LG, cs.SI
Citations
29
Venue
Pacific-Asia Conference on Knowledge Discovery and Data Mining
Repository
https://github.com/UESTC-LHF/GRACR
⭐ 7
Last Checked
2 months ago
Abstract
Relation extraction (RE) has recently moved from the sentence level to the document level, which requires aggregating document information and using entities and mentions for reasoning. Existing works put entity nodes and mention nodes with similar representations in a document-level graph, whose complex edges may incur redundant information. Furthermore, existing studies focus only on entity-level reasoning paths without considering global cross-sentence interactions among entities. To these ends, we propose a novel document-level RE model with a GRaph information Aggregation and Cross-sentence Reasoning network (GRACR). Specifically, a simplified document-level graph is constructed to model the semantic information of all mentions and sentences in a document, and an entity-level graph is designed to explore relations of long-distance cross-sentence entity pairs. Experimental results show that GRACR achieves excellent performance on two public datasets of document-level RE. It is especially effective in extracting potential relations of cross-sentence entity pairs. Our code is available at https://github.com/UESTC-LHF/GRACR.
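The abstract's two-graph design can be sketched in plain Python: a simplified document-level graph over mention and sentence nodes, and an entity-level graph that gives cross-sentence entity pairs a multi-hop reasoning path. This is an illustrative toy construction only (the function names, the adjacency rules, and the toy document are assumptions for the sketch; the actual GRACR model operates on learned representations, not symbolic edges).

```python
# Illustrative sketch of a two-graph construction in the spirit of GRACR.
# All names and edge rules here are hypothetical simplifications.

def build_document_graph(sentences, mentions):
    """Simplified document-level graph over mention and sentence nodes.

    Edges: each mention node links to the sentence node that contains it,
    and consecutive sentence nodes are linked so information can flow
    across the document without a fully connected mention graph.
    """
    edges = set()
    for m_id, (entity, sent_id) in mentions.items():
        edges.add((("mention", m_id), ("sent", sent_id)))
    for i in range(len(sentences) - 1):
        edges.add((("sent", i), ("sent", i + 1)))
    return edges

def build_entity_graph(mentions):
    """Entity-level graph: connect entities whose mentions co-occur in a
    sentence, so long-distance pairs gain a multi-hop reasoning path
    through bridging entities."""
    by_sentence = {}
    for m_id, (entity, sent_id) in mentions.items():
        by_sentence.setdefault(sent_id, set()).add(entity)
    edges = set()
    for ents in by_sentence.values():
        for a in ents:
            for b in ents:
                if a < b:
                    edges.add((a, b))
    return edges

# Toy document: entity A is mentioned in sentences 0 and 1,
# B in sentence 1, C in sentence 2.
sentences = ["s0", "s1", "s2"]
mentions = {0: ("A", 0), 1: ("A", 1), 2: ("B", 1), 3: ("C", 2)}
doc_edges = build_document_graph(sentences, mentions)
ent_edges = build_entity_graph(mentions)
```

In this toy example A and C never share a sentence, so relating them requires reasoning across the entity graph (A–B) plus the sentence chain, which is the kind of cross-sentence pair the abstract highlights.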
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
Old Age
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻
Ghosted