Old Age
EDUKG: a Heterogeneous Sustainable K-12 Educational Knowledge Graph
October 21, 2022 · Entered Twilight · arXiv.org
Repo contents: .gitignore, .idea, README.md, edukgLinking, knowledgeTopicManagement, ontology.owl, rhetyper
Authors
Bowen Zhao, Jiuding Sun, Bin Xu, Xingyu Lu, Yuchen Li, Jifan Yu, Minghui Liu, Tingjian Zhang, Qiuyang Chen, Hanming Li, Lei Hou, Juanzi Li
arXiv ID
2210.12228
Category
cs.CL: Computation & Language
Cross-listed
cs.AI, cs.LG
Citations
8
Venue
arXiv.org
Repository
https://github.com/THU-KEG/EDUKG
⭐ 77
Last Checked
2 months ago
Abstract
Web and artificial intelligence technologies, especially the semantic web and knowledge graphs (KGs), have recently attracted significant attention in educational scenarios. Nevertheless, subject-specific KGs for K-12 education still lack sufficiency and sustainability from both the knowledge and data perspectives. To tackle these issues, we propose EDUKG, a heterogeneous sustainable K-12 Educational Knowledge Graph. We first design an interdisciplinary and fine-grained ontology for uniformly modeling knowledge and resources in K-12 education, defining 635 classes, 445 object properties, and 1314 datatype properties in total. Guided by this ontology, we propose a flexible methodology for interactively extracting factual knowledge from textbooks. Furthermore, we establish a general mechanism for EDUKG's sustainable maintenance, based on our proposed generalized entity linking system, which can dynamically index numerous heterogeneous resources and data against knowledge topics in EDUKG. We further evaluate EDUKG to illustrate its sufficiency, richness, and variability. We publish EDUKG with more than 252 million entities and 3.86 billion triplets. Our code and data repository is available at https://github.com/THU-KEG/EDUKG.
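The abstract's ontology statistics (635 classes, 445 object properties, 1314 datatype properties) can in principle be checked against the ontology.owl file listed in the repo contents above. The snippet below is a minimal sketch, not the authors' code: it assumes ontology.owl is a standard RDF/XML OWL document that rdflib can parse, and simply tallies the declared schema elements.

```python
# Minimal sketch: inspect the EDUKG ontology with rdflib.
# Assumption: ontology.owl (from the repo listing) is RDF/XML; adjust
# the format argument if the repo ships a different serialization.
from rdflib import Graph
from rdflib.namespace import OWL, RDF

g = Graph()
g.parse("ontology.owl", format="xml")

# Count declared classes and properties; the abstract reports
# 635 classes, 445 object properties, and 1314 datatype properties.
n_classes = len(set(g.subjects(RDF.type, OWL.Class)))
n_obj_props = len(set(g.subjects(RDF.type, OWL.ObjectProperty)))
n_data_props = len(set(g.subjects(RDF.type, OWL.DatatypeProperty)))

print(f"classes: {n_classes}")
print(f"object properties: {n_obj_props}")
print(f"datatype properties: {n_data_props}")
```

Counting explicit `owl:Class` declarations is only an approximation; ontologies that rely on inferred or anonymous classes may report different totals than the paper.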
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
Old Age
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P. · Ghosted
Language Models are Few-Shot Learners
R.I.P. · Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P. · Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P. · Ghosted