ENGAGE: Explanation Guided Data Augmentation for Graph Representation Learning

July 03, 2023 · Entered Twilight · 🏛 ECML/PKDD

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: ENGAGE_Appendix.pdf, LICENSE, README.md, graph, node

Authors: Yucheng Shi, Kaixiong Zhou, Ninghao Liu
arXiv ID: 2307.01053
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.IT
Citations: 13
Venue: ECML/PKDD
Repository: https://github.com/sycny/ENGAGE ⭐ 4
Last Checked: 2 months ago
Abstract
Contrastive learning methods, owing to their effectiveness in representation learning, have been widely applied to modeling graph data. Random perturbation is commonly used to build contrastive views for graphs, but it can accidentally break graph structures and lead to suboptimal performance. In addition, graph data is usually highly abstract, making it hard to extract intuitive meanings and design more informed augmentation schemes. Effective representations should preserve the key characteristics of the data and discard superfluous information. In this paper, we propose ENGAGE (ExplaNation Guided data AuGmEntation), where explanation guides the contrastive augmentation process to preserve the key parts of graphs and explore removing superfluous information. Specifically, we design an efficient unsupervised explanation method, called smoothed activation map, as an indicator of node importance in representation learning. We then design two data augmentation schemes on graphs for perturbing structural and feature information, respectively. We also provide a justification for the proposed method within the framework of information theory. Experiments on both graph-level and node-level tasks, across various model architectures and real-world graphs, demonstrate the effectiveness and flexibility of ENGAGE. The code of ENGAGE can be found at: https://github.com/sycny/ENGAGE.
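The abstract's core idea can be illustrated with a minimal sketch: score nodes with a smoothed-activation-style importance measure, then drop edges with a probability that shrinks as the endpoints' importance grows, so key structure is more likely to survive the augmentation. This is a hypothetical illustration of the general recipe, not the authors' implementation; the function names, the averaging-based importance score, and the drop-rate scaling are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_activation_map(node_embeddings):
    # Hypothetical importance score: average each node's non-negative
    # embedding activations, then min-max normalize to [0, 1].
    scores = np.maximum(node_embeddings, 0).mean(axis=1)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def guided_edge_drop(edges, importance, drop_rate=0.2):
    # Explanation-guided structural perturbation (assumed form):
    # edges between important nodes get a lower drop probability,
    # so the augmented view preserves the key parts of the graph.
    kept = []
    for u, v in edges:
        p_drop = drop_rate * (1.0 - 0.5 * (importance[u] + importance[v]))
        if rng.random() >= p_drop:
            kept.append((u, v))
    return kept

# Toy usage: 6 nodes with 8-dim embeddings, a small edge list.
emb = rng.normal(size=(6, 8))
imp = smoothed_activation_map(emb)
view = guided_edge_drop([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)], imp)
```

The same importance scores could drive the second (feature-level) scheme by masking feature dimensions of low-importance nodes more aggressively.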
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning