Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations

September 06, 2020 · Declared Dead · 🏛 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Nghi D. Q. Bui, Yijun Yu, Lingxiao Jiang
arXiv ID: 2009.02731
Category: cs.SE: Software Engineering
Cross-listed: cs.AI, cs.LG, cs.PL
Citations: 135
Venue: Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Last Checked: 2 months ago
Abstract
We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code, which can be applied to code retrieval tasks that lack labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization. The key innovation is that we train the source code model to recognize similar and dissimilar code snippets through a contrastive learning objective. To do so, we use a set of semantic-preserving transformation operators to generate code snippets that are syntactically diverse but semantically equivalent. Through extensive experiments, we show that code models pre-trained by Corder substantially outperform baselines on code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks.
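
The recipe in the abstract is concrete enough to sketch: a transformation operator produces a semantically equivalent variant of each snippet, and a contrastive loss trains an encoder to place a snippet near its variant and far from the rest of the batch. The sketch below is a hypothetical illustration assuming an NT-Xent-style objective (as popularized by SimCLR) and a toy variable-renaming operator; the function names, operator choice, and temperature are placeholders, not Corder's actual implementation.

    # Hypothetical sketch of the pre-training recipe from the abstract, not the
    # authors' code: a semantic-preserving transform yields positive pairs, and
    # an NT-Xent-style contrastive loss separates them from in-batch negatives.
    import keyword
    import re

    import torch
    import torch.nn.functional as F


    def rename_variables(code: str) -> str:
        """Toy semantic-preserving operator: consistently rename identifiers.

        The paper's operator set is richer (variable renaming is one example of
        a transformation that changes syntax but not semantics); this
        keyword-aware regex rename is only an illustration and would still
        touch library names in real code.
        """
        idents = sorted({t for t in re.findall(r"\b[A-Za-z_]\w*\b", code)
                         if not keyword.iskeyword(t)})
        mapping = {name: f"v{i}" for i, name in enumerate(idents)}
        return re.sub(r"\b[A-Za-z_]\w*\b",
                      lambda m: mapping.get(m.group(0), m.group(0)), code)


    def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                     tau: float = 0.5) -> torch.Tensor:
        """Contrastive objective: row i of z1 and row i of z2 embed the same
        program (original vs. transformed); every other row is a negative."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
        sim = z @ z.t() / tau                               # scaled cosine sims
        sim.fill_diagonal_(float("-inf"))                   # drop self-pairs
        n = z1.size(0)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

A training step would encode a batch of snippets and their transformed variants with any code encoder and minimize nt_xent_loss(encoder(batch), encoder(transformed)); the frozen encoder's vectors then serve retrieval directly, while summarization fine-tunes the encoder, matching the two usage modes the abstract describes.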
Community shame: Not yet rated

📜 Similar Papers

In the same crypt · Software Engineering

Died the same way · 👻 Ghosted