Learned or Memorized? Quantifying Memorization Advantage in Code LLMs
April 15, 2026 · Grace Period · ICSE 2026
Authors
Djiré Albérick Euraste, Kaboré Abdoul Kader, Jordan Samhi, Earl T. Barr, Jacques Klein, Tegawendé F. Bissyandé
arXiv ID
2604.13997
Category
cs.SE: Software Engineering
Citations
0
Venue
ICSE 2026
Abstract
The lack of transparency about the code datasets used to train large language models (LLMs) makes data leakage difficult to detect, evaluate, and mitigate. We present a perturbation-based method to quantify memorization advantage in code LLMs, defined as the performance gap between likely seen and likely unseen inputs. We evaluate 8 open-source code LLMs on 19 benchmarks spanning four task families: code generation, code understanding, vulnerability detection, and bug fixing. Sensitivity patterns vary widely across models and tasks. For example, StarCoder reaches high sensitivity on some benchmarks (up to 0.8), while QwenCoder stays lower (mostly below 0.4), suggesting differences in generalization behavior. Task categories also differ: code summarization tends to show low sensitivity, whereas test generation shows substantially higher sensitivity. We then analyze two widely discussed benchmarks, CVEFixes and Defects4J, which are often suspected of leakage. Contrary to common concerns, both show low memorization advantage across models: CVEFixes remains below 0.1, and Defects4J scores lower than other program repair benchmarks. These results suggest that, on these datasets, models rely more on learned generalization than on direct memorization. Overall, our findings provide evidence that memorization risk is highly task- and model-dependent, and they highlight the need for stronger evaluation protocols, especially in security-focused settings.
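The measurement described in the abstract reduces to a simple loop: score the model on each original (likely seen) input, score it again on a semantics-preserving perturbation of that input (likely unseen), and average the gap. The following is a minimal sketch of that idea, not the paper's actual artifact; the `perturb` and `score` helpers and the benchmark format are illustrative assumptions.

```python
# Hypothetical sketch of a perturbation-based memorization-advantage probe.
# `perturb`, `score`, and the (prompt, reference) benchmark format are
# assumptions for illustration, not the paper's released code.

from statistics import mean
from typing import Callable, Iterable, Tuple


def perturb(prompt: str) -> str:
    # A semantics-preserving rewrite; a trivial identifier rename stands in
    # for the paper's perturbations, which are not reproduced here.
    return prompt.replace("result", "output_value")


def score(completion: str, reference: str) -> float:
    # Exact-match scoring in [0, 1], as a placeholder for metrics
    # such as pass@1.
    return 1.0 if completion.strip() == reference.strip() else 0.0


def memorization_advantage(
    model: Callable[[str], str],
    benchmark: Iterable[Tuple[str, str]],
) -> float:
    # Mean score gap between original (likely seen) inputs and their
    # perturbed (likely unseen) counterparts; larger gaps suggest that
    # performance depends on having seen the exact input.
    gaps = [
        score(model(prompt), reference)
        - score(model(perturb(prompt)), reference)
        for prompt, reference in benchmark
    ]
    return mean(gaps)
```

Under this reading, a gap near zero, as the abstract reports for CVEFixes, is consistent with the model generalizing rather than recalling a memorized training instance.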
Similar Papers
In the same crypt · Software Engineering
GraphCodeBERT: Pre-training Code Representations with Data Flow
DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars
Microservices: yesterday, today, and tomorrow
Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks