Don't Forget Its Variance! The Minimum Path Variance Principle for Accurate and Stable Score-Based Models

January 31, 2026 · Grace Period · 🏛 The Fourteenth International Conference on Learning Representations, 2026

โณ Grace Period
This paper is less than 90 days old. We give authors time to release their code before passing judgment.
Authors: Wei Chen, Jiacheng Li, Shigui Li, Zhiqi Lin, Junmei Yang, John Paisley, Delu Zeng
arXiv ID: 2602.00834
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, stat.ML
Citations: 1
Venue: The Fourteenth International Conference on Learning Representations, 2026
Abstract
Score-based methods are powerful across machine learning, but they face a paradox: theoretically path-independent, yet practically path-dependent. We resolve this by proving that practical training objectives differ from the ideal, ground-truth objective by a crucial, overlooked term: the path variance of the score function. We propose the MinPV (Minimum Path Variance) Principle to minimize this path variance. Our key contribution is deriving a closed-form expression for the variance, making optimization tractable. By parameterizing the path with a flexible Kumaraswamy Mixture Model, our method learns data-adaptive, low-variance paths without heuristic manual selection. This principled optimization of the complete objective yields more accurate and stable estimators, establishing new state-of-the-art results on challenging benchmarks and providing a general framework for optimizing score-based interpolation.
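The abstract does not spell out the parameterization, but a natural reading of a "Kumaraswamy Mixture Model" path is a convex combination of Kumaraswamy CDFs used as a monotone interpolation schedule on [0, 1]. Below is a minimal NumPy sketch under that assumption; the names `kumaraswamy_cdf` and `kmm_schedule` and the parameters `weights`, `a`, `b` are illustrative, not the authors' code or API.

```python
import numpy as np

def kumaraswamy_cdf(t, a, b):
    # Kumaraswamy(a, b) CDF on [0, 1]: F(t) = 1 - (1 - t^a)^b
    return 1.0 - (1.0 - t ** a) ** b

def kmm_schedule(t, weights, a, b):
    # Convex mixture of K Kumaraswamy CDFs: a monotone path alpha(t)
    # with alpha(0) = 0 and alpha(1) = 1 by construction.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize mixture weights
    comps = np.stack([kumaraswamy_cdf(t, ai, bi) for ai, bi in zip(a, b)])
    return w @ comps  # (K,) @ (K, T) -> (T,)

t = np.linspace(0.0, 1.0, 5)
print(kmm_schedule(t, weights=[0.7, 0.3], a=[2.0, 0.5], b=[3.0, 1.5]))
```

Each component's (a, b) pair bends the schedule toward early or late times, so learning the weights and shape parameters is what would make the path data-adaptive; in the paper's framing, the closed-form path-variance expression is the objective those parameters would be trained against.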
Community shame: Not yet rated

📜 Similar Papers

In the same crypt – Machine Learning