Variance Reduced EXTRA and DIGing and Their Optimal Acceleration for Strongly Convex Decentralized Optimization
September 09, 2020 · Declared Dead · Journal of Machine Learning Research
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Huan Li, Zhouchen Lin, Yongchun Fang
arXiv ID
2009.04373
Category
math.OC: Optimization & Control
Cross-listed
cs.DC, cs.LG, math.NA
Citations
27
Venue
Journal of Machine Learning Research
Last Checked
2 months ago
Abstract
We study stochastic decentralized optimization for the problem of training machine learning models with large-scale distributed data. We extend the widely used EXTRA and DIGing methods with variance reduction (VR), and propose two methods: VR-EXTRA and VR-DIGing. The proposed VR-EXTRA requires $O((\kappa_s+n)\log\frac{1}{\epsilon})$ stochastic gradient evaluations and $O((\kappa_b+\kappa_c)\log\frac{1}{\epsilon})$ communication rounds to reach precision $\epsilon$, which are the best complexities among non-accelerated gradient-type methods, where $\kappa_s$ and $\kappa_b$ are the stochastic condition number and batch condition number for strongly convex and smooth problems, respectively, $\kappa_c$ is the condition number of the communication network, and $n$ is the sample size on each distributed node. The proposed VR-DIGing has a slightly higher communication cost of $O((\kappa_b+\kappa_c^2)\log\frac{1}{\epsilon})$. Our stochastic gradient computation complexities are the same as those of single-machine VR methods such as SAG, SAGA, and SVRG, and our communication complexities are the same as those of EXTRA and DIGing, respectively. To further speed up convergence, we also propose the accelerated VR-EXTRA and VR-DIGing with both the optimal $O((\sqrt{n\kappa_s}+n)\log\frac{1}{\epsilon})$ stochastic gradient computation complexity and the optimal $O(\sqrt{\kappa_b\kappa_c}\log\frac{1}{\epsilon})$ communication complexity. Our stochastic gradient computation complexity matches that of single-machine accelerated VR methods such as Katyusha, and our communication complexity matches that of accelerated full-batch decentralized methods such as MSDA.
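To make the combination of decentralized gradient tracking (as in DIGing) with variance reduction (as in SVRG) concrete, below is a minimal, illustrative Python sketch on a synthetic least-squares problem. It is not the paper's exact VR-DIGing algorithm or parameter setting: the ring mixing matrix, step size, loop lengths, and problem data are assumptions chosen only for the demo.

```python
# Minimal sketch: DIGing-style gradient tracking with an SVRG-style
# variance-reduced gradient estimator on synthetic local least-squares
# objectives. Illustrative only; not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

m, n, d = 5, 50, 10            # nodes, samples per node, dimension (assumed)
A = rng.normal(size=(m, n, d))  # local data matrices
b = rng.normal(size=(m, n))     # local targets
eta, T, inner = 0.01, 50, n     # step size, outer epochs, inner loop (assumed)

# Doubly stochastic mixing matrix for a ring network (illustrative choice).
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

def grad(i, j, x):
    """Gradient of the j-th least-squares sample on node i."""
    return (A[i, j] @ x - b[i, j]) * A[i, j]

def full_grad(i, x):
    """Full local batch gradient on node i."""
    return np.mean([grad(i, j, x) for j in range(n)], axis=0)

x = np.zeros((m, d))                                   # local iterates
g = np.array([full_grad(i, x[i]) for i in range(m)])   # gradient trackers
y = g.copy()                                           # last local estimators

for t in range(T):
    # SVRG-style snapshot: full local gradients at the current iterates.
    snap_x = x.copy()
    snap_g = np.array([full_grad(i, snap_x[i]) for i in range(m)])
    for _ in range(inner):
        x_new = W @ x - eta * g                        # consensus + tracked step
        y_new = np.empty_like(y)
        for i in range(m):
            j = rng.integers(n)                        # one stochastic sample
            y_new[i] = grad(i, j, x_new[i]) - grad(i, j, snap_x[i]) + snap_g[i]
        g = W @ g + y_new - y                          # gradient-tracking update
        x, y = x_new, y_new

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
```

Each inner step costs one stochastic gradient per node plus one round of neighbor communication, which is the trade-off the abstract's complexity bounds quantify.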
Similar Papers
In the same crypt · Optimization & Control
Local SGD Converges Fast and Communicates Little
On Lazy Training in Differentiable Programming
A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications
Learned Primal-dual Reconstruction
On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Died the same way · Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System