Variance Reduced EXTRA and DIGing and Their Optimal Acceleration for Strongly Convex Decentralized Optimization

September 09, 2020 · Declared Dead · 🏛 Journal of Machine Learning Research

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Huan Li, Zhouchen Lin, Yongchun Fang
arXiv ID: 2009.04373
Category: math.OC (Optimization and Control)
Cross-listed: cs.DC, cs.LG, math.NA
Citations: 27
Venue: Journal of Machine Learning Research
Last checked: 2 months ago
Abstract
We study stochastic decentralized optimization for the problem of training machine learning models with large-scale distributed data. We extend the widely used EXTRA and DIGing methods with variance reduction (VR), and propose two methods: VR-EXTRA and VR-DIGing. The proposed VR-EXTRA requires $O((\kappa_s+n)\log\frac{1}{\varepsilon})$ stochastic gradient evaluations and $O((\kappa_b+\kappa_c)\log\frac{1}{\varepsilon})$ communication rounds to reach precision $\varepsilon$, which are the best complexities among non-accelerated gradient-type methods, where $\kappa_s$ and $\kappa_b$ are the stochastic condition number and batch condition number for strongly convex and smooth problems, respectively, $\kappa_c$ is the condition number of the communication network, and $n$ is the sample size on each distributed node. The proposed VR-DIGing has a slightly higher communication cost of $O((\kappa_b+\kappa_c^2)\log\frac{1}{\varepsilon})$. Our stochastic gradient computation complexities are the same as those of single-machine VR methods, such as SAG, SAGA, and SVRG, and our communication complexities match those of EXTRA and DIGing, respectively. To further accelerate convergence, we also propose accelerated VR-EXTRA and VR-DIGing with both the optimal $O((\sqrt{n\kappa_s}+n)\log\frac{1}{\varepsilon})$ stochastic gradient computation complexity and the optimal $O(\sqrt{\kappa_b\kappa_c}\log\frac{1}{\varepsilon})$ communication complexity. Our stochastic gradient computation complexity is the same as that of single-machine accelerated VR methods, such as Katyusha, and our communication complexity matches that of accelerated full-batch decentralized methods, such as MSDA.
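Since no reference implementation was ever released, here is a minimal sketch of the variance-reduction idea the abstract builds on, in the style of SVRG (which the abstract names as a single-machine baseline). This is illustrative only and is not the paper's VR-EXTRA or VR-DIGing; the function and parameter names (`svrg_epoch`, `grad_full`, `grad_i`, `inner_steps`) are hypothetical.

```python
import numpy as np

def svrg_epoch(w, grad_full, grad_i, n, lr, inner_steps, rng):
    """One epoch of a generic SVRG-style variance-reduced update.
    Illustrative sketch only, not the paper's VR-EXTRA/VR-DIGing.

    grad_full(w) -> full-batch gradient at w
    grad_i(w, i) -> gradient of the i-th sample at w
    n            -> number of samples on the node
    """
    w_snap = w.copy()        # snapshot point for this epoch
    mu = grad_full(w_snap)   # full gradient, computed once per epoch
    for _ in range(inner_steps):
        i = rng.integers(n)  # draw one sample index uniformly
        # Variance-reduced stochastic gradient: unbiased for the full
        # gradient, and its variance shrinks as w and w_snap approach
        # the optimum, which is what yields the log(1/eps) rate.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w = w - lr * g
    return w

# Toy usage on a least-squares problem (also hypothetical):
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_full = lambda w: A.T @ (A @ w - b) / len(b)
grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])
w = np.zeros(5)
for _ in range(20):
    w = svrg_epoch(w, grad_full, grad_i, len(b),
                   lr=0.01, inner_steps=200, rng=rng)
```

In the decentralized setting the paper studies, each node would run an estimator like this on its local samples while EXTRA- or DIGing-style gossip steps enforce consensus across the network; the bounds quoted in the abstract count exactly these two resources, stochastic gradient evaluations and communication rounds, separately.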
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Optimization & Control

Died the same way — 👻 Ghosted