Stochastic Primal-Dual Method for Empirical Risk Minimization with $\mathcal{O}(1)$ Per-Iteration Complexity

November 03, 2018 · Declared Dead · 🏛 Neural Information Processing Systems

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Conghui Tan, Tong Zhang, Shiqian Ma, Ji Liu
arXiv ID: 1811.01182
Category: math.OC (Optimization & Control)
Cross-listed: cs.LG
Citations: 32
Venue: Neural Information Processing Systems
Last Checked: 2 months ago
Abstract
Regularized empirical risk minimization problems with linear predictors appear frequently in machine learning. In this paper, we propose a new stochastic primal-dual method to solve this class of problems. Unlike existing methods, our proposed method requires only O(1) operations per iteration. We also develop a variance-reduced variant of the algorithm that converges linearly. Numerical experiments suggest that our methods are faster than existing ones such as proximal SGD, SVRG, and SAGA on high-dimensional problems.
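To make the O(1)-per-iteration idea concrete: for ERM with a linear predictor, the problem can be written as a saddle point over a primal vector w and dual variables y, and each iteration can sample one data index i and one coordinate j, touching only the single matrix entry A[i, j]. The sketch below is a hypothetical toy instantiation for ridge regression, not the authors' algorithm or code; the function names (`spd_o1_ridge`, `ridge_objective`), step sizes, and the choice of squared loss are all illustrative assumptions.

```python
import numpy as np

def spd_o1_ridge(A, b, lam=0.1, eta=0.02, tau=0.05, iters=200000, seed=0):
    """Toy stochastic primal-dual loop for ridge regression in saddle form:

        min_w max_y (1/n) sum_i [ y_i <a_i, w> - y_i b_i - y_i^2 / 2 ]
                    + (lam/2) ||w||^2

    Each iteration samples one row i and one coordinate j and reads only
    the single entry A[i, j], so the per-iteration cost is O(1).
    (Illustrative sketch only, not the paper's SPD method.)
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    y = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)
        j = rng.integers(d)
        # Primal coordinate step: y[i] * A[i, j] + lam * w[j] is an
        # unbiased estimate of the j-th partial derivative in w.
        w[j] -= eta * (y[i] * A[i, j] + lam * w[j])
        # Dual coordinate step: the inner product <a_i, w> is estimated
        # by d * A[i, j] * w[j] (unbiased since j is uniform over d).
        y[i] += tau * (d * A[i, j] * w[j] - b[i] - y[i])
    return w

def ridge_objective(A, b, w, lam):
    """Primal ridge objective, used only to check progress."""
    r = A @ w - b
    return 0.5 * np.mean(r**2) + 0.5 * lam * (w @ w)
```

With small step sizes the dual update is a damped tracker of the residual a_i·w − b_i, so the iterates drift toward the ridge solution even though every step reads a single matrix entry; a variance-reduced variant (as the abstract describes) would replace these noisy estimates with periodically recentered ones to obtain linear convergence.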
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Optimization & Control

Died the same way — 👻 Ghosted