Variance-Reduced and Projection-Free Stochastic Optimization

February 05, 2016 · Declared Dead · 🏛 International Conference on Machine Learning

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Elad Hazan, Haipeng Luo
arXiv ID: 1602.02101
Category: cs.LG (Machine Learning)
Citations: 173
Venue: International Conference on Machine Learning
Last Checked: 2 months ago
Abstract
The Frank-Wolfe optimization algorithm has recently regained popularity for machine learning applications due to its projection-free property and its ability to handle structured constraints. However, in the stochastic learning setting, it is still relatively understudied compared to the gradient descent counterpart. In this work, leveraging a recent variance reduction technique, we propose two stochastic Frank-Wolfe variants which substantially improve previous results in terms of the number of stochastic gradient evaluations needed to achieve $1-\epsilon$ accuracy. For example, we improve from $O(\frac{1}{\epsilon})$ to $O(\ln\frac{1}{\epsilon})$ if the objective function is smooth and strongly convex, and from $O(\frac{1}{\epsilon^2})$ to $O(\frac{1}{\epsilon^{1.5}})$ if the objective function is smooth and Lipschitz. The theoretical improvement is also observed in experiments on real-world datasets for a multiclass classification application.
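Since no official code was ever released, here is a minimal sketch of the general idea the abstract describes: a Frank-Wolfe loop whose stochastic gradient is corrected SVRG-style against a periodic full-gradient snapshot, so each step needs only a linear minimization oracle (no projection). This is an illustrative reconstruction, not the authors' algorithm: the least-squares objective, the L1-ball constraint, the `lmo_l1` helper, and all parameter choices (`T`, `m`, `batch`, the `2/(k+2)`-style step size) are our assumptions for the example.

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    # Linear minimization oracle for the L1 ball:
    # argmin_{||v||_1 <= radius} <g, v> is a signed coordinate vertex.
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def svrf(A, b, T=30, m=10, batch=5, radius=1.0, rng=None):
    """Illustrative variance-reduced Frank-Wolfe for the least-squares
    objective f(x) = (1/2n) ||Ax - b||^2 over the L1 ball (a sketch,
    not the paper's exact algorithm)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = A.shape
    x = np.zeros(d)
    for t in range(T):
        snap = x.copy()
        full_grad = A.T @ (A @ snap - b) / n   # full gradient at snapshot
        for k in range(m):
            idx = rng.integers(0, n, size=batch)
            Ai, bi = A[idx], b[idx]
            # SVRG-style estimator: minibatch gradient, corrected by the
            # same minibatch's gradient at the snapshot plus the full gradient.
            g = (Ai.T @ (Ai @ x - bi)
                 - Ai.T @ (Ai @ snap - bi)) / batch + full_grad
            v = lmo_l1(g, radius)              # projection-free step direction
            gamma = 2.0 / (t * m + k + 2)      # standard Frank-Wolfe step size
            x = (1 - gamma) * x + gamma * v    # convex combination stays feasible
    return x
```

Because every iterate is a convex combination of points in the L1 ball, the constraint is maintained automatically, which is exactly the projection-free appeal the abstract highlights.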
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt - Machine Learning

Died the same way - 👻 Ghosted