High-Dimensional Robust Mean Estimation in Nearly-Linear Time

November 23, 2018 · Declared Dead · 🏛 ACM-SIAM Symposium on Discrete Algorithms

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Yu Cheng, Ilias Diakonikolas, Rong Ge
arXiv ID: 1811.09380
Category: cs.LG (Machine Learning)
Cross-listed: cs.DS, math.ST, stat.ML
Citations: 128
Venue: ACM-SIAM Symposium on Discrete Algorithms
Last checked: 1 month ago
Abstract
We study the fundamental problem of high-dimensional mean estimation in a robust model where a constant fraction of the samples are adversarially corrupted. Recent work gave the first polynomial-time algorithms for this problem with dimension-independent error guarantees for several families of structured distributions. In this work, we give the first nearly-linear time algorithms for high-dimensional robust mean estimation. Specifically, we focus on distributions with (i) known covariance and sub-gaussian tails, and (ii) unknown bounded covariance. Given $N$ samples in $\mathbb{R}^d$, an $ε$-fraction of which may be arbitrarily corrupted, our algorithms run in time $\tilde{O}(Nd) / \mathrm{poly}(ε)$ and approximate the true mean within the information-theoretically optimal error, up to constant factors. Previous robust algorithms with comparable error guarantees have running times $\tilde{Ω}(N d^2)$, for $ε = Ω(1)$. Our algorithms rely on a natural family of SDPs parameterized by our current guess $ν$ for the unknown mean $μ^\star$. We give a win-win analysis establishing the following: either a near-optimal solution to the primal SDP yields a good candidate for $μ^\star$ -- independent of our current guess $ν$ -- or the dual SDP yields a new guess $ν'$ whose distance from $μ^\star$ is smaller by a constant factor. We exploit the special structure of the corresponding SDPs to show that they are approximately solvable in nearly-linear time. Our approach is quite general, and we believe it can also be applied to obtain nearly-linear time algorithms for other high-dimensional robust learning problems.
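Since no implementation was released, here is a hedged illustration of the corruption model the abstract describes. This is not the authors' SDP-based nearly-linear-time algorithm; it is the much simpler classical spectral "filter" heuristic, and the stopping threshold and per-round removal fraction below are ad hoc assumptions chosen for the demo.

```python
# Minimal sketch of robust mean estimation under eps-corruption:
# N samples in R^d, an eps-fraction adversarially corrupted.
# NOT the paper's SDP method -- a simple spectral filter baseline;
# the threshold (1 + 3*eps) and removal fraction are assumptions.
import numpy as np

def filter_mean(X, eps, max_rounds=20):
    """Repeatedly prune points with large projections onto the top
    eigenvector of the empirical covariance, then return the mean."""
    X = X.copy()
    for _ in range(max_rounds):
        mu = X.mean(axis=0)
        C = (X - mu).T @ (X - mu) / len(X)
        vals, vecs = np.linalg.eigh(C)          # eigenvalues ascending
        top_val, top_vec = vals[-1], vecs[:, -1]
        # For N(mu, I) data the top eigenvalue is near 1; a large value
        # signals that corruptions are inflating one direction.
        if top_val <= 1 + 3 * eps:
            break
        scores = ((X - mu) @ top_vec) ** 2
        k = max(1, int(eps * len(X) / 4))        # prune a small fraction
        X = X[np.argsort(scores)[:-k]]           # drop the k largest
    return X.mean(axis=0)

rng = np.random.default_rng(0)
d, N, eps = 50, 5000, 0.1
X = rng.normal(size=(N, d))                      # true mean = 0
n_bad = int(eps * N)
X[:n_bad] = 5.0 + 0.1 * rng.normal(size=(n_bad, d))  # planted corruption

naive_err = np.linalg.norm(X.mean(axis=0))       # roughly eps*5*sqrt(d)
robust_err = np.linalg.norm(filter_mean(X, eps))
print(f"naive error  {naive_err:.2f}")
print(f"robust error {robust_err:.2f}")
```

The naive mean is dragged a constant distance per coordinate by the planted cluster, while the filter removes the corrupted points (their projections onto the inflated direction are far larger than clean ones) and recovers the mean to within sampling error.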
