Optimal Mean Estimation without a Variance
November 24, 2020 · Declared Dead · Annual Conference Computational Learning Theory
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Yeshwanth Cherapanamjeri, Nilesh Tripuraneni, Peter L. Bartlett, Michael I. Jordan
arXiv ID
2011.12433
Category
math.ST
Cross-listed
cs.DS, cs.LG, stat.ML
Citations
24
Venue
Annual Conference Computational Learning Theory
Last Checked
2 months ago
Abstract
We study the problem of heavy-tailed mean estimation in settings where the variance of the data-generating distribution does not exist. Concretely, given a sample $\mathbf{X} = \{X_i\}_{i = 1}^n$ from a distribution $\mathcal{D}$ over $\mathbb{R}^d$ with mean $\mu$ which satisfies the following \emph{weak-moment} assumption for some $\alpha \in [0, 1]$: \begin{equation*} \forall \|v\| = 1: \mathbb{E}_{X \thicksim \mathcal{D}}[\lvert \langle X - \mu, v\rangle \rvert^{1 + \alpha}] \leq 1, \end{equation*} and given a target failure probability, $\delta$, our goal is to design an estimator which attains the smallest possible confidence interval as a function of $n, d, \delta$. For the specific case of $\alpha = 1$, foundational work of Lugosi and Mendelson exhibits an estimator achieving subgaussian confidence intervals, and subsequent work has led to computationally efficient versions of this estimator. Here, we study the case of general $\alpha$, and establish the following information-theoretic lower bound on the optimal attainable confidence interval: \begin{equation*} \Omega\left(\sqrt{\frac{d}{n}} + \left(\frac{d}{n}\right)^{\frac{\alpha}{1 + \alpha}} + \left(\frac{\log 1/\delta}{n}\right)^{\frac{\alpha}{1 + \alpha}}\right). \end{equation*} Moreover, we devise a computationally efficient estimator which achieves this lower bound.
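The sub-Gaussian estimators referenced above come from the median-of-means family. As a rough illustration of the problem setting, and emphatically not the paper's optimal estimator, here is a minimal coordinatewise median-of-means baseline in Python alongside the abstract's lower-bound rate. The function names, the Lomax/Pareto test distribution, and the bucket count k ≈ log(1/δ) are illustrative assumptions, and the test distribution is not normalized to the abstract's unit weak-moment bound.

```python
import numpy as np

def median_of_means(X, k):
    # Split the n x d sample into k buckets, average each bucket, and
    # return the coordinatewise median of the bucket means. This is the
    # classical heavy-tailed baseline, not the paper's optimal estimator.
    n, d = X.shape
    m = n // k                                   # bucket size; remainder dropped
    bucket_means = X[: m * k].reshape(k, m, d).mean(axis=1)
    return np.median(bucket_means, axis=0)

def lower_bound_rate(n, d, delta, alpha):
    # The abstract's rate, up to constants:
    # sqrt(d/n) + (d/n)^(a/(1+a)) + (log(1/delta)/n)^(a/(1+a)).
    e = alpha / (1 + alpha)
    return np.sqrt(d / n) + (d / n) ** e + (np.log(1 / delta) / n) ** e

# Illustrative heavy-tailed data: Lomax tails with shape 1 + alpha + 0.05,
# so moments of order 1 + alpha are finite but the variance is not
# (for alpha < 1). The true coordinatewise mean is 1 / (shape - 1).
rng = np.random.default_rng(0)
n, d, alpha, delta = 20_000, 5, 0.5, 0.01
shape = 1 + alpha + 0.05
X = rng.pareto(shape, size=(n, d))
mu = np.full(d, 1.0 / (shape - 1.0))
k = max(1, int(np.ceil(8 * np.log(1.0 / delta))))   # ~ log(1/delta) buckets
err = np.linalg.norm(median_of_means(X, k) - mu)
print(f"l2 error: {err:.3f}  |  lower-bound rate: {lower_bound_rate(n, d, delta, alpha):.3f}")
```

Coordinatewise median-of-means is known to have suboptimal dimension dependence for small δ even when α = 1; achieving the rate above with a polynomial-time estimator for general α is precisely the gap the paper closes.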
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · math.ST

An introduction to Topological Data Analysis: fundamental and practical aspects for data scientists · R.I.P. 👻 Ghosted
Minimax Optimal Procedures for Locally Private Estimation · R.I.P. 👻 Ghosted
Optimal Best Arm Identification with Fixed Confidence · R.I.P. 👻 Ghosted
Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees · R.I.P. 👻 Ghosted
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient · R.I.P. 👻 Ghosted

Died the same way · 👻 Ghosted

Language Models are Few-Shot Learners · R.I.P. 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · R.I.P. 👻 Ghosted
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted