High-Dimensional Estimation of Structured Signals from Non-Linear Observations with General Convex Loss Functions
February 10, 2016 · Declared Dead · IEEE Transactions on Information Theory
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Martin Genzel
arXiv ID
1602.03436
Category
math.ST
Cross-listed
cs.IT
Citations
45
Venue
IEEE Transactions on Information Theory
Last Checked
2 months ago
Abstract
In this paper, we study the problem of estimating a structured signal $x_0 \in \mathbb{R}^n$ from non-linear and noisy Gaussian observations. Assuming that $x_0$ is contained in a certain convex subset $K \subset \mathbb{R}^n$, we prove that accurate recovery is already feasible if the number of observations exceeds the effective dimension of $K$, a common measure for the complexity of signal classes. It turns out that the possibly unknown non-linearity of our model affects the error rate only by a multiplicative constant. This builds on recent work of Plan and Vershynin, who suggested treating the non-linearity as noise that perturbs a linear measurement process. Using the concept of restricted strong convexity, we show that their results for the generalized Lasso extend to a fairly large class of convex loss functions. Moreover, we allow for the presence of adversarial noise, so that even deterministic model inaccuracies can be handled. These generalizations give further evidence of why many standard estimators perform surprisingly well in practice, although they do not rely on any knowledge of the underlying output rule. Altogether, our results provide a unified and general framework for signal reconstruction in high dimensions, covering various challenges from the fields of compressed sensing, signal processing, and statistical learning.
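The recovery scheme the abstract describes can be sketched in a few lines. The sketch below is illustrative only, not the paper's code: the problem sizes, the 1-bit observation model $y_i = \mathrm{sign}(\langle a_i, x_0 \rangle)$, and the choice of a scaled $\ell_1$-ball as the convex constraint set $K$ are all assumptions made here. It solves the generalized Lasso of Plan and Vershynin, a least-squares fit of the *linear* model over $K$, by projected gradient descent; despite the unknown non-linearity, the estimate recovers the direction of $x_0$ up to the scaling $\mu = \mathbb{E}[f(g)g]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: an s-sparse unit-norm signal in R^n and
# m one-bit Gaussian observations y_i = sign(<a_i, x0>).
n, m, s = 200, 150, 5
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x0 /= np.linalg.norm(x0)

A = rng.standard_normal((m, n))   # Gaussian measurement matrix
y = np.sign(A @ x0)               # non-linear output rule f = sign

def project_l1(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * k > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Generalized Lasso: minimize ||y - A x||_2^2 over a scaled l1-ball K,
# via projected gradient descent.  The non-linearity only rescales the
# target by mu = E[f(g) g]; for f = sign, mu = sqrt(2 / pi).
mu = np.sqrt(2.0 / np.pi)
radius = mu * np.sqrt(s)          # ||mu * x0||_1 <= mu * sqrt(s) * ||x0||_2
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = np.zeros(n)
for _ in range(300):
    x_hat = project_l1(x_hat - step * (A.T @ (A @ x_hat - y)), radius)

# Measure directional recovery: cosine similarity between x_hat and x0.
cosine = abs(x_hat @ x0) / (np.linalg.norm(x_hat) * np.linalg.norm(x0))
```

Note that the solver never sees the output rule `sign`; it fits a purely linear model, which is exactly the point of the abstract's claim that the non-linearity only costs a multiplicative constant.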
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · math.ST
An introduction to Topological Data Analysis: fundamental and practical aspects for data scientists
Minimax Optimal Procedures for Locally Private Estimation
Optimal Best Arm Identification with Fixed Confidence
Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
Died the same way · Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System