The sample complexity of weighted sparse approximation
July 24, 2015 · Declared Dead · IEEE Transactions on Signal Processing
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Bubacarr Bah, Rachel Ward
arXiv ID
1507.06736
Category
math.NA: Numerical Analysis
Cross-listed
cs.CC, cs.IT, math.FA, stat.CO
Citations
17
Venue
IEEE Transactions on Signal Processing
Last Checked
2 months ago
Abstract
For Gaussian sampling matrices, we provide bounds on the minimal number of measurements $m$ required to achieve robust weighted sparse recovery guarantees in terms of how well a given prior model for the sparsity support aligns with the true underlying support. Our main contribution is that for a sparse vector ${\bf x} \in \mathbb{R}^N$ supported on an unknown set $\mathcal{S} \subset \{1, \dots, N\}$ with $|\mathcal{S}|\leq k$, if $\mathcal{S}$ has \emph{weighted cardinality} $\omega(\mathcal{S}) := \sum_{j \in \mathcal{S}} \omega_j^2$, and if the weights on $\mathcal{S}^c$ exhibit mild growth, $\omega_j^2 \geq \gamma\log(j/\omega(\mathcal{S}))$ for $j\in\mathcal{S}^c$ and $\gamma > 0$, then the sample complexity for sparse recovery via weighted $\ell_1$-minimization using weights $\omega_j$ is linear in the weighted sparsity level: $m = \mathcal{O}(\omega(\mathcal{S})/\gamma)$. This main result generalizes several special cases, including a) the standard sparse recovery setting, where all weights $\omega_j \equiv 1$ and $m = \mathcal{O}\left(k\log\left(N/k\right)\right)$; b) the setting where the support is known a priori, and $m = \mathcal{O}(k)$; and c) the setting of sparse recovery with prior information, where $m$ depends on how well the weights are aligned with the support set $\mathcal{S}$. We further extend the results in case c) to the setting of additive noise. Our results are {\em nonuniform}, that is, they apply to a fixed support, unknown a priori, and the weights on $\mathcal{S}$ do not all have to be smaller than the weights on $\mathcal{S}^c$ for our recovery results to hold.
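To make the recovery guarantee concrete, here is a minimal sketch of weighted $\ell_1$-minimization with Gaussian measurements, assuming the cvxpy convex-optimization package. The dimensions, support set, weights, and the constant in front of the weighted cardinality are illustrative choices, not values taken from the paper.

```python
# A minimal sketch of weighted l1-minimization recovery; all concrete
# numbers below (N, k, S, the weight profile, the factor 8) are
# hypothetical illustrations, not values from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

N, k = 200, 5                        # ambient dimension, sparsity level
S = np.array([3, 10, 25, 60, 150])   # unknown support, |S| <= k

# Weights omega_j encoding a prior: small on the expected support,
# growing like sqrt(log j) off it, mirroring the growth condition
# omega_j^2 >= gamma * log(j / omega(S)).
omega = np.sqrt(np.log(np.arange(2, N + 2)))
omega[S] = 1.0

# Weighted cardinality omega(S) = sum_{j in S} omega_j^2
weighted_card = np.sum(omega[S] ** 2)

# Sample complexity heuristic m = O(omega(S)/gamma); constant chosen ad hoc
m = int(8 * weighted_card)

x_true = np.zeros(N)
x_true[S] = rng.standard_normal(k)

A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian sampling matrix
y = A @ x_true

# Weighted l1-minimization: minimize ||omega .* x||_1 subject to A x = y
x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(omega, x))),
                  [A @ x == y])
prob.solve()

print("weighted cardinality omega(S):", weighted_card)
print("measurements m:", m)
print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

When the weights are this well aligned with the true support, a number of measurements on the order of the weighted cardinality tends to suffice in such experiments, as opposed to the $k\log(N/k)$ scaling of unweighted $\ell_1$-minimization.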
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Numerical Analysis (all 👻 Ghosted)
Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
PDE-Net: Learning PDEs from Data
Efficient tensor completion for color image and video recovery: Low-rank tensor train
Tensor Ring Decomposition
Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations
Died the same way · 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System