Probability error bounds for approximation of functions in reproducing kernel Hilbert spaces
March 28, 2020 · Declared Dead · Journal of Function Spaces
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Ata Deniz Aydin, Aurelian Gheondea
arXiv ID
2003.12801
Category
math.NA: Numerical Analysis
Cross-listed
cs.IT, math.FA
Citations
2
Venue
Journal of Function Spaces
Last Checked
2 months ago
Abstract
We find probability error bounds for approximations of functions $f$ in a separable reproducing kernel Hilbert space $\mathcal{H}$ with reproducing kernel $K$ on a base space $X$, firstly in terms of finite linear combinations of functions of type $K_{x_i}$ and then in terms of the projection $\pi^n_x$ on $\mathrm{Span}\{K_{x_i}\}_{i=1}^n$, for random sequences of points $x=(x_i)_i$ in $X$. Given a probability measure $P$, letting $P_K$ be the measure defined by $\mathrm{d}P_K(x)=K(x,x)\,\mathrm{d}P(x)$, $x\in X$, our approach is based on the nonexpansive operator \[L^2(X;P_K)\ni\lambda\mapsto L_{P,K}\lambda:=\int_X \lambda(x)K_x\,\mathrm{d}P(x)\in \mathcal{H},\] where the integral exists in the Bochner sense. Using this operator, we then define a new reproducing kernel Hilbert space, denoted by $\mathcal{H}_P$, that is the operator range of $L_{P,K}$. Our main result establishes bounds, in terms of the operator $L_{P,K}$, on the probability that the Hilbert space distance between an arbitrary function $f\in\mathcal{H}$ and linear combinations of functions of type $K_{x_i}$, for $(x_i)_i$ sampled independently from $P$, falls below a given threshold. For sequences of points $(x_i)_{i=1}^\infty$ constituting a so-called uniqueness set, the orthogonal projections $\pi^n_x$ onto $\mathrm{Span}\{K_{x_i}\}_{i=1}^n$ converge in the strong operator topology to the identity operator. We prove that, under the assumption that $\mathcal{H}_P$ is dense in $\mathcal{H}$, any sequence of i.i.d. samples from $P$ yields a uniqueness set with probability $1$. This result improves on previous error bounds in weaker norms, such as uniform or $L^p$ norms, which yield only convergence in probability rather than almost sure convergence. Two examples illustrating the applicability of this result, to the uniform distribution on a compact interval and to the Hardy space $H^2(\mathbb{D})$, are presented as well.
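The approximation scheme the abstract describes — projecting a function $f\in\mathcal{H}$ onto $\mathrm{Span}\{K_{x_i}\}_{i=1}^n$ for points $x_i$ sampled i.i.d. from $P$ — can be illustrated with a small numerical sketch. This is our own illustration, not code from the paper: it assumes a Gaussian kernel on $X=[0,1]$, takes $P$ uniform, and picks a target $f$ inside the RKHS as a fixed combination of kernel sections; the kernel width `sigma` and all names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def K(x, y, sigma=0.2):
    """Gaussian reproducing kernel K(x, y) = exp(-(x - y)^2 / (2 sigma^2))."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# Target f chosen inside the RKHS: a fixed finite combination of kernel sections.
centers = np.array([0.2, 0.5, 0.8])
coeffs = np.array([1.0, -0.5, 0.7])
f = lambda t: K(t[:, None], centers[None, :]) @ coeffs

t_grid = np.linspace(0, 1, 400)  # evaluation grid for a surrogate sup-norm error

errors = {}
for n in [5, 20, 80]:
    x = rng.uniform(0, 1, size=n)                   # i.i.d. samples from P
    G = K(x[:, None], x[None, :])                   # Gram matrix (K(x_i, x_j))
    b = K(x[:, None], centers[None, :]) @ coeffs    # values f(x_i) = <f, K_{x_i}>
    c = np.linalg.lstsq(G, b, rcond=None)[0]        # coefficients of the projection
    approx = K(t_grid[:, None], x[None, :]) @ c     # projected approximant on the grid
    errors[n] = np.max(np.abs(f(t_grid) - approx))  # sup-norm error on the grid
    print(f"n = {n:3d}, sup-norm error ~ {errors[n]:.2e}")
```

As the density result in the abstract suggests, the error shrinks as more i.i.d. sample points are drawn; the least-squares solve is used here because the Gram matrix of a Gaussian kernel is numerically ill-conditioned for larger $n$.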
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt — Numerical Analysis
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations — R.I.P. 👻 Ghosted
- PDE-Net: Learning PDEs from Data — R.I.P. 👻 Ghosted
- Efficient tensor completion for color image and video recovery: Low-rank tensor train — R.I.P. 👻 Ghosted
- Tensor Ring Decomposition — R.I.P. 👻 Ghosted
- Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations — R.I.P. 👻 Ghosted
Died the same way — 👻 Ghosted
- Language Models are Few-Shot Learners — R.I.P. 👻 Ghosted
- PyTorch: An Imperative Style, High-Performance Deep Learning Library — R.I.P. 👻 Ghosted
- XGBoost: A Scalable Tree Boosting System — R.I.P. 👻 Ghosted