Probability error bounds for approximation of functions in reproducing kernel Hilbert spaces

March 28, 2020 · Declared Dead · 🏛 Journal of Function Spaces

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Ata Deniz Aydin, Aurelian Gheondea
arXiv ID: 2003.12801
Category: math.NA (Numerical Analysis)
Cross-listed: cs.IT, math.FA
Citations: 2
Venue: Journal of Function Spaces
Last checked: 2 months ago
Abstract
We find probability error bounds for approximations of functions $f$ in a separable reproducing kernel Hilbert space $\mathcal{H}$ with reproducing kernel $K$ on a base space $X$, first in terms of finite linear combinations of functions of type $K_{x_i}$ and then in terms of the projection $\pi^n_x$ onto $\mathrm{Span}\{K_{x_i}\}^n_{i=1}$, for random sequences of points $x=(x_i)_i$ in $X$. Given a probability measure $P$, letting $P_K$ be the measure defined by $\mathrm{d}P_K(x)=K(x,x)\,\mathrm{d}P(x)$, $x\in X$, our approach is based on the nonexpansive operator \[L^2(X;P_K)\ni \lambda\mapsto L_{P,K}\lambda:=\int_X \lambda(x)K_x\,\mathrm{d}P(x)\in \mathcal{H},\] where the integral exists in the Bochner sense. Using this operator, we then define a new reproducing kernel Hilbert space, denoted by $\mathcal{H}_P$, that is the operator range of $L_{P,K}$. Our main result establishes bounds, in terms of the operator $L_{P,K}$, on the probability that the Hilbert space distance between an arbitrary function $f\in\mathcal{H}$ and linear combinations of functions of type $K_{x_i}$, for $(x_i)_i$ sampled independently from $P$, falls below a given threshold. For sequences of points $(x_i)_{i=1}^\infty$ constituting a so-called uniqueness set, the orthogonal projections $\pi^n_x$ onto $\mathrm{Span}\{K_{x_i}\}^n_{i=1}$ converge in the strong operator topology to the identity operator. We prove that, under the assumption that $\mathcal{H}_P$ is dense in $\mathcal{H}$, any sequence of i.i.d. samples from $P$ yields a uniqueness set with probability $1$. This result improves on previous error bounds in weaker norms, such as uniform or $L^p$ norms, which yield only convergence in probability and not almost sure convergence. Two examples, a uniform distribution on a compact interval and the Hardy space $H^2(\mathbb{D})$, illustrate the applicability of this result.
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Numerical Analysis

R.I.P. 👻 Ghosted

Tensor Ring Decomposition

Qibin Zhao, Guoxu Zhou, ... (+3 more)

math.NA ๐Ÿ› arXiv ๐Ÿ“š 427 cites 9 years ago

Died the same way — 👻 Ghosted