Approximation Bounds for Random Neural Networks and Reservoir Systems
February 14, 2020 · Declared Dead · The Annals of Applied Probability
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors: Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
arXiv ID: 2002.05933
Category: math.PR
Cross-listed: cs.NE, eess.SY
Citations: 81
Venue: The Annals of Applied Probability
Last Checked: 2 months ago
Abstract
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights. These methods, in which only the last layer of weights and a few hyperparameters are optimized, have been successfully applied in a wide range of static and dynamic learning problems. Despite the popularity of this approach in empirical tasks, important theoretical questions regarding the relation between the unknown function, the weight distribution, and the approximation rate have remained open. In this work it is proved that, as long as the unknown function, functional, or dynamical system is sufficiently regular, it is possible to draw the internal weights of the random (recurrent) neural network from a generic distribution (not depending on the unknown object) and quantify the error in terms of the number of neurons and the hyperparameters. In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well and thus provides the first mathematical explanation for their empirically observed success at learning dynamical systems.
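To make the setup concrete, below is a minimal sketch (Python/NumPy, not taken from the paper) of the kind of random recurrent network the abstract describes: an echo state network whose internal weights and biases are drawn once from generic distributions and never trained, while only the linear readout is fitted by ridge-regularized least squares. The reservoir size, weight distributions, spectral-radius rescaling, and toy target map below are illustrative assumptions, not the paper's construction.

import numpy as np

rng = np.random.default_rng(0)

# Internal weights are randomly generated once and never trained.
n_neurons, n_inputs = 200, 1
W = rng.normal(0.0, 1.0, (n_neurons, n_neurons))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # common echo-state heuristic: spectral radius < 1
W_in = rng.uniform(-0.5, 0.5, (n_neurons, n_inputs))
b = rng.uniform(-0.5, 0.5, n_neurons)

def run_reservoir(inputs):
    """Drive the random recurrent network with an input sequence; return all states."""
    x = np.zeros(n_neurons)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u) + b)
        states.append(x)
    return np.array(states)

# Toy target: a dynamical system whose output depends on current and past inputs.
T = 1000
u = rng.uniform(-1.0, 1.0, T)
y = np.sin(np.pi * u[:-1]) * u[1:]

# States aligned so that row t has been driven by inputs u[0..t+1].
X = run_reservoir(u)[1:]

# Only the last layer of weights is optimized: ridge-regularized least squares.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_neurons), X.T @ y)
print("train RMSE:", np.sqrt(np.mean((y - X @ W_out) ** 2)))

The spectral-radius rescaling is a standard practitioner's heuristic for the echo state property; the paper's contribution is to replace such heuristics with quantitative error bounds in terms of the number of neurons and the hyperparameters.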
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · math.PR
An Introduction to Matrix Concentration Inequalities · R.I.P. 👻 Ghosted
Non-backtracking spectrum of random graphs: community detection and non-regular Ramanujan graphs · R.I.P. 👻 Ghosted
Convergence of the Deep BSDE Method for Coupled FBSDEs · R.I.P. 👻 Ghosted
A Random Matrix Approach to Neural Networks · R.I.P. 👻 Ghosted
Concentration and regularization of random graphs · R.I.P. 👻 Ghosted
Died the same way · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · R.I.P. 👻 Ghosted
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted