Improved universal approximation with neural networks studied via affine-invariant subspaces of $L_2(\mathbb{R}^n)$

April 03, 2025 · Declared Dead · 🏛 Electronic Journal of Applied Mathematics

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Cornelia Schneider, Samuel Probst
arXiv ID: 2504.02445
Category: math.FA
Cross-listed: cs.IT
Citations: 0
Venue: Electronic Journal of Applied Mathematics
Last checked: 2 months ago
Abstract
We show that there are no non-trivial closed subspaces of $L_2(\mathbb{R}^n)$ that are invariant under invertible affine transformations. We apply this result to neural networks, showing that any nonzero $L_2(\mathbb{R})$ function is an adequate activation function for a one-hidden-layer neural network to approximate every function in $L_2(\mathbb{R})$ to any desired accuracy. This generalizes the universal approximation properties of neural networks in $L_2(\mathbb{R})$ related to Wiener's Tauberian Theorems. Our results extend to the spaces $L_p(\mathbb{R})$ with $p>1$.
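Since no code accompanies the paper, here is a minimal numerical sketch of the claim (not the authors' method): a one-hidden-layer network whose units are affine reparametrizations of a single nonzero $L_2(\mathbb{R})$ activation, fit to a target by least squares. The specific bump activation, the random $(a_k, b_k)$ grid, and the unit count are illustrative assumptions, not from the paper.

```python
import numpy as np

# Toy illustration of the abstract's claim: approximate a target in L2(R) by
#   f(x) ~ sum_k c_k * sigma(a_k * x + b_k),
# where sigma is an arbitrary nonzero L2 activation -- here a bump that is
# neither sigmoidal nor monotone (chosen as an assumption for the demo).

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 2000)
dx = x[1] - x[0]

def sigma(t):
    # Nonzero L2 activation; any such function suffices per the theorem.
    return np.exp(-t**2) * np.sin(3.0 * t)

# Step-like target, square-integrable on the line.
target = np.sign(x) * (np.abs(x) < 2.0)

# Random affine reparametrizations (a_k, b_k) of the single activation.
K = 200
a = rng.uniform(0.5, 3.0, K)
b = rng.uniform(-6.0, 6.0, K)
Phi = sigma(a[None, :] * x[:, None] + b[None, :])  # design matrix (len(x), K)

# Output weights by linear least squares.
c, *_ = np.linalg.lstsq(Phi, target, rcond=None)

# Discrete L2 error on the window; zero coefficients would give error 2.0.
err = np.linalg.norm(Phi @ c - target) * np.sqrt(dx)
print(f"L2 approximation error on [-5, 5]: {err:.3f}")
```

Increasing `K` or widening the range of the dilations `a` should shrink the error, consistent with the density statement in the abstract, though this finite-window experiment is only suggestive.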
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — math.FA

Died the same way — 👻 Ghosted