Nonparametric regression using deep neural networks with ReLU activation function

August 22, 2017 · Declared Dead · 🏛 Annals of Statistics

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Johannes Schmidt-Hieber
arXiv ID: 1708.06633
Category: math.ST
Cross-listed: cs.LG, stat.ML
Citations: 949
Venue: Annals of Statistics
Last Checked: 2 months ago
Abstract
Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with the ReLU activation function and a properly chosen network architecture achieve the minimax rates of convergence (up to $\log n$-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with the number of potential network parameters exceeding the sample size. The analysis gives some insights into why multilayer feedforward neural networks perform well in practice. Interestingly, for the ReLU activation function, the depth (number of layers) of the neural network architectures plays an important role, and our theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural. It is also shown that under the composition assumption, wavelet estimators can only achieve suboptimal rates.
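Since the paper ships no code, here is a minimal NumPy sketch of the kind of estimator the abstract describes: a deep ReLU network with more potential parameters than samples, hard-thresholded so that only about `sparsity` weights stay nonzero (the sparsity tuning parameter), with depth growing logarithmically in the sample size n. Everything here is an illustrative assumption on our part (the function names, the width of 64, the global threshold, the log2(n) depth rule), not the paper's actual construction.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def init_sparse_network(d, width, depth, sparsity, rng):
    """Build a deep ReLU net with `depth` hidden layers of `width` units,
    then zero out all but roughly the `sparsity` largest-magnitude weights,
    mimicking the sparsely connected networks the abstract describes.
    (Hypothetical sketch; the paper specifies no such initialization.)"""
    dims = [d] + [width] * depth + [1]
    weights = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i])
               for i in range(len(dims) - 1)]
    biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]
    # Global hard threshold: keep only the largest entries across all layers
    # (ties may keep slightly more than `sparsity` weights).
    flat = np.concatenate([W.ravel() for W in weights])
    if sparsity < flat.size:
        cutoff = np.sort(np.abs(flat))[-sparsity]
        weights = [np.where(np.abs(W) >= cutoff, W, 0.0) for W in weights]
    return weights, biases

def forward(x, weights, biases):
    """Evaluate the network at a single input x of shape (d,)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]

# Depth scaling with the sample size, as the abstract says is natural for
# ReLU networks; the log2(n) rule below is illustrative, not the theorem's.
n, d = 1000, 5
rng = np.random.default_rng(0)
depth = int(np.ceil(np.log2(n)))  # ~10 hidden layers for n = 1000
weights, biases = init_sparse_network(d, width=64, depth=depth,
                                      sparsity=500, rng=rng)
print(forward(rng.standard_normal(d), weights, biases))
```

Note the design point the abstract emphasizes: the width is generous (the potential parameter count exceeds n), and it is the sparsity budget, not the width, that acts as the tuning parameter; the global hard threshold above is just one crude way to impose such a budget.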
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: math.ST

R.I.P. 👻 Ghosted

Safe Testing

Peter Grünwald, Rianne de Heide, Wouter Koolen

math.ST πŸ› Information Theory and Applications Workshop πŸ“š 267 cites 6 years ago

Died the same way: 👻 Ghosted