Benefits of depth in neural networks
February 14, 2016 · Declared Dead · 🏛 Annual Conference on Computational Learning Theory (COLT)
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Matus Telgarsky
arXiv ID
1602.04485
Category
cs.LG: Machine Learning
Cross-listed
cs.NE, stat.ML
Citations
671
Venue
Annual Conference on Computational Learning Theory (COLT)
Last Checked
2 months ago
Abstract
For any positive integer $k$, there exist neural networks with $\Theta(k^3)$ layers, $\Theta(1)$ nodes per layer, and $\Theta(1)$ distinct parameters which cannot be approximated by networks with $\mathcal{O}(k)$ layers unless they are exponentially large: they must possess $\Omega(2^k)$ nodes. This result is proved here for a class of nodes termed "semi-algebraic gates", which includes the common choices of ReLU, maximum, indicator, and piecewise polynomial functions, therefore establishing benefits of depth against not just standard networks with ReLU gates, but also convolutional networks with ReLU and maximization gates, sum-product networks, and boosted decision trees (in this last case with a stronger separation: $\Omega(2^{k^3})$ total tree nodes are required).
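The separation in the abstract can be made concrete with a small sketch. The paper's argument is built around sawtooth functions: the tent map $t(x) = 2\min(x, 1-x)$ is exactly representable by a constant-size ReLU network, its $k$-fold composition is a depth-$\Theta(k)$ network whose graph has $2^k$ linear pieces, and roughly speaking each layer of $m$ one-dimensional ReLU units can only multiply the piece count by $\mathcal{O}(m)$, so shallow networks need exponential width to keep up. Below is a minimal NumPy sketch of the deep side; the function names and the grid-based piece counter are illustrative choices, not code from the paper.

```python
# Sketch of the deep side of the separation: the tent map
# t(x) = 2*min(x, 1 - x) written as a two-unit ReLU network,
# composed k times to produce a sawtooth with 2^k linear pieces.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # t(x) = 2x on [0, 1/2] and 2 - 2x on [1/2, 1], expressed with
    # two ReLU units: t(x) = 2*relu(x) - 4*relu(x - 0.5).
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, k):
    # k-fold composition: a depth-Theta(k) network with Theta(1)
    # nodes per layer and Theta(1) distinct parameters (2, -4, 0.5).
    for _ in range(k):
        x = tent(x)
    return x

# Count linear pieces by counting slope changes on a fine grid.
k = 6
xs = np.linspace(0.0, 1.0, 200001)
ys = deep_sawtooth(xs, k)
slopes = np.diff(ys) / np.diff(xs)
pieces = 1 + np.sum(np.abs(np.diff(slopes)) > 1e-3)
print(f"k={k}: ~{pieces} linear pieces (expected 2^{k} = {2**k})")
```

Increasing k should show the piece count doubling each time, while the set of distinct weights stays fixed at {2, -4, 0.5}, matching the $\Theta(1)$-parameter claim in the abstract.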
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
📜 Similar Papers
In the same crypt — Machine Learning
- XGBoost: A Scalable Tree Boosting System
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- Semi-Supervised Classification with Graph Convolutional Networks
- Proximal Policy Optimization Algorithms
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Died the same way — 👻 Ghosted
- Language Models are Few-Shot Learners
- You Only Look Once: Unified, Real-Time Object Detection
- A Unified Approach to Interpreting Model Predictions