The jamming transition as a paradigm to understand the loss landscape of deep neural networks
September 25, 2018 · Declared Dead · Physical Review E
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Mario Geiger, Stefano Spigler, StΓ©phane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, Matthieu Wyart
arXiv ID
1809.09349
Category
cond-mat.dis-nn
Cross-listed
cs.LG
Citations
153
Venue
Physical Review E
Last Checked
2 months ago
Abstract
Deep learning has been immensely successful at a variety of tasks, ranging from classification to AI. Learning corresponds to fitting training data, which is implemented by descending a very high-dimensional loss function. Understanding under which conditions neural networks do not get stuck in poor minima of the loss, and how the landscape of that loss evolves as depth is increased, remains a challenge. Here we predict, and test empirically, an analogy between this landscape and the energy landscape of repulsive ellipses. We argue that in fully connected networks a phase transition delimits the over- and under-parametrized regimes where fitting can or cannot be achieved. In the vicinity of this transition, properties of the curvature of the minima of the loss are critical. This transition shares direct similarities with the jamming transition, by which particles form a disordered solid as the density is increased, and which also occurs in certain classes of computational optimization and learning problems such as the perceptron. Our analysis gives a simple explanation as to why poor minima of the loss cannot be encountered in the overparametrized regime, and puts forward the surprising result that the ability of fully connected networks to fit random data is independent of their depth. Our observations suggest that this independence also holds for real data. We also study a quantity $\Delta$ which characterizes how well ($\Delta<0$) or badly ($\Delta>0$) a datum is learned. At the critical point it is power-law distributed, $P_+(\Delta)\sim\Delta^\theta$ for $\Delta>0$ and $P_-(\Delta)\sim(-\Delta)^{-\gamma}$ for $\Delta<0$, with $\theta\approx0.3$ and $\gamma\approx0.2$. This observation suggests that near the transition the loss landscape has a hierarchical structure and that learning is prone to avalanche-like dynamics, with abrupt changes in the set of patterns that are learned.
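The abstract's key quantity invites a small numerical illustration. The sketch below is not the authors' code: it assumes the common quadratic hinge form $\Delta_\mu = \epsilon_m - y_\mu f(x_\mu)$ (with margin $\epsilon_m$), and the architecture, width, step count, and learning rate are arbitrary choices. It trains a small fully connected network on random data and collects the per-datum $\Delta$, positive for badly learned data and negative for data learned with a gap, matching the sign convention in the abstract.

```python
# Illustrative sketch only (not the authors' code). Train a small fully
# connected net on random data with a quadratic hinge loss and collect the
# per-datum Delta = margin - y * f(x): Delta > 0 means the datum still
# contributes to the loss ("badly" learned), Delta < 0 means it is learned
# with a gap. All hyperparameters below are arbitrary assumptions.
import torch

torch.manual_seed(0)

P, d, h, margin = 1000, 20, 30, 1.0            # patterns, input dim, width, hinge margin
X = torch.randn(P, d)                          # random inputs
y = torch.randint(0, 2, (P,)).float() * 2 - 1  # random labels in {-1, +1}

model = torch.nn.Sequential(                   # small fully connected network
    torch.nn.Linear(d, h), torch.nn.ReLU(),
    torch.nn.Linear(h, h), torch.nn.ReLU(),
    torch.nn.Linear(h, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(5000):
    delta = margin - y * model(X).squeeze(-1)             # per-datum Delta
    loss = 0.5 * torch.clamp(delta, min=0).pow(2).mean()  # quadratic hinge loss
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    delta = (margin - y * model(X).squeeze(-1)).numpy()

pos, neg = delta[delta > 0], -delta[delta < 0]
print(f"unsatisfied (Delta>0): {pos.size}, satisfied (Delta<0): {neg.size}")
# Near the transition the paper reports P_+(Delta) ~ Delta^theta and
# P_-(Delta) ~ (-Delta)^(-gamma); log-log histograms of `pos` and `neg`
# are the place to look for those tails.
```

Sweeping the width (and hence the number of parameters) at fixed dataset size is the natural way to cross the fitting transition the abstract describes: deep in the overparametrized regime every $\Delta$ ends up negative and the loss vanishes, while near the critical point the positive and negative tails are where the reported $\theta\approx0.3$ and $\gamma\approx0.2$ would show up.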
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · cond-mat.dis-nn
👻 Ghosted · Mutual Information, Neural Networks and the Renormalization Group
👻 Ghosted · Machine learning meets network science: dimensionality reduction for fast and efficient embedding of networks in the hyperbolic space
👻 Ghosted · Classification and Geometry of General Perceptual Manifolds
👻 Ghosted · Criticality in Formal Languages and Statistical Physics
👻 Ghosted · Simplicial complexes: higher-order spectral dimension and dynamics
Died the same way · 👻 Ghosted
👻 Ghosted · Language Models are Few-Shot Learners
👻 Ghosted · PyTorch: An Imperative Style, High-Performance Deep Learning Library
👻 Ghosted · XGBoost: A Scalable Tree Boosting System