Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes
October 28, 2019 · Entered Twilight · Neural Information Processing Systems
"Last commit was 5.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: Batchnorm.ipynb, ExampleGloVeCov.npy, ExampleGloVeVecs.npy, GRU.ipynb, GRUsim.py, GRUtheory.py, GloVe.ipynb, Plots.ipynb, README.md, RNN.ipynb, RNNsim.py, RNNtheory.py, Transformer.ipynb, allkernels.png, colab, images, utils.py
Authors
Greg Yang
arXiv ID
1910.12478
Category
cs.NE: Neural and Evolutionary Computing
Cross-listed
cond-mat.dis-nn, cs.LG, math-ph
Citations
226
Venue
Neural Information Processing Systems
Repository
https://github.com/thegregyang/GP4A
⭐ 251
Last Checked
2 months ago
Abstract
Wide neural networks with random weights and biases are Gaussian processes, as originally observed by Neal (1995) and more recently by Lee et al. (2018) and Matthews et al. (2018) for deep fully-connected networks, as well as by Novak et al. (2019) and Garriga-Alonso et al. (2019) for deep convolutional networks. We show that this Neural Network-Gaussian Process correspondence surprisingly extends to all modern feedforward or recurrent neural networks composed of multilayer perceptron, RNNs (e.g. LSTMs, GRUs), (nD or graph) convolution, pooling, skip connection, attention, batch normalization, and/or layer normalization. More generally, we introduce a language for expressing neural network computations, and our result encompasses all such expressible neural networks. This work serves as a tutorial on the *tensor programs* technique formulated in Yang (2019) and elucidates the Gaussian Process results obtained there. We provide open-source implementations of the Gaussian Process kernels of simple RNN, GRU, transformer, and batchnorm+ReLU network at github.com/thegregyang/GP4A.
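The headline claim is easy to sanity-check numerically in the simplest setting the paper covers. The sketch below is our illustration, not code from the GP4A repository; the network form, scalings, and hyperparameters are assumptions. It samples many wide one-hidden-layer ReLU networks at random initialization and compares their empirical output covariance on two inputs against the analytic NNGP kernel, which for ReLU is the order-1 arc-cosine kernel of Cho and Saul (2009).

```python
import numpy as np

# Minimal Monte Carlo sketch of the NN-GP correspondence for a
# one-hidden-layer ReLU network. Illustrative only: these names and
# hyperparameters are not from the GP4A repository.

rng = np.random.default_rng(0)
d_in, width, n_nets = 3, 4096, 2000

X = rng.standard_normal((2, d_in))   # two fixed inputs x1, x2

# f(x) = v . relu(W x) / sqrt(width), with W_ij ~ N(0, 1/d_in), v_j ~ N(0, 1).
outs = np.empty((n_nets, 2))
for i in range(n_nets):
    W = rng.standard_normal((width, d_in)) / np.sqrt(d_in)
    v = rng.standard_normal(width)
    outs[i] = np.maximum(X @ W.T, 0) @ v / np.sqrt(width)

emp_cov = outs.T @ outs / n_nets     # empirical 2x2 covariance over networks

# Analytic NNGP kernel for ReLU: the order-1 arc-cosine kernel
# (Cho & Saul, 2009), evaluated on the scaled inputs x / sqrt(d_in).
def relu_kernel(x, y):
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

Xs = X / np.sqrt(d_in)
K = np.array([[relu_kernel(a, b) for b in Xs] for a in Xs])

print("empirical covariance:\n", emp_cov)
print("analytic GP kernel:\n", K)    # the two agree as width grows
```

At width 4096 the two matrices typically match to a couple of decimal places. The paper's contribution is that the same kind of wide-network limit holds not just for this toy MLP but for RNNs, attention, normalization layers, and any architecture expressible as a tensor program.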
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Neural & Evolutionary
Progressive Growing of GANs for Improved Quality, Stability, and Variation (👻 Ghosted)
Learning both Weights and Connections for Efficient Neural Networks (👻 Ghosted)
LSTM: A Search Space Odyssey (👻 Ghosted)
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks (👻 Ghosted)