Application of Quantum Annealing to Training of Deep Neural Networks

October 21, 2015 · Declared Dead · 🏛 arXiv.org

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Steven H. Adachi, Maxwell P. Henderson
arXiv ID: 1510.06356
Category: quant-ph (Quantum Computing)
Cross-listed: cs.LG, stat.ML
Citations: 253
Venue: arXiv.org
Last checked: 2 months ago
Abstract
In Deep Learning, a well-known approach for training a Deep Neural Network starts by training a generative Deep Belief Network model, typically using Contrastive Divergence (CD), then fine-tuning the weights using backpropagation or other discriminative techniques. However, the generative training can be time-consuming due to the slow mixing of Gibbs sampling. We investigated an alternative approach that estimates model expectations of Restricted Boltzmann Machines using samples from a D-Wave quantum annealing machine. We tested this method on a coarse-grained version of the MNIST data set. In our tests we found that the quantum sampling-based training approach achieves comparable or better accuracy with significantly fewer iterations of generative training than conventional CD-based training. Further investigation is needed to determine whether similar improvements can be achieved for other data sets, and to what extent these improvements can be attributed to quantum effects.
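The abstract contrasts two ways of estimating the model-expectation term in the RBM weight gradient: running Gibbs chains (Contrastive Divergence) versus drawing samples from an external device (the D-Wave annealer in the paper). A minimal classical sketch of that contrast is below; the class and method names are my own, and the `model_samples` argument merely stands in for annealer output — here it would be filled with ordinary random samples, not quantum ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary Restricted Boltzmann Machine (illustrative sketch)."""

    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))  # weights
        self.b = np.zeros(n_vis)                        # visible biases
        self.c = np.zeros(n_hid)                        # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_grad(self, v0):
        """Weight gradient via one Gibbs step (CD-1).

        The slow-mixing Gibbs chain is what the paper seeks to replace.
        """
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = (rng.random(v0.shape) < self.visible_probs(h0)).astype(float)
        ph1 = self.hidden_probs(v1)
        # <data expectation> minus <model expectation from the chain>
        return (v0.T @ ph0 - v1.T @ ph1) / len(v0)

    def sample_grad(self, v0, model_samples):
        """Weight gradient with the model expectation estimated from
        externally supplied (visible, hidden) samples — the role played
        by the quantum annealer in the paper."""
        ph0 = self.hidden_probs(v0)
        vs, hs = model_samples
        pos = v0.T @ ph0 / len(v0)   # data-dependent expectation
        neg = vs.T @ hs / len(vs)    # model expectation from samples
        return pos - neg
```

Both methods return the same-shaped gradient; only the source of the negative-phase samples differs, which is exactly the substitution the paper evaluates on coarse-grained MNIST.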
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Quantum Computing

R.I.P. 👻 Ghosted

Variational Quantum Algorithms

M. Cerezo, Andrew Arrasmith, ... (+9 more)

quant-ph πŸ› Nature Reviews Physics πŸ“š 3.3K cites 5 years ago

Died the same way — 👻 Ghosted