Coordinating Filters for Faster Deep Neural Networks

March 28, 2017 · Entered Twilight · 🏛 IEEE International Conference on Computer Vision

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .Doxyfile, .github, .gitignore, .travis.yml, CMakeLists.txt, CONTRIBUTING.md, CONTRIBUTORS.md, INSTALL.md, LICENSE, Makefile, Makefile.config.Ubuntu.16.04, Makefile.config.Ubuntu.16.04.anaconda.opt, Makefile.config.example, README.md, caffe.cloc, cmake, data, docker, docs, examples, include, matlab, models, python, scripts, src, tools

Authors: Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
arXiv ID: 1703.09746
Category: cs.CV (Computer Vision)
Citations: 141
Venue: IEEE International Conference on Computer Vision
Repository: https://github.com/wenwei202/caffe (⭐ 382)
Last checked: 2 months ago
Abstract
Very large-scale Deep Neural Networks (DNNs) have achieved remarkable success in a wide variety of computer vision tasks. However, the high computational intensity of DNNs makes it challenging to deploy these models on resource-limited systems. Some studies use low-rank approaches that approximate the filters with a low-rank basis to accelerate inference; those works directly decompose pre-trained DNNs by Low-Rank Approximation (LRA). How to train DNNs toward a lower-rank space for more efficient models, however, remains an open problem. To address this, we propose Force Regularization, which applies attractive forces to the filters so as to coordinate more of their weight information into a lower-rank space. We verify both mathematically and empirically that, after applying our technique, standard LRA methods can reconstruct the filters from a much smaller basis and thus yield faster DNNs. The effectiveness of our approach is comprehensively evaluated on ResNets, AlexNet, and GoogLeNet. In AlexNet, for example, Force Regularization gains a 2x speedup on a modern GPU without accuracy loss and a 4.05x speedup on CPU at the cost of a small accuracy degradation. Moreover, Force Regularization better initializes the low-rank DNNs, so that fine-tuning converges faster toward higher accuracy. The obtained lower-rank DNNs can be further sparsified, showing that Force Regularization can be integrated with state-of-the-art sparsity-based acceleration methods. Source code is available at https://github.com/wenwei202/caffe
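A rough feel for what "coordinating filters into a lower-rank space" buys can be had from a small NumPy sketch: flatten a convolutional filter bank into a matrix, measure the rank an LRA needs to retain most of the spectral energy, then pull the filters toward a common direction and measure again. The attractive-force step below is a deliberately simplified stand-in for the paper's actual Force Regularization gradient, and the filter shape, 95% energy threshold, and step strength are arbitrary assumptions made only for this illustration.

```python
import numpy as np

def rank_for_energy(W, energy=0.95):
    """Smallest rank whose leading singular values retain `energy` of the spectral energy."""
    s = np.linalg.svd(W, compute_uv=False)          # singular values, descending
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)        # cumulative energy fraction
    return int(np.searchsorted(cum, energy) + 1)

rng = np.random.default_rng(0)
# Toy "filter bank": 64 filters of a 3x3x64 conv layer, flattened to rows.
W = rng.standard_normal((64, 3 * 3 * 64))

def force_step(W, strength=0.05):
    """Illustrative attractive force: nudge every filter toward the mean filter
    direction (norm preserved), making the rows more correlated and therefore
    cheaper to approximate with a low-rank basis."""
    mean_dir = W.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return (1 - strength) * W + strength * norms * mean_dir

W_coord = W.copy()
for _ in range(20):                                  # mimic regularized training steps
    W_coord = force_step(W_coord)

print("rank @95% energy, original:   ", rank_for_energy(W))
print("rank @95% energy, coordinated:", rank_for_energy(W_coord))
```

Running the sketch shows the coordinated filter bank reaching the same energy threshold at a markedly smaller rank, which is the property the paper exploits: once the filters are coordinated during training, a standard LRA (e.g. a truncated SVD of the flattened filters) can decompose each layer with fewer basis filters and hence less computation.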
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt – Computer Vision