Transform-Invariant Convolutional Neural Networks for Image Classification and Search

November 28, 2019 · Entered Twilight · 🏛 ACM Multimedia

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 9.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .Doxyfile, .gitignore, .travis.yml, CMakeLists.txt, CONTRIBUTING.md, CONTRIBUTORS.md, INSTALL.md, LICENSE, Makefile, Makefile.config.example, README.md, build-windows, caffe.cloc, cmake, data, docker, docs, examples, include, matlab, models, python, scripts, src, test, tools

Authors: Xu Shen, Xinmei Tian, Anfeng He, Shaoyan Sun, Dacheng Tao
arXiv ID: 1912.01447
Category: cs.CV (Computer Vision)
Cross-listed: cs.LG, eess.IV, stat.ML
Citations: 45
Venue: ACM Multimedia
Repository: https://github.com/jasonustc/caffe-multigpu/tree/TICNN ⭐ 13
Last Checked: 2 months ago
Abstract
Convolutional neural networks (CNNs) have achieved state-of-the-art results on many visual recognition tasks. However, current CNN models still exhibit poor invariance to spatial transformations of images. Intuitively, with sufficient layers and parameters, hierarchical combinations of convolution (matrix multiplication and non-linear activation) and pooling operations should be able to learn a robust mapping from transformed input images to transform-invariant representations. In this paper, we propose randomly transforming (rotating, scaling, and translating) the feature maps of CNNs during the training stage. This prevents the model from developing complex dependencies on the specific rotation, scale, and translation levels of the training images. Instead, each convolutional kernel learns to detect a feature that is generally helpful for producing the transform-invariant answer, given the combinatorially large variety of transform levels of its input feature maps. In this way, we require no extra training supervision and no modification to the optimization process or the training images. We show that random transformation yields significant improvements for CNNs on many benchmark tasks, including small-scale image recognition, large-scale image recognition, and image retrieval. The code is available at https://github.com/jasonustc/caffe-multigpu/tree/TICNN.
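The core idea in the abstract, applying a random rotation, scale, and translation to intermediate feature maps at each training step, can be sketched outside the authors' Caffe code. The following is a minimal NumPy/SciPy illustration, not the paper's implementation; the function name, the transform ranges, and the crop/pad helper are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def _fit(arr, shape):
    """Center-crop or zero-pad a 2-D array back to the target shape.
    (Illustrative helper; the paper's Caffe layer handles this internally.)"""
    h, w = shape
    ah, aw = arr.shape
    if ah >= h:
        top = (ah - h) // 2
        arr = arr[top:top + h, :]
    else:
        pad = h - ah
        arr = np.pad(arr, ((pad // 2, pad - pad // 2), (0, 0)))
    ah, aw = arr.shape
    if aw >= w:
        left = (aw - w) // 2
        arr = arr[:, left:left + w]
    else:
        pad = w - aw
        arr = np.pad(arr, ((0, 0), (pad // 2, pad - pad // 2)))
    return arr


def randomly_transform(feature_maps, max_angle=30.0,
                       scale_range=(0.9, 1.1), max_shift=2.0, rng=None):
    """Apply one random rotation/scale/translation to a stack of feature
    maps (channels x H x W), as would be done at training time only.
    Ranges are illustrative, not the paper's settings."""
    if rng is None:
        rng = np.random.default_rng()
    angle = rng.uniform(-max_angle, max_angle)        # degrees
    scale = rng.uniform(*scale_range)
    shift = rng.uniform(-max_shift, max_shift, size=2)
    out = np.empty_like(feature_maps)
    for c, fmap in enumerate(feature_maps):
        t = ndimage.rotate(fmap, angle, reshape=False, order=1, mode="nearest")
        t = _fit(ndimage.zoom(t, scale, order=1), fmap.shape)
        out[c] = ndimage.shift(t, shift, order=1, mode="nearest")
    return out
```

At inference time the transform would simply be skipped, so the network sees undistorted feature maps; only training is randomized.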
