Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition

August 25, 2017 · Entered Twilight · 🏛 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 8.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, LR, README.md, activitynet_utils.lua, data_loader.lua, data_threads.lua, dataset.lua, kinetics_utils.lua, main.lua, mean.lua, model.lua, models, opts.lua, test_video.lua, train.lua, utils.lua, utils, val.lua

Authors: Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh
arXiv ID: 1708.07632
Category: cs.CV (Computer Vision)
Citations: 671
Venue: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
Repository: https://github.com/kenshohara/3D-ResNets ⭐ 122
Last Checked: 2 months ago
Abstract
Convolutional neural networks with spatio-temporal 3D kernels (3D CNNs) can directly extract spatio-temporal features from videos for action recognition. Although 3D kernels tend to overfit because of their large number of parameters, 3D CNNs have been greatly improved by recent large-scale video databases. However, 3D CNN architectures remain relatively shallow compared with the very deep 2D CNNs, such as residual networks (ResNets). In this paper, we propose 3D CNNs based on ResNets toward a better action representation. We describe the training procedure of our 3D ResNets in detail. We experimentally evaluate the 3D ResNets on the ActivityNet and Kinetics datasets. The 3D ResNets trained on Kinetics did not suffer from overfitting despite the large number of parameters of the model, and achieved better performance than relatively shallow networks, such as C3D. Our code and pretrained models (e.g. for Kinetics and ActivityNet) are publicly available at https://github.com/kenshohara/3D-ResNets.
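The core idea in the abstract is to lift a ResNet basic block from 2D to 3D: two 3×3×3 spatio-temporal convolutions plus an identity shortcut, so the block learns a residual over the input video clip. Below is a minimal NumPy sketch of that block (stride 1, zero padding, no batch norm, function names are mine); the authors' actual repository is a Lua/Torch implementation, so this is only an illustration of the mechanism, not their code.

```python
import numpy as np

def conv3d(x, w, pad=1):
    """Naive stride-1 3D convolution with zero padding.

    x: input clip of shape (C_in, T, H, W)
    w: kernels of shape (C_out, C_in, kT, kH, kW)
    With a 3x3x3 kernel and pad=1, the output keeps the input's T, H, W.
    """
    c_out, _, kt, kh, kw = w.shape
    _, t, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, t, h, wd))
    for o in range(c_out):
        for i in range(t):
            for j in range(h):
                for k in range(wd):
                    # correlate one 3D window with the o-th kernel
                    out[o, i, j, k] = np.sum(xp[:, i:i+kt, j:j+kh, k:k+kw] * w[o])
    return out

def residual_block(x, w1, w2):
    """ResNet basic block in 3D: conv-ReLU-conv, add identity, ReLU."""
    y = np.maximum(conv3d(x, w1), 0.0)   # first 3x3x3 conv + ReLU
    y = conv3d(y, w2)                    # second 3x3x3 conv
    return np.maximum(y + x, 0.0)        # identity shortcut, then ReLU
```

With all-zero kernels the residual branch contributes nothing, so the block reduces to a ReLU of the input, which is the sense in which the shortcut makes identity mappings easy to learn in very deep networks.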
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision