ChannelDropBack: Forward-Consistent Stochastic Regularization for Deep Networks

November 16, 2024 · Entered Twilight · 🏛 International Conference on Pattern Recognition

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, LICENSE, README.md, SGD_Dropback.py, ViT-pytorch, fine_tune.py, imgs, net_models, requirements.txt, train.py

Authors: Evgeny Hershkovitch Neiterman, Gil Ben-Artzi
arXiv ID: 2411.10891
Category: cs.CV (Computer Vision)
Citations: 1
Venue: International Conference on Pattern Recognition
Repository: https://github.com/neiterman21/ChannelDropBack.git
Last Checked: 2 months ago
Abstract
Incorporating stochasticity into the training process of deep convolutional networks is a widely used technique for reducing overfitting and improving regularization. Existing techniques often require modifying the network architecture by adding specialized layers, are effective only for specific network topologies or layer types (linear or convolutional), and produce a trained model that differs from the deployed one. We present ChannelDropBack, a simple stochastic regularization approach that introduces randomness only into the backward information flow, leaving the forward pass intact. ChannelDropBack randomly selects a subset of channels within the network during the backpropagation step and applies weight updates only to them. As a consequence, it integrates seamlessly into the training process of any model and any layer without requiring architectural changes, making it applicable to various network topologies, and the exact same network is used during training and inference. Experimental evaluations validate the effectiveness of our approach, demonstrating improved accuracy on popular datasets and models, including ImageNet and ViT. Code is available at https://github.com/neiterman21/ChannelDropBack.git.
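The core idea from the abstract — forward pass untouched, weight updates applied only to a randomly selected subset of channels — can be illustrated with a minimal NumPy sketch. This is an assumed reading of the mechanism, not the paper's actual implementation: the function name, the `keep_ratio` parameter, and the plain-SGD update are all illustrative choices.

```python
import numpy as np

def channel_dropback_update(weights, grads, lr=0.1, keep_ratio=0.5, rng=None):
    """Illustrative ChannelDropBack-style update (assumed semantics).

    The forward pass is unchanged; here, during the update step, a random
    subset of output channels (axis 0 of the weight tensor) is selected,
    gradients for all other channels are zeroed, and only the selected
    channels receive weight updates.
    """
    rng = rng or np.random.default_rng()
    n_channels = weights.shape[0]
    n_keep = max(1, int(round(keep_ratio * n_channels)))
    keep = rng.choice(n_channels, size=n_keep, replace=False)

    # Boolean channel mask, broadcast over the remaining gradient dims.
    mask = np.zeros(n_channels, dtype=bool)
    mask[keep] = True
    mask = mask.reshape(-1, *([1] * (grads.ndim - 1)))

    masked_grads = np.where(mask, grads, 0.0)
    return weights - lr * masked_grads

# Example: a 4x3 weight matrix (4 output channels); with keep_ratio=0.5,
# only 2 of the 4 rows are updated, the others stay exactly as they were.
w = np.ones((4, 3))
g = np.ones((4, 3))
w_new = channel_dropback_update(w, g, lr=0.1, keep_ratio=0.5,
                                rng=np.random.default_rng(0))
```

In a real framework this masking would sit between backpropagation and the optimizer step (e.g. zeroing per-channel gradient slices before `optimizer.step()` in PyTorch), which is what lets the technique apply to any layer type without architectural changes.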
Community shame: Not yet rated
