AssembleNet++: Assembling Modality Representations via Attention Connections

August 18, 2020 · Declared Dead · 🏛 European Conference on Computer Vision

⏳ CAUSE OF DEATH: Coming Soon™
Promised but never delivered

"Paper promises code 'coming soon'"

Evidence collected by the PWNC Scanner

Authors: Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova
arXiv ID: 2008.08072
Category: cs.CV (Computer Vision)
Cross-listed: cs.LG, cs.NE
Citations: 50
Venue: European Conference on Computer Vision
Last checked: 1 month ago
Abstract
We create a family of powerful video models that are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network. A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or input modality. Even without pre-training, our models outperform previous work on standard public activity recognition datasets with continuous videos, establishing a new state of the art. We also confirm that our findings, namely adding neural connections from the object modality and using peer-attention, are generally applicable to different existing architectures, improving their performance. We name our model explicitly as AssembleNet++. The code will be available at: https://sites.google.com/corp/view/assemblenet/
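Since the promised code was never released, the abstract is the only description available here. A minimal sketch of what "peer-attention" could look like, assuming a channel-wise gating formulation in which the attention weights for one block are computed from the globally pooled features of a different (peer) block or input modality; the function name, the projection parameter `w`, and the pooling choice are all assumptions, not the authors' actual implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peer_attention(x, peer, w):
    """Channel-wise attention on x, gated by a *peer* block (hypothetical sketch).

    x    : (C, T, H, W)     features of the current block
    peer : (Cp, Tp, Hp, Wp) features of the peer block or input modality
    w    : (C, Cp)          assumed learned projection matrix
    """
    # Global average pool the peer features over space-time -> (Cp,)
    ctx = peer.mean(axis=(1, 2, 3))
    # Project to one gate per channel of x and squash to (0, 1)
    gate = sigmoid(w @ ctx)  # (C,)
    # Re-weight the current block's channels by the peer-derived gates
    return x * gate[:, None, None, None]
```

The key point the abstract makes is that the gate is a function of *another* block's features rather than the block's own, which is what distinguishes peer-attention from ordinary self-gating.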
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision

Died the same way — ⏳ Coming Soon™