Multi-Scale Attention for Audio Question Answering

May 29, 2023 · Entered Twilight · 🏛 Interspeech

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: conifg.py, data_generator.py, main_MWAFM.py, metadata, nets, readme.md, scripts, utils.py

Authors: Guangyao Li, Yixin Xu, Di Hu
arXiv ID: 2305.17993
Category: cs.SD (Sound)
Cross-listed: cs.AI, cs.MM, eess.AS
Citations: 17
Venue: Interspeech
Repository: https://github.com/GeWu-Lab/MWAFM ⭐ 28
Last Checked: 2 months ago
Abstract
Audio question answering (AQA), widely used as a proxy task for exploring scene understanding, has received growing attention. AQA is challenging because it requires comprehensive temporal reasoning over events of different scales within an audio scene. However, existing methods mostly extend structures from the visual question answering task to audio in a simple pattern and may not perform well when perceiving a fine-grained audio scene. To this end, we present a Multi-scale Window Attention Fusion Model (MWAFM) consisting of an asynchronous hybrid attention module and a multi-scale window attention module. The former is designed to aggregate unimodal and cross-modal temporal contexts, while the latter captures sound events of varying lengths and their temporal dependencies for a more comprehensive understanding. Extensive experiments demonstrate that the proposed MWAFM can effectively exploit temporal information to facilitate AQA in fine-grained scenes. Code: https://github.com/GeWu-Lab/MWAFM
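The abstract's core idea, attending within local windows of several sizes so that both short and long sound events are captured, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the window sizes `(2, 4, 8)`, and the choice of averaging to fuse scales are all assumptions; the actual MWAFM uses learned projections and a learned fusion.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, w):
    """Self-attention restricted to non-overlapping windows of length w.

    x: (time, dim) array of audio features. A small window sees short
    events; a large window sees longer events and dependencies.
    """
    t, d = x.shape
    pad = (-t) % w  # pad so the sequence splits evenly into windows
    xp = np.pad(x, ((0, pad), (0, 0)))
    out = np.empty_like(xp)
    for s in range(0, xp.shape[0], w):
        blk = xp[s:s + w]
        # Scaled dot-product attention within the window only
        # (no learned Q/K/V projections in this sketch).
        scores = softmax(blk @ blk.T / np.sqrt(d))
        out[s:s + w] = scores @ blk
    return out[:t]  # drop the padding

def multi_scale_window_attention(x, window_sizes=(2, 4, 8)):
    """Run window attention at several scales and fuse by averaging
    (fusion by simple mean is an assumption of this sketch)."""
    return np.mean([window_attention(x, w) for w in window_sizes], axis=0)
```

For example, calling `multi_scale_window_attention` on a `(time, dim)` feature sequence returns a tensor of the same shape in which each frame has been contextualized at every window scale; a window size at least as long as the sequence reduces to ordinary full self-attention.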
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Sound