MInD: Improving Multimodal Sentiment Analysis via Multimodal Information Disentanglement

January 22, 2024 · Declared Dead · 🏛 arXiv.org

⏳ CAUSE OF DEATH: Coming Soon™
Promised but never delivered

"Paper promises code 'coming soon'"

Evidence collected by the PWNC Scanner

Authors: Weichen Dai, Xingyu Li, Zeyu Wang, Pengbo Hu, Ji Qi, Jianlin Peng, Yi Zhou
arXiv ID: 2401.11818
Category: cs.MM (Multimedia)
Citations: 7
Venue: arXiv.org
Last Checked: 1 month ago
Abstract
Learning effective joint representations has been a central task in multi-modal sentiment analysis. Previous works addressing this task focus on exploring sophisticated fusion techniques to enhance performance. However, the inherent heterogeneity of distinct modalities remains a core problem, making it challenging to fuse and coordinate multi-modal signals at both the representational and informational levels and impeding the full exploitation of multi-modal information. To address this problem, we propose the Multi-modal Information Disentanglement (MInD) method, which decomposes the multi-modal inputs into modality-invariant and modality-specific components through a shared encoder and multiple private encoders. Furthermore, by explicitly training generated noise in an adversarial manner, MInD is able to isolate uninformative content, thereby improving the learned representations. The proposed disentangled decomposition thus allows for a fusion process that is simpler than in alternative methods and results in improved performance. Experimental evaluations conducted on representative benchmark datasets demonstrate MInD's effectiveness in both multi-modal emotion recognition and multi-modal humor detection tasks. Code will be released upon acceptance of the paper.
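Since the promised code never materialized, the sketch below is purely illustrative: a minimal PyTorch reading of the shared-encoder / private-encoder decomposition the abstract describes. Every name, dimension, and design choice here (MInDSketch, hidden_dim, averaging the shared views, the concatenation-based fusion) is an assumption, not the authors' implementation; the adversarially trained noise component mentioned in the abstract is omitted entirely.

```python
# Hypothetical sketch only -- the authors never released code, so nothing
# here reflects the actual MInD implementation. All names and dimensions
# are assumptions for illustration.
import torch
import torch.nn as nn

class MInDSketch(nn.Module):
    """One shared encoder (modality-invariant component) plus one private
    encoder per modality (modality-specific components), roughly as the
    abstract describes. The adversarial noise training is not modeled."""

    def __init__(self, input_dims, hidden_dim=128):
        super().__init__()
        # Project each modality's raw features to a common width first.
        self.projections = nn.ModuleDict({
            m: nn.Linear(d, hidden_dim) for m, d in input_dims.items()
        })
        # Shared encoder: intended to capture modality-invariant content.
        self.shared_encoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Private encoders: one per modality for modality-specific content.
        self.private_encoders = nn.ModuleDict({
            m: nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
            ) for m in input_dims
        })
        # Plain concatenation + linear fusion, since the abstract claims
        # disentanglement permits a simpler fusion step.
        n = len(input_dims)
        self.fusion = nn.Linear(hidden_dim * (n + 1), hidden_dim)

    def forward(self, inputs):
        # inputs: dict mapping modality name -> (batch, input_dim) tensor
        projected = {m: self.projections[m](x) for m, x in inputs.items()}
        # Average the shared encoder's view of each modality into a single
        # invariant code (one plausible choice among many).
        invariant = torch.stack(
            [self.shared_encoder(h) for h in projected.values()]
        ).mean(dim=0)
        specific = [self.private_encoders[m](h) for m, h in projected.items()]
        fused = self.fusion(torch.cat([invariant, *specific], dim=-1))
        return fused, invariant, specific

# Usage with typical text/audio/vision feature widths (also assumptions):
dims = {"text": 768, "audio": 74, "vision": 35}
model = MInDSketch(dims)
batch = {m: torch.randn(4, d) for m, d in dims.items()}
fused, invariant, specific = model(batch)
print(fused.shape)  # torch.Size([4, 128])
```

The fused vector would then feed a downstream sentiment or humor classifier; how the paper actually combines, regularizes, or supervises the invariant and specific components is unknown without the released code.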
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Multimedia

Died the same way — ⏳ Coming Soon™