Dynamic Multimodal Information Bottleneck for Multimodality Classification

November 02, 2023 Β· Entered Twilight Β· πŸ› IEEE Workshop/Winter Conference on Applications of Computer Vision

πŸ’€ TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitattributes, README.md, dataset, diagnosis_tasks, prognosis_tasks

Authors: Yingying Fang, Shuang Wu, Sheng Zhang, Chaoyan Huang, Tieyong Zeng, Xiaodan Xing, Simon Walsh, Guang Yang
arXiv ID: 2311.01066
Category: eess.IV (Image & Video Processing), cross-listed in cs.CV
Citations: 15
Venue: IEEE Workshop/Winter Conference on Applications of Computer Vision
Repository: https://github.com/ayanglab/DMIB ⭐ 9
Last checked: 2 months ago
Abstract
Effectively leveraging multimodal data, such as various images, laboratory tests, and clinical information, is gaining traction in a variety of AI-based medical diagnosis and prognosis tasks. Most existing multimodal techniques focus on improving performance by exploiting the differences between, or the features shared across, modalities and fusing features from the different modalities. These approaches are generally suboptimal for clinical settings, which pose the additional challenges of limited training data and an abundance of redundant data or noisy modality channels, leading to subpar performance. To address this gap, we study the robustness of existing methods to data redundancy and noise and propose a generalized dynamic multimodal information bottleneck framework for attaining a robust fused feature representation. Specifically, our information bottleneck module filters out task-irrelevant information and noise in the fused feature, and we further introduce a sufficiency loss to prevent the dropping of task-relevant information, thus explicitly preserving the sufficiency of the prediction information in the distilled feature. We validate our model on an in-house and a public COVID-19 dataset for mortality prediction, as well as on two public biomedical datasets for diagnostic tasks. Extensive experiments show that our method surpasses the state of the art and is significantly more robust, being the only method to retain its performance when large-scale noisy channels are present. Our code is publicly available at https://github.com/ayanglab/DMIB.
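The abstract's recipe, compressing the fused multimodal feature through a bottleneck while a sufficiency loss keeps the prediction-relevant part, can be sketched numerically. This is a minimal illustration, not the paper's actual DMIB implementation: the Gaussian latent with a KL compression term and the prediction-matching form of the sufficiency loss are assumptions chosen for concreteness, and all function names below are hypothetical.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # mean softmax cross-entropy: the task loss on the distilled feature
    log_p = np.log(softmax(logits) + 1e-12)
    return -log_p[np.arange(len(labels)), labels].mean()

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ): a common compression term
    # that penalizes information the bottleneck latent keeps about the input
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1).mean()

def sufficiency_loss(distilled_logits, fused_logits):
    # assumed sufficiency term: make predictions from the distilled feature
    # match predictions from the raw fused feature, so the bottleneck cannot
    # silently discard task-relevant information
    p = softmax(fused_logits)
    q = softmax(distilled_logits)
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1).mean()

def ib_objective(mu, logvar, distilled_logits, fused_logits, labels,
                 beta=1e-3, gamma=1.0):
    # total objective: task loss + beta * compression + gamma * sufficiency
    return (cross_entropy(distilled_logits, labels)
            + beta * kl_to_standard_normal(mu, logvar)
            + gamma * sufficiency_loss(distilled_logits, fused_logits))

# toy batch: 4 samples, 8-dim bottleneck latent, 3 classes
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8))
logvar = rng.normal(size=(4, 8)) * 0.1
distilled_logits = rng.normal(size=(4, 3))
fused_logits = rng.normal(size=(4, 3))
labels = np.array([0, 2, 1, 0])
loss = ib_objective(mu, logvar, distilled_logits, fused_logits, labels)
print(float(loss))
```

The β/γ weights trade off compression against sufficiency: a larger β drops more of the fused feature, while the sufficiency term pulls the distilled prediction back toward the full fused prediction.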
