Learning to Rematch Mismatched Pairs for Robust Cross-Modal Retrieval

March 08, 2024 · Entered Twilight · 🏛 Computer Vision and Pattern Recognition

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: README.md, data.py, evaluation.py, main_L2RM.py, main_testing.py, models.py, noise_index, opt.py, tarin_coco.sh, train_cc152.sh, train_f30k.sh, utils.py, vocab.py

Authors: Haochen Han, Qinghua Zheng, Guang Dai, Minnan Luo, Jingdong Wang
arXiv ID: 2403.05105
Category: cs.CV (Computer Vision)
Cross-listed: cs.AI, cs.MM
Citations: 17
Venue: Computer Vision and Pattern Recognition
Repository: https://github.com/hhc1997/L2RM ⭐ 36
Last Checked: 2 months ago
Abstract
Collecting well-matched multimedia datasets is crucial for training cross-modal retrieval models. However, in real-world scenarios, massive multimodal data are harvested from the Internet, which inevitably contain Partially Mismatched Pairs (PMPs). Such semantically irrelevant data undoubtedly harm cross-modal retrieval performance. Previous efforts tend to mitigate this problem by estimating a soft correspondence to down-weight the contribution of PMPs. In this paper, we address the challenge from a new perspective: the potential semantic similarity among unpaired samples makes it possible to excavate useful knowledge from mismatched pairs. To achieve this, we propose L2RM, a general framework based on Optimal Transport (OT) that learns to rematch mismatched pairs. In detail, L2RM generates refined alignments by seeking a minimal-cost transport plan across modalities. To formalize the rematching idea in OT, we first propose a self-supervised cost function that automatically learns the explicit similarity-to-cost mapping. Second, we propose to model a partial OT problem that restricts transport to false positives, further boosting the refined alignments. Extensive experiments on three benchmarks demonstrate that L2RM significantly improves the robustness of existing models against PMPs. The code is available at https://github.com/hhc1997/L2RM.
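The OT-based rematching idea in the abstract can be illustrated with a generic entropic OT solver: turn cross-modal similarities into a cost matrix, compute a minimal-cost transport plan via Sinkhorn iterations, and read refined alignments off the plan. This is a minimal sketch of the general technique, not the authors' implementation; the similarity values, the `1 - sim` cost mapping, and all names here are hypothetical stand-ins (L2RM instead learns its cost function and solves a partial OT restricted to false positives).

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropic-regularized OT with uniform marginals.

    Returns a transport plan whose rows sum to 1/n and columns to 1/m,
    concentrating mass on low-cost (high-similarity) pairings.
    """
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)          # Gibbs kernel of the cost matrix
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # alternate column scaling
        u = a / (K @ v)              # ... and row scaling
    return u[:, None] * K * v[None, :]

# Toy image-text similarity matrix (hypothetical values).
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.8, 0.1],
                [0.1, 0.3, 0.7]])
cost = 1.0 - sim                     # higher similarity -> lower transport cost
plan = sinkhorn(cost)
rematch = plan.argmax(axis=1)        # refined image -> text alignment: [0, 1, 2]
```

In this toy case the plan's mass concentrates on the diagonal, so each image is rematched to its most similar caption; with mismatched pairs, the same machinery reassigns an image to a semantically closer caption elsewhere in the batch.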
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt — Computer Vision