From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective

May 10, 2022 · Declared Dead · 🏛️ Annual International ACM SIGIR Conference on Research and Development in Information Retrieval

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Thibault Formal, Carlos Lassance, Benjamin Piwowarski, Stéphane Clinchant
arXiv ID: 2205.04733
Category: cs.IR (Information Retrieval)
Cross-listed: cs.CL
Citations: 188
Venue: Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Last Checked: 2 months ago
Abstract
Neural retrievers based on dense representations combined with Approximate Nearest Neighbors search have recently received a lot of attention, owing their success to distillation and/or better sampling of examples for training -- while still relying on the same backbone architecture. In the meantime, sparse representation learning fueled by traditional inverted indexing techniques has seen a growing interest, inheriting from desirable IR priors such as explicit lexical matching. While some architectural variants have been proposed, a lesser effort has been put in the training of such models. In this work, we build on SPLADE -- a sparse expansion-based retriever -- and show to which extent it is able to benefit from the same training improvements as dense models, by studying the effect of distillation, hard-negative mining as well as the Pre-trained Language Model initialization. We furthermore study the link between effectiveness and efficiency, on in-domain and zero-shot settings, leading to state-of-the-art results in both scenarios for sufficiently expressive models.
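
Since there is no code to link to, here is a rough, self-contained sketch of what the abstract describes, assuming the formulation from the earlier SPLADE papers: texts are expanded into sparse vocabulary-sized vectors by log-saturating and max-pooling the MLM logits of a BERT-style encoder, and training can distill a cross-encoder teacher through a Margin-MSE loss over hard negatives. The checkpoint name and helper functions below are illustrative stand-ins, not the paper's implementation.

# A minimal sketch, NOT the authors' released code (none was found): it assumes
# the SPLADE term-weighting scheme and a generic BERT MLM checkpoint
# ("bert-base-uncased" is an illustrative stand-in).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def splade_rep(texts):
    """Sparse |vocab|-dim vectors: w_j = max_i log(1 + relu(logit_ij))."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**batch).logits            # (batch, seq_len, vocab)
    sat = torch.log1p(torch.relu(logits))         # log-saturation keeps weights sparse
    mask = batch["attention_mask"].unsqueeze(-1)  # zero out padding positions
    return (sat * mask).amax(dim=1)               # max-pool over token positions

def margin_mse(s_pos, s_neg, t_pos, t_neg):
    """Distillation: match the student's pos/neg score margin to a cross-encoder
    teacher's margin (Margin-MSE); hard negatives make these margins informative."""
    return torch.nn.functional.mse_loss(s_pos - s_neg, t_pos - t_neg)

# Ranking score is a simple dot product, compatible with an inverted index.
q = splade_rep(["what is sparse retrieval"])
d = splade_rep(["SPLADE expands documents into sparse lexical vectors"])
print((q * d).sum(-1))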
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Information Retrieval

Died the same way – 👻 Ghosted