From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective
May 10, 2022 · Declared Dead · Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Thibault Formal, Carlos Lassance, Benjamin Piwowarski, StΓ©phane Clinchant
arXiv ID
2205.04733
Category
cs.IR: Information Retrieval
Cross-listed
cs.CL
Citations
188
Venue
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Last Checked
2 months ago
Abstract
Neural retrievers based on dense representations combined with Approximate Nearest Neighbors search have recently received a lot of attention, owing their success to distillation and/or better sampling of examples for training -- while still relying on the same backbone architecture. In the meantime, sparse representation learning fueled by traditional inverted indexing techniques has seen a growing interest, inheriting from desirable IR priors such as explicit lexical matching. While some architectural variants have been proposed, a lesser effort has been put in the training of such models. In this work, we build on SPLADE -- a sparse expansion-based retriever -- and show to which extent it is able to benefit from the same training improvements as dense models, by studying the effect of distillation, hard-negative mining as well as the Pre-trained Language Model initialization. We furthermore study the link between effectiveness and efficiency, on in-domain and zero-shot settings, leading to state-of-the-art results in both scenarios for sufficiently expressive models.
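The training recipe the abstract refers to — distillation from a cross-encoder teacher plus mined hard negatives, applied to SPLADE's sparse expansion encoder — can be summarized in a few lines. The sketch below is a minimal illustration under stated assumptions, not the authors' released code: the DistilBERT checkpoint, the teacher margin value, and the FLOPS regularization weight are placeholders chosen for the example.

    # Minimal sketch (not the authors' code) of SPLADE-style training with
    # MarginMSE distillation over a (query, positive, hard negative) triple.
    # Model name, teacher margin and lambda_flops are illustrative assumptions.
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    mlm = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

    def splade_encode(texts):
        """Sparse expansion: max-pool log(1 + ReLU(MLM logits)) over non-padding tokens."""
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        logits = mlm(**batch).logits                      # (B, L, |V|)
        weights = torch.log1p(torch.relu(logits))         # non-negative, log-saturated
        mask = batch["attention_mask"].unsqueeze(-1)      # zero out padding positions
        return (weights * mask).max(dim=1).values         # (B, |V|) vocabulary-sized reps

    def train_step(query, pos_doc, neg_doc, teacher_margin, lambda_flops=1e-3):
        """One distillation step: match the teacher's score margin on the triple."""
        q, dp, dn = splade_encode([query]), splade_encode([pos_doc]), splade_encode([neg_doc])
        student_margin = (q * dp).sum(-1) - (q * dn).sum(-1)        # dot-product scores
        margin_mse = torch.nn.functional.mse_loss(student_margin, teacher_margin)
        # FLOPS-style sparsity regularization on the document representations.
        flops = (dp.mean(0) ** 2).sum() + (dn.mean(0) ** 2).sum()
        return margin_mse + lambda_flops * flops

    # Hypothetical triple; the margin would come from a cross-encoder teacher.
    loss = train_step("what is splade",
                      "SPLADE is a sparse neural retriever ...",
                      "Dense retrievers rely on ANN search ...",
                      torch.tensor([4.2]))
    loss.backward()

In practice the triples come from hard negatives mined with an earlier retriever rather than random sampling, which is the change the paper studies alongside distillation and the choice of pre-trained language model used for initialization.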
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers

In the same crypt – Information Retrieval

R.I.P. 👻 Ghosted
R.I.P. 👻 Ghosted
LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
R.I.P. 👻 Ghosted
Graph Convolutional Neural Networks for Web-Scale Recommender Systems
Old Age
Neural Graph Collaborative Filtering
R.I.P. 👻 Ghosted
Self-Attentive Sequential Recommendation
R.I.P. 👻 Ghosted
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction

Died the same way – 👻 Ghosted

R.I.P. 👻 Ghosted
Language Models are Few-Shot Learners
R.I.P. 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library
R.I.P. 👻 Ghosted
XGBoost: A Scalable Tree Boosting System
R.I.P. 👻 Ghosted