R.I.P.
👻
Ghosted
Covariance-adapting algorithm for semi-bandits with application to sparse rewards
April 15, 2026 · Grace Period · Proceedings of the 33rd Annual Conference on Learning Theory (COLT 2020), PMLR 125, 2020
Authors
Pierre Perrault, Vianney Perchet, Michal Valko
arXiv ID
2604.13738
Category
stat.ML: Machine Learning (Stat)
Cross-listed
cs.LG
Citations
0
Venue
Proceedings of the 33rd Annual Conference on Learning Theory (COLT 2020), PMLR 125, 2020
Abstract
We investigate stochastic combinatorial semi-bandits, where the entire joint distribution of outcomes impacts the complexity of the problem instance (unlike in the standard bandits). Typical distributions considered depend on specific parameter values, whose prior knowledge is required in theory but quite difficult to estimate in practice; an example is the commonly assumed sub-Gaussian family. We alleviate this issue by instead considering a new general family of sub-exponential distributions, which contains bounded and Gaussian ones. We prove a new lower bound on the expected regret on this family, that is parameterized by the unknown covariance matrix of outcomes, a tighter quantity than the sub-Gaussian matrix. We then construct an algorithm that uses covariance estimates, and provide a tight asymptotic analysis of the regret. Finally, we apply and extend our results to the family of sparse outcomes, which has applications in many recommender systems.
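The abstract describes an algorithm that builds confidence regions from estimated covariances of the outcomes rather than from a worst-case sub-Gaussian constant that must be known in advance. As a rough illustration only (this is not the paper's actual algorithm or analysis), the sketch below runs a toy combinatorial semi-bandit loop in which each feasible subset of base arms is scored by a UCB index whose Bernstein-style width uses empirical per-arm variances; the function name, the test instance, and the width constants are all illustrative assumptions.

```python
import numpy as np

def covariance_adaptive_semi_bandit(mu, Sigma, actions, T, seed=0):
    """Toy sketch of a variance/covariance-adaptive UCB for semi-bandits.

    Each round the learner picks one feasible subset of base arms (an
    "action"), observes the outcome of every arm in that subset
    (semi-bandit feedback), and receives their sum as reward.  Widths are
    built from empirical variances instead of a known sub-Gaussian bound.
    Returns the cumulative pseudo-regret over T rounds.
    """
    rng = np.random.default_rng(seed)
    d = len(mu)
    counts = np.zeros(d)    # number of times each base arm was observed
    sums = np.zeros(d)      # running sum of observed outcomes
    sq_sums = np.zeros(d)   # running sum of squared outcomes
    best = max(mu[a].sum() for a in actions)  # best achievable mean reward
    regret = 0.0
    for t in range(1, T + 1):
        n = np.maximum(counts, 1)
        means = sums / n
        var = np.maximum(sq_sums / n - means ** 2, 1e-12)
        # Bernstein-style width: a variance term plus a slow 1/n term;
        # never-pulled arms get an infinite bonus, forcing exploration.
        width = np.where(
            counts > 0,
            np.sqrt(2.0 * var * np.log(t + 1) / n) + 3.0 * np.log(t + 1) / n,
            np.inf,
        )
        ucb = means + width
        a = max(actions, key=lambda s: ucb[s].sum())  # optimistic action
        # Outcomes are drawn jointly, so correlations between arms matter.
        outcome = rng.multivariate_normal(mu, Sigma)
        counts[a] += 1
        sums[a] += outcome[a]
        sq_sums[a] += outcome[a] ** 2
        regret += best - mu[a].sum()
    return regret
```

On a two-arm toy instance with a clear gap, the empirical variance shrinks the widths quickly, so the suboptimal action is abandoned after logarithmically many pulls and the cumulative regret stays far below the linear worst case.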
Similar Papers
In the same crypt · Machine Learning (Stat)
Distilling the Knowledge in a Neural Network
Layer Normalization
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Domain-Adversarial Training of Neural Networks