The Price of Incentivizing Exploration: A Characterization via Thompson Sampling and Sample Complexity
February 03, 2020 · Declared Dead · ACM Conference on Economics and Computation
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Mark Sellke, Aleksandrs Slivkins
arXiv ID
2002.00558
Category
cs.GT: Game Theory
Cross-listed
cs.DS, cs.LG
Citations
37
Venue
ACM Conference on Economics and Computation
Last Checked
2 months ago
Abstract
We consider incentivized exploration: a version of multi-armed bandits where the choice of arms is controlled by self-interested agents, and the algorithm can only issue recommendations. The algorithm controls the flow of information, and the information asymmetry can incentivize the agents to explore. Prior work achieves optimal regret rates up to multiplicative factors that become arbitrarily large depending on the Bayesian priors, and scale exponentially in the number of arms. A more basic problem of sampling each arm once runs into similar factors. We focus on the price of incentives: the loss in performance, broadly construed, incurred for the sake of incentive-compatibility. We prove that Thompson Sampling, a standard bandit algorithm, is incentive-compatible if initialized with sufficiently many data points. The performance loss due to incentives is therefore limited to the initial rounds when these data points are collected. The problem is largely reduced to that of sample complexity: how many rounds are needed? We address this question, providing matching upper and lower bounds and instantiating them in various corollaries. Typically, the optimal sample complexity is polynomial in the number of arms and exponential in the "strength of beliefs".
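To make the abstract's main point concrete, here is a minimal sketch of Bernoulli Thompson Sampling with Beta priors, warm-started by pulling each arm a few times before the posterior-sampling loop begins. This is an illustration only: the `n_init` warm-start below is a hypothetical stand-in for the paper's "sufficiently many data points", whose actual threshold depends on the prior and is not reproduced here, and all function names and parameters are our own.

```python
import numpy as np

def thompson_sampling(true_means, horizon, n_init=5, seed=0):
    """Bernoulli Thompson Sampling with Beta(1, 1) priors.

    Each arm is pulled n_init times up front; this warm-start phase is
    where the "price of incentives" would be paid, per the abstract.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.ones(k)  # Beta alpha parameters, one per arm
    failures = np.ones(k)   # Beta beta parameters, one per arm

    # Initial data collection: n_init Bernoulli samples per arm.
    for arm, mu in enumerate(true_means):
        wins = (rng.random(n_init) < mu).sum()
        successes[arm] += wins
        failures[arm] += n_init - wins

    rewards = []
    for _ in range(horizon):
        # Draw one posterior sample per arm; play the arm whose
        # sampled mean is largest (the standard TS rule).
        samples = rng.beta(successes, failures)
        arm = int(np.argmax(samples))
        reward = rng.random() < true_means[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        rewards.append(reward)
    return np.mean(rewards)

print(thompson_sampling([0.3, 0.5, 0.7], horizon=10_000))
```

Under the paper's result, once enough initial samples have been collected, following the posterior-sampling recommendations is in each agent's own interest, so the loop after the warm-start incurs no further incentive cost.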
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Game Theory
A Motivational Game-Theoretic Approach for Peer-to-Peer Energy Trading in the Smart Grid · R.I.P. 👻 Ghosted
Computing Resource Allocation in Three-Tier IoT Fog Networks: a Joint Optimization Approach Combining Stackelberg Game and Matching · R.I.P. 👻 Ghosted
Fast Convergence of Regularized Learning in Games · R.I.P. 👻 Ghosted
Computation Peer Offloading for Energy-Constrained Mobile Edge Computing in Small-Cell Networks · R.I.P. 👻 Ghosted
Blockchain Mining Games · R.I.P. 👻 Ghosted
Died the same way · 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System