Optimally Deceiving a Learning Leader in Stackelberg Games
June 11, 2020 · Declared Dead · Neural Information Processing Systems
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Georgios Birmpas, Jiarui Gan, Alexandros Hollender, Francisco J. Marmolejo-Cossío, Ninad Rajgopal, Alexandros A. Voudouris
arXiv ID
2006.06566
Category
cs.GT: Game Theory
Cross-listed
cs.DS, cs.LG
Citations
20
Venue
Neural Information Processing Systems
Last Checked
2 months ago
Abstract
Recent results in the ML community have revealed that learning algorithms used to compute the optimal strategy for the leader to commit to in a Stackelberg game are susceptible to manipulation by the follower. Such a learning algorithm operates by querying the best responses or the payoffs of the follower, who can therefore deceive the algorithm by responding as if his payoffs were very different from what they actually are. For this strategic behavior to succeed, the main challenge faced by the follower is to pinpoint the payoffs that would make the learning algorithm compute a commitment such that best responding to it maximizes the follower's utility according to his true payoffs. While this problem has been considered before, the related literature has only focused on the simplified scenario in which the payoff space is finite, leaving the general version of the problem unanswered. In this paper, we fill this gap by showing that it is always possible for the follower to compute (near-)optimal payoffs for various scenarios of the learning interaction between leader and follower.
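The manipulation the abstract describes can be illustrated with a toy sketch (this is not the paper's algorithm): a leader commits to the pure action that is optimal against the follower payoffs she has learned, and the follower, by reporting fake payoffs and then best-responding consistently with them, can steer the leader toward a commitment that is better for the follower's true payoffs. The 2x2 game and the restriction to 0/1 fake payoff entries below are illustrative assumptions.

```python
from itertools import product

def leader_commitment(A, B):
    """Leader commits to the pure action maximizing her payoff,
    assuming the follower best-responds according to payoffs B
    (ties broken in the leader's favor)."""
    best = None
    for i in range(len(A)):
        m = max(B[i])
        brs = [j for j, b in enumerate(B[i]) if b == m]
        j = max(brs, key=lambda jj: A[i][jj])  # leader-favorable tie-break
        if best is None or A[i][j] > A[best[0]][best[1]]:
            best = (i, j)
    return best

# True payoffs (illustrative): rows are leader actions, columns follower actions.
A = [[3, 0], [1, 2]]  # leader
B = [[1, 0], [0, 2]]  # follower (true)

# Truthful interaction: the leader learns B and commits accordingly.
i, j = leader_commitment(A, B)
honest_utility = B[i][j]  # follower's true payoff when honest

# Deception: the follower searches over fake 0/1 payoff matrices, and in each
# case plays the best response dictated by the payoffs he reported.
best_fake_utility = honest_utility
for entries in product([0, 1], repeat=4):
    fake = [list(entries[:2]), list(entries[2:])]
    fi, fj = leader_commitment(A, fake)
    best_fake_utility = max(best_fake_utility, B[fi][fj])

print(honest_utility, best_fake_utility)  # prints: 1 2
```

Here honest reporting yields the outcome (row 0, column 0) with true follower payoff 1, while reporting fake payoffs that make column 1 look dominant induces the leader to commit to row 1, where the follower's true payoff is 2. Finding such payoffs in a general (infinite) payoff space, rather than by brute force over a finite grid, is the problem the paper addresses.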
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Game Theory
A Motivational Game-Theoretic Approach for Peer-to-Peer Energy Trading in the Smart Grid · R.I.P. 👻 Ghosted
Computing Resource Allocation in Three-Tier IoT Fog Networks: a Joint Optimization Approach Combining Stackelberg Game and Matching · R.I.P. 👻 Ghosted
Fast Convergence of Regularized Learning in Games · R.I.P. 👻 Ghosted
Computation Peer Offloading for Energy-Constrained Mobile Edge Computing in Small-Cell Networks · R.I.P. 👻 Ghosted
Blockchain Mining Games · R.I.P. 👻 Ghosted
Died the same way · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · R.I.P. 👻 Ghosted
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted