Manipulative Elicitation -- A New Attack on Elections with Incomplete Preferences
November 10, 2017 · Declared Dead · AAAI Conference on Artificial Intelligence
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Palash Dey
arXiv ID
1711.03948
Category
cs.MA: Multiagent Systems
Cross-listed
cs.DS, cs.GT
Citations
6
Venue
AAAI Conference on Artificial Intelligence
Last Checked
2 months ago
Abstract
Lu and Boutilier proposed a novel approach based on "minimax regret" for using classical score-based voting rules in the setting where preferences can be arbitrary partial orders (instead of complete orders) over the set of alternatives. We show here that this approach is vulnerable to a new kind of manipulation that has no counterpart in the classical world of voting, where preferences are complete orders. We call this attack "manipulative elicitation." More specifically, it may be possible to (partially) elicit the preferences of the agents in a way that makes some distinguished alternative win the election, even though that alternative would not win if every preference were elicited completely. More alarmingly, we show that the related computational task is polynomial-time solvable for a large class of voting rules, which includes all scoring rules, maximin, Copeland$^\alpha$ for every $\alpha\in[0,1]$, simplified Bucklin voting rules, etc. We then show that introducing a parameter per pair of alternatives, specifying the minimum number of partial preferences in which that pair must be comparable, makes the related computational task of manipulative elicitation NP-complete for all common voting rules, including a class of scoring rules containing the plurality, $k$-approval, $k$-veto, veto, and Borda voting rules, as well as maximin, Copeland$^\alpha$ for every $\alpha\in[0,1]$, and simplified Bucklin voting rules. Hence, in this work, we uncover a fundamental vulnerability of the minimax-regret-based approach in the partial preference setting and propose a novel way to tackle it.
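The attack the abstract describes can be made concrete with a small sketch. What follows is a minimal, hypothetical illustration (not the paper's algorithm or code) of the minimax-regret rule of Lu and Boutilier instantiated for plurality: under plurality, an adversarial completion of a vote tops $w$ whenever $w$ is maximal in that vote's partial order, which gives a simple closed form for pairwise max regret. The candidate names, the three-voter profile, and the helper functions are all assumptions made for illustration.

```python
# Minimal sketch of manipulative elicitation against the minimax-regret
# (MMR) plurality rule. Hypothetical toy example, not the paper's code.

def maximal(candidates, pairs):
    """Candidates not beaten by any elicited comparison (x, y) = 'x > y'."""
    dominated = {loser for _, loser in pairs}
    return {c for c in candidates if c not in dominated}

def pairwise_max_regret(c, w, profile, candidates):
    """Max over completions of plurality_score(w) - plurality_score(c).

    Per vote, an adversarial completion tops w if w is maximal (+1);
    otherwise it tops some maximal candidate other than c (0); if c is
    the unique maximal candidate, c is forced on top (-1).
    """
    total = 0
    for pairs in profile:
        mx = maximal(candidates, pairs)
        if w in mx:
            total += 1
        elif mx == {c}:
            total -= 1
    return total

def minimax_regret_winner(profile, candidates):
    """Return the candidate minimizing max regret, plus all regrets."""
    mr = {c: max(pairwise_max_regret(c, w, profile, candidates)
                 for w in candidates if w != c)
          for c in candidates}
    return min(mr, key=mr.get), mr

candidates = {"a", "b", "c"}

# Complete elicitation of the true orders: a>b>c, a>b>c, b>c>a.
complete = [
    {("a", "b"), ("a", "c"), ("b", "c")},
    {("a", "b"), ("a", "c"), ("b", "c")},
    {("b", "c"), ("b", "a"), ("c", "a")},
]

# Manipulative elicitation: ask voters 1 and 2 only about a vs. c.
# Every elicited comparison is still truthful; b is just left incomparable.
elicited = [
    {("a", "c")},
    {("a", "c")},
    {("b", "c"), ("b", "a"), ("c", "a")},
]

print(minimax_regret_winner(complete, candidates))  # a wins (max regret -1)
print(minimax_regret_winner(elicited, candidates))  # b wins (max regret 1)
```

Under complete elicitation the minimax-regret winner is a, but eliciting only the truthful comparison a > c from the first two voters makes b the minimax-regret winner, which is exactly the manipulative-elicitation effect described above.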
Similar Papers
In the same crypt – Multiagent Systems
Mean Field Multi-Agent Reinforcement Learning · 👻 Ghosted
A Survey and Critique of Multiagent Deep Reinforcement Learning · 👻 Ghosted
A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity · 👻 Ghosted
Collaborative vehicle routing: a survey · 👻 Ghosted
Deep Reinforcement Learning for Swarm Systems · 👻 Ghosted
Died the same way – 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System