Inherent Trade-Offs in the Fair Determination of Risk Scores

September 19, 2016 · Declared Dead · 🏛 Information Technology Convergence and Services

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan
arXiv ID: 1609.05807
Category: cs.LG (Machine Learning)
Cross-listed: cs.CY, stat.ML
Citations: 2.0K
Venue: Information Technology Convergence and Services
Last Checked: 2 months ago
Abstract
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
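The three conditions the abstract refers to are, informally: calibration within groups, balance for the positive class, and balance for the negative class. A minimal sketch of what the trade-off looks like in practice (the function, variable names, and synthetic data here are illustrative, not taken from the paper): when the two groups have different base rates, perfectly group-calibrated scores necessarily violate the balance conditions.

```python
import numpy as np

def fairness_gaps(scores, labels, groups):
    """Measure violations of the three fairness conditions (illustrative).

    scores: risk scores in [0, 1]; labels: true outcomes (0/1);
    groups: binary group membership. All NumPy arrays of equal length.
    """
    gaps = {}
    # Balance for the positive class: people who truly belong to the
    # positive class should receive the same average score in both groups.
    pos = [scores[(labels == 1) & (groups == g)].mean() for g in (0, 1)]
    gaps["positive_balance"] = abs(pos[0] - pos[1])
    # Balance for the negative class: likewise for true negatives.
    neg = [scores[(labels == 0) & (groups == g)].mean() for g in (0, 1)]
    gaps["negative_balance"] = abs(neg[0] - neg[1])
    # Calibration within groups: among people in group g assigned score s,
    # a fraction s should actually be positive (worst score bin reported).
    worst = 0.0
    for g in (0, 1):
        for s in np.unique(scores[groups == g]):
            mask = (groups == g) & (scores == s)
            worst = max(worst, abs(labels[mask].mean() - s))
    gaps["calibration"] = worst
    return gaps

# Synthetic data with unequal base rates (0.5 vs 0.2) and perfectly
# group-calibrated scores: everyone is scored at their group's base rate.
scores = np.array([0.5] * 10 + [0.2] * 10)
labels = np.array([1] * 5 + [0] * 5 + [1] * 2 + [0] * 8)
groups = np.array([0] * 10 + [1] * 10)
gaps = fairness_gaps(scores, labels, groups)
# Calibration holds exactly here, but both balance gaps are 0.3:
# with unequal base rates, no score satisfies all three conditions.
```

This is the degenerate-scores end of the spectrum; the theorem says the same tension persists for any non-trivial scoring rule whenever base rates differ between groups.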
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt · Machine Learning

Died the same way · 👻 Ghosted