Verifying Individual Fairness in Machine Learning Models

June 21, 2020 · Entered Twilight · 🏛 Conference on Uncertainty in Artificial Intelligence

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner; the underlying age check is sketched below.

Repo contents: .gitignore, README.md, datasets, dev-pkgs, experiments, research
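
The "Old Age" verdict reduces to a single number: the age of the newest commit, measured against a five-year cutoff. Below is a minimal sketch of such a staleness check, assuming a local clone and the plain git CLI; the PWNC Scanner's actual implementation is not shown on this page, so treat the names and the 365.25-day year as illustrative.

    import subprocess
    from datetime import datetime, timezone

    THRESHOLD_YEARS = 5.0  # the "Old Age" cutoff quoted above

    def years_since_last_commit(repo_path):
        """Age, in years, of the newest commit in a local clone (via the git CLI)."""
        out = subprocess.check_output(
            ["git", "-C", repo_path, "log", "-1", "--format=%ct"], text=True)
        last = datetime.fromtimestamp(int(out.strip()), tz=timezone.utc)
        return (datetime.now(timezone.utc) - last).days / 365.25

    # Hypothetical usage against a local clone of the paper's repository.
    if years_since_last_commit("ifv-uai-2020") >= THRESHOLD_YEARS:
        print("Entered Twilight: Old Age")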

Authors: Philips George John, Deepak Vijaykeerthy, Diptikalyan Saha
arXiv ID: 2006.11737
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, stat.ML
Citations: 69
Venue: Conference on Uncertainty in Artificial Intelligence
Repository: https://github.com/philips-george/ifv-uai-2020
Last Checked: 2 months ago
Abstract
We consider the problem of whether a given decision model, working with structured data, satisfies individual fairness. Following the work of Dwork et al., a model is individually biased (or unfair) if there is a pair of valid inputs which are close to each other (according to an appropriate metric) but are treated differently by the model (different class label, or a large difference in output), and it is unbiased (or fair) if no such pair exists. Our objective is to construct verifiers for proving individual fairness of a given model, and we do so by considering appropriate relaxations of the problem. We construct verifiers which are sound but not complete for linear classifiers and kernelized polynomial/radial basis function classifiers. We also report experimental results from evaluating our proposed algorithms on publicly available datasets.
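
For the linear case, the biased-pair search described in the abstract can be written down directly. The sketch below is an illustration, not the authors' verifier: it poses the search as a linear-program feasibility problem, assuming an L∞ closeness metric and a box of valid inputs (stand-ins for the paper's fairness metric and input constraints), with a small hypothetical margin delta to rule out pairs sitting exactly on the decision boundary. A feasible solution is a concrete bias witness; infeasibility certifies fairness under this relaxation. The function name and parameters are assumptions for the example.

    import numpy as np
    from scipy.optimize import linprog

    def find_bias_witness(w, b, lo, hi, eps, delta=1e-6):
        """Look for valid inputs x, x' with ||x - x'||_inf <= eps that the
        linear classifier sign(w.x + b) labels differently.

        Returns a witness pair (x, x') if one exists, else None, which
        certifies individual fairness under this L_inf relaxation.
        Hypothetical helper for illustration, not the paper's algorithm."""
        w = np.asarray(w, dtype=float)
        n = w.size
        I = np.eye(n)
        Z = np.zeros(n)
        A_ub = np.vstack([
            # Closeness:  x - x' <= eps  and  x' - x <= eps (componentwise).
            np.hstack([I, -I]),
            np.hstack([-I, I]),
            # Different labels, with margin delta:
            #   w.x  + b >=  delta   <=>   -w.x  <= b - delta
            #   w.x' + b <= -delta   <=>    w.x' <= -b - delta
            np.hstack([-w, Z]),
            np.hstack([Z, w]),
        ])
        b_ub = np.concatenate([
            np.full(2 * n, eps),
            [b - delta, -b - delta],
        ])
        bounds = list(zip(lo, hi)) * 2  # the valid-input box, for x and x'
        res = linprog(c=np.zeros(2 * n), A_ub=A_ub, b_ub=b_ub,
                      bounds=bounds, method="highs")
        return (res.x[:n], res.x[n:]) if res.success else None

    # Toy check: the second feature acts as a high-weight "protected" attribute.
    pair = find_bias_witness(w=[1.0, 3.0], b=-0.5, lo=[0, 0], hi=[1, 1], eps=0.4)
    print("biased pair:" if pair else "no eps-close pair found", pair)

For the kernelized polynomial and RBF classifiers treated in the paper, the decision function is nonlinear in the inputs, so the search is no longer a linear program; that is where the paper's sound-but-incomplete relaxations come in.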
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning