Algorithmic Risk Assessments Can Alter Human Decision-Making Processes in High-Stakes Government Contexts
December 09, 2020 · Declared Dead · Proc. ACM Hum. Comput. Interact.
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors: Ben Green, Yiling Chen
arXiv ID: 2012.05370
Category: cs.HC (Human-Computer Interaction)
Cross-listed: cs.AI, cs.CY
Citations: 74
Venue: Proc. ACM Hum. Comput. Interact.
Last Checked: 2 months ago
Abstract
Governments are increasingly turning to algorithmic risk assessments when making important decisions, such as whether to release criminal defendants before trial. Policymakers assert that providing public servants with algorithmic advice will improve human risk predictions and thereby lead to better (e.g., fairer) decisions. Yet because many policy decisions require balancing risk-reduction with competing goals, improving the accuracy of predictions may not necessarily improve the quality of decisions. If risk assessments make people more attentive to reducing risk at the expense of other values, these algorithms would diminish the implementation of public policy even as they lead to more accurate predictions. Through an experiment with 2,140 lay participants simulating two high-stakes government contexts, we provide the first direct evidence that risk assessments can systematically alter how people factor risk into their decisions. These shifts counteracted the potential benefits of improved prediction accuracy. In the pretrial setting of our experiment, the risk assessment made participants more sensitive to increases in perceived risk; this shift increased the racial disparity in pretrial detention by 1.9%. In the government loans setting of our experiment, the risk assessment made participants more risk-averse; this shift reduced government aid by 8.3%. These results demonstrate the potential limits and harms of attempts to improve public policy by incorporating predictive algorithms into multifaceted policy decisions. If these observed behaviors occur in practice, presenting risk assessments to public servants would generate unexpected and unjust shifts in public policy without being subject to democratic deliberation or oversight.
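The abstract's central quantity is behavioral: how strongly perceived risk drives a decision, with versus without algorithmic advice. A common way to test for such a shift is a treatment-by-risk interaction in a logistic model of the decision. The sketch below illustrates that idea on purely synthetic data; it is not the authors' analysis code, and the behavioral model and coefficients are assumed for illustration only.

# Hypothetical sketch, not the authors' code: simulate how one might detect
# a shift in decision-makers' sensitivity to risk once algorithmic advice is
# shown. All data, coefficients, and the behavioral model are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2140  # matches the study's sample size; the decisions here are synthetic

risk = rng.uniform(0.0, 1.0, n)   # perceived risk of each simulated case
treated = rng.integers(0, 2, n)   # 1 = participant saw the risk assessment

# Assumed behavior: treated participants weight risk more heavily
# (slope 4.0 vs. 2.5), mimicking "more sensitive to increases in risk".
logit = -1.5 + (2.5 + 1.5 * treated) * risk
detain = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# A treatment-by-risk interaction captures the shift: a positive third
# coefficient means the advice increased sensitivity to perceived risk.
X = np.column_stack([risk, treated, risk * treated])
model = LogisticRegression().fit(X, detain)
print("coef(risk), coef(treated), coef(risk x treated):", model.coef_[0])

Under this setup, a positive interaction coefficient corresponds to the paper's pretrial finding: participants shown the risk assessment respond more sharply to the same increase in perceived risk.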
Similar Papers
In the same crypt · Human-Computer Interaction

Improving fairness in machine learning systems: What do industry practitioners need? · R.I.P. 👻 Ghosted
Identifying Stable Patterns over Time for Emotion Recognition from EEG · R.I.P. 👻 Ghosted
Questioning the AI: Informing Design Practices for Explainable AI User Experiences · R.I.P. 👻 Ghosted
Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities · R.I.P. 👻 Ghosted
Educational data mining and learning analytics: An updated survey · R.I.P. 👻 Ghosted
Died the same way · 👻 Ghosted

Language Models are Few-Shot Learners · R.I.P. 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · R.I.P. 👻 Ghosted
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted