Machine Learning Informed Decision-Making with Interpreted Model's Outputs

A Field Intervention

Abstract

Despite having laid the theoretical groundwork for explainable systems decades ago, information systems scholars have given little attention to recent developments in human-in-the-loop decision-making for real-world problems. We adopt a sociotechnical systems lens and employ a mixed-methods analysis of a field intervention to study machine learning-informed decision-making with interpreted models' outputs. Contrary to theory, our results suggest a small positive effect of explanations on confidence in the final decision and a negligible effect on decision quality. We uncover complex, dynamic interactions between humans and algorithms, and the interplay of algorithmic aversion, trust, experts' heuristics, and changing uncertainty-resolving conditions.