Closer or even farther from fairness: An assessment of whether fairness toolkits constrain practitioners with regard to algorithmic harms

Abstract

To encourage ethical thinking in Machine Learning (ML) development, fairness researchers have created tools to assess and mitigate unfair outcomes. Despite these efforts, however, algorithmic harms extend beyond what such toolkits can currently measure. Through 30 semi-structured interviews, we investigated whether, when using these toolkits in practice, data scientists are constrained to thinking only about issues the toolkits can tackle. The results of a comparative assessment of approaches with and without a toolkit indicate that, although toolkits can be highly effective, they should not replace education about sources of harm and can even have hazardous consequences when used improperly. We found that while fairness toolkits increase practitioners' awareness of several specific sources of harm, such as questionable attributes or data sampling techniques, their greater power lies in fostering discussions about an ML system's propensity to treat individuals unfairly. In contrast, we observed that these toolkits do not significantly help with the data documentation process, and, from observing our study participants, we also infer a risk of practitioners blindly evaluating and optimizing for undesired outcomes as a result of choosing metrics and mitigations based on unfounded or incomplete assumptions. This work supports the future improvement of toolkits by providing a breakdown of practitioner perspectives on various sources of harm and reasoning about those that are frequently overlooked.