As Machine Learning models are applied to an ever wider range of fields, the potential impact these algorithms can have on people's lives is increasing. In a growing number of applications, such as criminal justice, financial assessments, and job and college applications, the data points are people's profiles. In the presence of such sensitive attributes, the risk of algorithmic predictions leading to discrimination must therefore be carefully addressed. Among the state-of-the-art methods that take fairness into account when tackling these problems, this work focuses on path-specific causality-based methods: causality-based fairness metrics are recognized in the literature for capturing unfairness in Machine Learning models well. Selected state-of-the-art causality-based methods and metrics are compared, with emphasis on the methodological and experimental differences between individual- and population-level fairness approaches. Building on these results, a novel method, Conditional Path-Specific Effect (CPSE), is proposed with the goal of bridging the two approaches by leveraging the properties of conditional Path-Specific Effects. CPSE is evaluated against other state-of-the-art methods on both simulated and empirical datasets. The results suggest that CPSE has high potential for detecting and correcting unfairness.
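
For orientation, the following is a minimal sketch of the path-specific effect that such methods build on; the abstract does not state CPSE's exact definition, so the standard nested-counterfactual formulation of Avin, Shpitser, and Pearl (2005) is assumed here:

% Sketch, not the paper's own definition: the path-specific effect (PSE)
% of changing a sensitive attribute A from a to a' on an outcome Y,
% transmitted only along a chosen set of causal paths \pi.
\[
  \mathrm{PSE}_{\pi}(a, a')
  \;=\;
  \mathbb{E}\!\left[\, Y\!\left(a' \,\middle|\, \pi;\; a \,\middle|\, \bar{\pi}\right) \right]
  \;-\;
  \mathbb{E}\!\left[\, Y(a) \right]
\]
% Here $Y(a' \mid \pi;\, a \mid \bar{\pi})$ denotes the counterfactual outcome
% when the change $a \to a'$ propagates along the paths in $\pi$, while all
% remaining paths $\bar{\pi}$ behave as under $A = a$.

One natural reading of the "conditional" variant, assumed rather than taken from the abstract, is that the marginal expectations above are replaced by expectations conditional on an observed context $C = c$, so that the measure interpolates between a population-level effect (no conditioning) and an individual-level one (conditioning on all observed attributes).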