
P. Altmeyer

12 records found

Central banks communicate their monetary policy plans to the public through meeting minutes or transcripts. These communications can have immense effects on markets and are often the subjects of studies in the financial literature. The recent advancements in Natural Language Proc ...
In recent years, the need for explainable artificial intelligence (XAI) has become increasingly important as complex black-box models are used in critical applications. While many methods have been developed to interpret these models, there is also potential in enhancing the mode ...
Counterfactual explanations aim to provide explanations for opaque machine learning models. They can be applied to algorithmic recourse, which is concerned with helping individuals in the real world overturn undesirable algorithmic decisions. Not all generated points are equally f ...
Adversarial Training has emerged as the most reliable technique for making neural networks robust to gradient-based adversarial perturbations of input data. Besides improving model robustness, preliminary evidence suggests an interesting consequence of adversarial training -- increa ...
Counterfactual Explanations (CE) are essential for understanding the predictions of black-box models by suggesting minimal changes to input features that would alter the output. Despite their importance in Explainable AI (XAI), there is a lack of standardized metrics to assess th ...
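The core idea behind counterfactual explanations, as described above, is finding a minimal change to the input features that flips a model's output. A minimal sketch of this idea, using a hypothetical fixed linear classifier and a Wachter-style objective (prediction loss plus a weighted distance penalty); all weights and parameters here are illustrative assumptions, not taken from any of the listed papers:

```python
import numpy as np

# Hypothetical linear classifier: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

def counterfactual(x, target=1.0, lam=0.1, lr=0.1, steps=500):
    """Gradient search for a nearby point classified as `target`,
    minimising (p - target)^2 + lam * ||x' - x||^2."""
    xp = x.copy()
    for _ in range(steps):
        p = predict(xp)
        # d/dxp of (p - target)^2, via the sigmoid derivative p(1-p)
        grad_pred = 2 * (p - target) * p * (1 - p) * w
        # d/dxp of lam * ||xp - x||^2 keeps the change minimal
        grad_dist = 2 * lam * (xp - x)
        xp -= lr * (grad_pred + grad_dist)
    return xp

x = np.array([-1.0, 1.0])   # originally classified as the negative class
xcf = counterfactual(x)     # nearby point pushed across the decision boundary
```

The `lam` parameter trades off proximity to the original point against confidence in the target class; the standardized quality metrics the abstract refers to would score properties like this proximity, plus validity and plausibility.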
Counterfactual explanations (CEs) can be used to gain useful insights into the behaviour of opaque classification models, allowing users to make an informed decision when trusting such systems. Assuming the CEs of a model are faithful (they well represent the inner workings of th ...

A Study on Counterfactual Explanations

Investigating the impact of inter-class distance and data imbalance

Counterfactual explanations (CEs) are emerging as a crucial tool in Explainable AI (XAI) for understanding model decisions. This research investigates the impact of various factors on the quality of CEs generated for classification tasks. We explore how inter-class distance, data ...

Finding Recourse for Algorithmic Recourse

Actionable Recommendations in Real-World Contexts

The aim of algorithmic recourse (AR) is generally understood to be the provision of "actionable" recommendations to individuals affected by algorithmic decision-making systems, so as to give them the capacity to take actions that would guarantee more desirable outc ...
The evaluation metrics commonly used for machine learning models often fail to adequately reveal the inner workings of the models, which is particularly necessary in critical fields like healthcare. Explainable AI techniques, such as counterfactual explanations, offer a way to ...
Algorithmic recourse aims to provide individuals affected by a negative classification outcome with actions which, if applied, would flip this outcome. Various approaches to the generation of recourse have been proposed in the literature; these are typically assessed on statistic ...
Employing counterfactual explanations in a recourse process gives a positive outcome to an individual, but it also shifts their corresponding data point. For systems where models are updated frequently, a change might be seen when recourse is applied, and after multiple rounds, s ...
Machine learning classifiers have become a household tool for banks, companies, and government institutes for automated decision-making. In order to help explain why a person was classified a certain way, a solution was proposed that could generate counterfactual explanatio ...