G. Pozzi
12 records found
Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristic
...
Machine learning for mental health diagnosis
Tackling contributory injustice and epistemic oppression
In their contribution, Ugar and Malele shed light on an often overlooked but crucial aspect of the ethical development of machine learning (ML) systems to support the diagnosis of mental health disorders. The authors focus on the danger of misdiagnosing
...
From ethics to epistemology and back again
Informativeness and epistemic injustice in explanatory medical machine learning
In this paper, we discuss epistemic and ethical concerns brought about by machine learning (ML) systems implemented in medicine. We begin by fleshing out the logic underlying a common approach in the specialized literature (which we call the informativeness account). We maintain
...
The advancement of AI-based technologies, such as machine learning (ML) systems, for implementation in healthcare is progressing rapidly. Since these systems are used to support healthcare professionals in crucial medical practices, their role in medical decision-making needs to
...
The social aspects of causality in medicine and healthcare have been emphasized in recent debates in the philosophy of science as crucial factors that need to be considered to enable, among others, appropriate interventions in public health. Therefore, it seems central to recognize
...
The principle of trust has been placed at the centre as an attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural lev
...
Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of respo
...
Automated opioid risk scores
A case for machine learning-induced epistemic injustice in healthcare
Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions conn
...
Further remarks on testimonial injustice in medical machine learning
A response to commentaries
In my paper entitled 'Testimonial injustice in medical machine learning',1 I argued that machine learning (ML)-based Prescription Drug Monitoring Programmes (PDMPs) could infringe on patients' epistemic and moral standing, inflicting a testimonial injustice.2 I am very grateful for
...