Introduction to meaningful human control of artificially intelligent systems


Abstract

Artificial intelligence (AI) technology aims to replicate human intelligence and behaviour and, more generally, to solve problems while operating with relatively little dependence on direct human control. This implies that a high level of autonomous capability is desired in these systems. In recent times, the same rapidly growing capabilities of AI systems that can enable breakthroughs in many services and sectors, such as transportation and healthcare, have also raised increasing concerns that these systems might spin “out of control”: out of the control of individual users, of developers or manufacturers, of non-users, of other stakeholders, and of society at large. For instance, automated vehicles could behave unpredictably and create risks for their passengers and other road users alike. Decision support systems, such as AI-based recruitment systems, could steer human decision-making in ways that are undesirable and possibly harmful. AI-supported medical tools could amplify existing biases in patient diagnosis or treatment. War drones could engage targets without the explicit and full consent of a responsible human agent. From a slightly different angle, when harmful events involve these increasingly autonomous AI systems, the clear and unambiguous attribution of moral responsibility and legal liability to a person or group may become complicated. For example, should the organization employing the AI system have prevented the system errors leading to the harmful event? Should the system have been designed or deployed differently? Should the end user have intervened? Should legislation have prevented its use in the first place? AI systems, especially when designed to function with high levels of autonomous capability, are prone to cause what some have called “responsibility gaps”: situations in which it is inherently difficult to attribute responsibility to human agents for undesired events (Santoni de Sio & Mecacci, 2021).

Files

9781802204131-book-part-978180... (pdf, 0.391 MB)

File under embargo until 20-01-2025