The Human in Command
An exploratory study into human moral autonomy of Behavioural Artificial Intelligence Technology
Abstract
The accelerating development of algorithms has a disruptive effect in many domains, including the complex decision-making of knowledge workers. Experts can manage difficult but repetitive decisions with software technologies such as a Decision Support System (DSS). A DSS is used to monitor decisions, gain additional insights, and improve decisions over time. These systems are characterised by their supportive role in assisting human decision-makers. To answer the pressing demand for transparency in DSSs, Councyl developed Behavioural Artificial Intelligence Technology (BAIT), a DSS that supports experts in making decisions. However, algorithms like BAIT may affect the autonomy of experts and their decisions in numerous ways. This thesis studies the human moral autonomy (HMA) of end-users in the context of BAIT. We do this by measuring the perceptions of end-users. The product arising from this study is the HMA Survey.