Machine Learning and Counter-Terrorism

Ethics, Efficacy, and Meaningful Human Control

Abstract

Machine Learning (ML) is reaching the peak of a hype cycle. If you can think of a personal or grand societal challenge, then ML is being proposed to solve it. For example, ML is purported to assist with the current global pandemic by predicting COVID-19 outbreaks and identifying carriers (see, e.g., Ardabili et al. 2020). ML can make our buildings and energy grids more efficient, helping to tackle climate change (see, e.g., Rolnick et al. 2019). ML is even used to tackle the problem of ethics itself, by creating algorithms to solve ethical dilemmas: humans, it is argued, are simply not smart enough to resolve such dilemmas, but ML can use its vast processing power to tell us how to be ‘good’, just as it outplays us at Chess or Go (Metz 2016). States have taken notice of this new power and are attempting to use ML to solve their own problems, including their security problems and, of particular importance in this thesis, the problem of countering terrorism. Counter-terrorism practices include border checks, intelligence collection, and waging war against terrorist armed forces. These practices are all being ‘enhanced’ with ML-powered tools (Saunders et al. 2016; Kendrick 2019; Ganor 2019), including bulk data collection and analysis, mass surveillance, and autonomous weapons, among others. This is concerning. Not because the state should never use such power to enhance the services it provides, and not because AI is in principle unethical to use, like land mines or chemical weapons. It is concerning because little has been worked out about how to use this tool in a way that is compatible with liberal democratic values. States are in the dark about what these tools can and should do.
