As digitalization advances, cybersecurity departments are increasingly overwhelmed by alerts and potential threats, leading to decision fatigue among security analysts. In response, many are adopting Artificial Intelligence (AI) to automate routine tasks, prioritize alerts, and accelerate incident response. However, AI adoption in cybersecurity lags behind its adoption by threat actors, pointing to underlying challenges. This thesis explores these challenges through a case study of a large European bank's cybersecurity department, using a sociotechnical systems (STS) approach. Semi-structured interviews with security analysts, data scientists, and leadership revealed four key themes: (1) mixed perceptions of AI, (2) the influence of organizational factors on AI adoption, (3) the importance of interdisciplinary collaboration, and (4) the critical role of trust in AI systems. While AI offers potential benefits like improved threat detection and reduced decision fatigue, challenges such as data quality issues and a lack of transparency persist. Organizational readiness, leadership support, and effective change management are crucial for successful AI integration, and building trust through transparency and active user involvement is essential for adoption. The thesis proposes a conceptual model that addresses these challenges by integrating technical and social factors, and it offers practical recommendations for enhancing AI adoption in cybersecurity. Future research should expand on these findings through diverse case studies, human-centered AI frameworks, and longitudinal studies to better understand AI's impact on trust and collaboration in cybersecurity.