Continuous developments in the field of AI have given rise to new applications across various industries, including cybersecurity. AI's ability to quickly process large amounts of data makes it a valuable tool for protecting IT systems.
The development and deployment of new AI systems in the European Union (EU) is now regulated by the EU's AI Act, a landmark regulatory framework designed to promote responsible AI development and usage. The AI Act imposes requirements on the development and use of AI systems, particularly with regard to their explainability and traceability. Further developments in industry standards, such as the new ISO 42001 norm, also aim to provide guidance for the use and development of AI systems. Both the regulation and these industry standards require safeguards that enhance transparency and accountability, ensuring meaningful insight into AI-driven decision-making as well as effective mechanisms for human oversight.
This research seeks to consolidate legal and industry standards into a unified framework that offers clear guidance on the implementation of AI-driven cybersecurity solutions in the public sector. The framework aims to outline the current legislative boundaries, including how regulations such as the GDPR and the EU AI Act affect the design, deployment, and monitoring of AI systems. It also proposes methods to comply with current regulations, such as best practices and industry standards, as well as tools from the field of Explainable AI (XAI) that can be implemented to meet current regulatory requirements.
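As an illustration of the kind of XAI tooling referred to above, the sketch below shows how per-alert feature attributions could be produced and logged for a cybersecurity anomaly detector. It is a minimal example, assuming the open-source shap library (and its TreeExplainer support for scikit-learn's IsolationForest); the feature names and telemetry data are hypothetical and serve only to demonstrate the approach, not any particular production system.

    # Illustrative sketch: attaching per-alert explanations to an anomaly
    # detector with SHAP. Feature names and data are hypothetical.
    import numpy as np
    import shap
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    feature_names = ["bytes_sent", "bytes_received", "failed_logins", "distinct_ports"]

    # Hypothetical network telemetry: mostly benign traffic plus a few outliers.
    benign = rng.normal(loc=[500, 800, 1, 3], scale=[100, 150, 1, 2], size=(500, 4))
    anomalous = rng.normal(loc=[50_000, 200, 40, 60], scale=[5_000, 50, 5, 10], size=(5, 4))
    X = np.vstack([benign, anomalous])

    model = IsolationForest(random_state=0).fit(X)

    # TreeExplainer decomposes each anomaly score into per-feature contributions,
    # giving a rationale that can be logged alongside every alert.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    for idx in np.where(model.predict(X) == -1)[0]:
        contributions = sorted(zip(feature_names, shap_values[idx]),
                               key=lambda p: abs(p[1]), reverse=True)
        top = ", ".join(f"{name}: {value:+.3f}" for name, value in contributions[:2])
        print(f"sample {idx} flagged; top contributing features -> {top}")

Logging such attributions alongside each alert could provide the kind of traceable, human-reviewable rationale for AI-driven decisions that the transparency and oversight requirements discussed above call for.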
These methods were combined to answer the main research question:
How can compliance with current EU regulations be ensured in AI-powered cybersecurity monitoring solutions for public organizations?
An exploratory literature study was conducted to examine the current legal requirements regarding the use of AI systems, as well as any additional requirements for their application in the public sector. This was expanded upon through interviews with subject matter experts at DTACT, as well as external experts, to incorporate cybersecurity, regulatory, and public sector perspectives. A case study was then conducted, using the constructed framework, on the cybersecurity solution currently developed and used at DTACT. This is followed by a discussion of the public sector implementation of AI solutions for cybersecurity, to gather further insights into the limitations of the framework and to better align it with the needs and requirements of DTACT's current client base.