Implementation Attacks Powered by Artificial Intelligence

Abstract

In an era of increasing reliance on digital technology, securing embedded and interconnected devices, such as smart cards or Internet of Things (IoT) devices, against emerging threats is crucial, highlighting the need for advanced security measures. Cryptographic algorithms, essential for secure communication, data storage, and transaction integrity, are commonly employed to build secure systems. However, the practical implementation of these algorithms in software and hardware introduces vulnerabilities, exposing sensitive information to risk. Implementation attacks, such as fault injection (FI) and side-channel analysis (SCA), are a category of security threats that exploit vulnerabilities arising during the execution of cryptographic algorithms.

Security evaluation and certification assess the product’s security features against industry best practices and regulatory standards. These processes aim to independently verify the claims made about the product’s security, fostering and maintaining trust among users. Given the evolving landscape of security threats and increasing security concerns, the need for more efficient and resource-effective security evaluations has become evident. Fault injection and side-channel analysis are commonly conducted as part of this assessment, and recent studies have demonstrated that integrating artificial intelligence (AI) methods can significantly enhance their performance. Moreover, this integration can provide more automated and optimized attacks for security evaluation.

This thesis aims to advance AI-based implementation attacks by investigating current AI frameworks, with the objective of improving the efficiency and effectiveness of these attacks across various scenarios. We target specific challenges within AI-based fault injection (AIFI) and deep learning-based SCA (DLSCA), addressing gaps in the current methodologies and proposing solutions that significantly impact their performance and efficiency. We focus on hyperparameter tuning of the utilized AI methods, portability of the attacks, and alternative evaluation metrics within the AI frameworks.

Hyperparameter tuning is critical but can be time-intensive. By investigating specific hyperparameters, we identify those most influential on performance, guiding a more efficient tuning process. This thesis focuses on initialization methods, revealing that no initialization method is universally optimal. Instead, we offer a strategic approach to selecting initialization methods that leads to improved and more reliable performance in specific scenarios.

Next, we provide practical AI-based solutions to enhance the portability of FI parameter search results across different samples of the same target, and of SCA profiling models across different public datasets (targets). This approach makes security evaluation more efficient by reusing data and findings to expedite evaluations on other targets. Furthermore, it enables future efforts to develop universal methods that help standardize AI-based implementation attacks for security evaluation.

Lastly, we revisit and refine evaluation metrics within AI-based implementation attacks, proposing new metrics better aligned with the considered objectives. We present new metrics for evaluating the performance of AI-based FI parameter search in finding distant vulnerable regions of the target, alongside algorithms designed for this objective. We also improve the training process of DLSCA by introducing a training scheme that redefines the labels, together with a metric that evaluates the generality of the profiling model, enabling better assessment for early stopping and model tuning.
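To make the "distant vulnerable regions" objective concrete, the following is a purely illustrative sketch, not the thesis's actual algorithm: a random FI parameter search over hypothetical (glitch voltage, glitch delay) settings whose acceptance rule rewards hits that lie far from vulnerable regions already found. The target response, parameter ranges, and distance threshold are all invented for the example; in practice the response would come from a real injection campaign.

```python
import random

def simulated_response(voltage, delay):
    """Stand-in for a real fault injection campaign on hardware.
    Assumes two disjoint vulnerable regions (purely illustrative)."""
    return (4.0 < voltage < 5.0 and 10 < delay < 20) or \
           (1.0 < voltage < 1.5 and 80 < delay < 90)

def distance(p, q):
    """Euclidean distance between two parameter sets."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def search(iterations=2000, seed=0):
    """Random search that records only hits far from previous hits,
    steering the campaign toward distant vulnerable regions."""
    rng = random.Random(seed)
    found = []  # vulnerable parameter sets discovered so far
    for _ in range(iterations):
        cand = (rng.uniform(0.5, 5.5), rng.uniform(0, 100))
        if not simulated_response(*cand):
            continue
        # Distance-aware acceptance: keep the candidate if it is the
        # first hit or lies far from every previously recorded hit.
        if not found or min(distance(cand, f) for f in found) > 5.0:
            found.append(cand)
    return found
```

A metric for this objective could then score a search run by, for example, the number of recorded hits and their pairwise spread, rather than by raw hit count alone.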

Through its exploration of AI-based implementation attacks, this thesis offers valuable insights and practical solutions that advance the field. By improving the efficiency and effectiveness of AI-based implementation attacks, this research not only aids security analysts but also lays a foundation for future efforts to standardize these attacks for security evaluation.