
Still Making Noise

Improving Deep-Learning-Based Side-Channel Analysis

Editor’s notes: Side-channel attacks have been undermining cryptosystems for almost three decades. Advances in machine learning techniques have shown great promise in improving the performance and efficiency of side-channel attacks, even on systems with countermeasures. This arti ...

It’s a Kind of Magic

A Novel Conditional GAN Framework for Efficient Profiling Side-Channel Analysis

Profiling side-channel analysis (SCA) is widely used to evaluate the security of cryptographic implementations under worst-case attack scenarios. This method assumes a strong adversary with a fully controlled device clone, known as a profiling device, with full access to the inte ...
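
The worst-case profiling model this abstract describes (train on a fully controlled clone device, then attack the target) can be illustrated with a minimal sketch. The classifier, the array names, and the S-box argument below are placeholders, and this is plain profiled SCA, not the conditional-GAN framework the paper proposes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def profile_and_rank(profiling_traces, profiling_labels, attack_traces, plaintexts, sbox):
    """Profiling phase: fit a classifier on labeled traces from the clone device.
    Attack phase: accumulate log-probabilities of each key-byte guess on the target.
    All inputs are hypothetical numpy arrays; `sbox` is the 256-entry AES S-box."""
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200)
    model.fit(profiling_traces, profiling_labels)
    raw = model.predict_proba(attack_traces)
    probs = np.full((len(attack_traces), 256), 1e-36)
    probs[:, model.classes_] = raw                 # map columns back to the 0..255 label space
    scores = np.zeros(256)
    for key_guess in range(256):
        hyp = sbox[plaintexts ^ key_guess]         # hypothetical leaking intermediate value
        scores[key_guess] = np.sum(np.log(probs[np.arange(len(attack_traces)), hyp]))
    return np.argsort(scores)[::-1]                # key guesses, most likely first
```
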
The use of deep learning-based side-channel analysis is an effective way of performing profiling attacks on power and electromagnetic leakages, even against targets protected with countermeasures. While many research articles have reported successful results, they typically focus ...
Recently, attackers have targeted machine learning systems, introducing various attacks. The backdoor attack is popular in this field and is usually realized through data poisoning. To the best of our knowledge, we are the first to investigate whether backdoor attacks remain ...

Beyond PhantomSponges

Enhancing Sponge Attack on Object Detection Models

Given today's ongoing deployment of deep learning models, ensuring their security against adversarial attacks has become paramount. This paper introduces an enhanced version of the PhantomSponges attack by Shapira et al. The attack exploits the non-maximum suppression (NMS) algor ...
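
The latency target of such sponge attacks is the greedy NMS loop itself; the plain-numpy sketch below (not tied to any particular detector) shows why its cost depends on how many candidate boxes survive each suppression round, which is exactly what a sponge input maximizes.

```python
import numpy as np

def iou_one_vs_many(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_nms(boxes, scores, iou_thr=0.5):
    """Standard greedy NMS. Each round suppresses only boxes that overlap the
    current best one, so many confident-enough boxes with little mutual overlap
    keep the loop (and the pairwise IoU work) running for a long time."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou_one_vs_many(boxes[i], boxes[rest]) <= iou_thr]
    return keep
```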

I Choose You

Automated Hyperparameter Tuning for Deep Learning-based Side-channel Analysis

Today, deep learning-based side-channel analysis represents a widely researched topic, with numerous results indicating the advantages of such an approach. Indeed, breaking protected implementations while not requiring complex feature selection made deep learning a preferred ...
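
As an illustration of what automating these choices can look like, here is a minimal random-search sketch. The search space, the `train_fn` callback, and the "lower score is better" convention are assumptions made for the example, not the tuning method of the paper.

```python
import random

# Hypothetical search space for an MLP-style SCA model; real ranges would differ.
SEARCH_SPACE = {
    "layers": [2, 3, 4],
    "units": [64, 128, 256],
    "learning_rate": [1e-4, 5e-4, 1e-3],
    "batch_size": [128, 256, 512],
}

def random_search(train_fn, n_trials=20, seed=0):
    """Sample configurations at random and keep the best one. `train_fn(config)`
    is assumed to train a model and return a validation metric such as guessing
    entropy, where lower is better."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_fn(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```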

Unveiling the Threat

Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks

Graph neural networks (GNNs) have gained significant popularity as powerful deep learning methods for processing graph data. However, centralized GNNs face challenges in data-sensitive scenarios due to privacy concerns and regulatory restrictions. Federated learning has emerged a ...

MUDGUARD

Taming Malicious Majorities in Federated Learning using Privacy-preserving Byzantine-robust Clustering

Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate. Most existing systems, however, are only robust when most of the clients are honest. FLTrust (NDSS '21) and Zeno++ ...
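
A rough idea of clustering-based aggregation, stripped of the privacy protection and the malicious-majority handling the paper adds, can be sketched as follows; the DBSCAN parameters are illustrative only.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_and_aggregate(updates, eps=0.3, min_samples=2):
    """Group client updates by pairwise cosine distance and average each group
    separately, so one group's (possibly malicious) updates do not contaminate
    the others. `updates` is an (n_clients, n_params) array of flattened model
    updates; the real system performs this under privacy-preserving computation."""
    labels = DBSCAN(eps=eps, min_samples=min_samples, metric="cosine").fit_predict(updates)
    aggregates = {}
    for lbl in set(labels):
        if lbl == -1:      # DBSCAN noise points: treated as unmatched outliers here
            continue
        aggregates[lbl] = updates[labels == lbl].mean(axis=0)
    return labels, aggregates
```
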
Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. Building a powerful GNN model is not a trivial task, as it requires a large amount of training data, powerful computing resources, and human expertise. Moreover, with the developm ...

SoK

Deep Learning-based Physical Side-channel Analysis

Side-channel attacks have represented a realistic and serious threat to the security of embedded devices for almost three decades. A variety of attacks, and of the targets they can be applied to, have been introduced, and while the area of side-channel attacks and their mitigation is ver ...

The Power of Bamboo

On the Post-Compromise Security for Searchable Symmetric Encryption

Dynamic searchable symmetric encryption (DSSE) enables users to delegate the keyword search over dynamically updated encrypted databases to an honest-but-curious server without losing keyword privacy. This paper studies a new and practical security risk to DSSE, namely, secret ke ...
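
For readers unfamiliar with the setting, a deliberately toy (and not-for-real-use) searchable index conveys what is being delegated to the honest-but-curious server; it omits updates, forward/backward privacy, and the key-compromise issue the paper addresses. The names and construction are illustrative, not the paper's scheme.

```python
import hmac, hashlib, os

def prf(key, msg):
    """Keyed PRF instantiated with HMAC-SHA256."""
    return hmac.new(key, msg, hashlib.sha256).digest()

class ToySSE:
    """Minimal static searchable index: the server stores PRF-derived labels and
    masked record ids, and a search reveals only which label a query touches."""
    def __init__(self):
        self.k_label = os.urandom(32)     # client key for deriving search tokens
        self.k_mask = os.urandom(32)      # client key for masking stored ids
        self.server_index = {}            # what the server holds

    def add(self, keyword, doc_id):
        label = prf(self.k_label, keyword.encode())
        counter = len(self.server_index.get(label, []))
        mask = prf(self.k_mask, label + counter.to_bytes(4, "big"))[:4]
        ct = bytes(a ^ b for a, b in zip(doc_id.to_bytes(4, "big"), mask))
        self.server_index.setdefault(label, []).append(ct)

    def search(self, keyword):
        label = prf(self.k_label, keyword.encode())   # the token sent to the server
        out = []
        for i, ct in enumerate(self.server_index.get(label, [])):
            mask = prf(self.k_mask, label + i.to_bytes(4, "big"))[:4]
            out.append(int.from_bytes(bytes(a ^ b for a, b in zip(ct, mask)), "big"))
        return out
```
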
One of the Round 3 Finalists in the NIST post-quantum cryptography call is the Classic McEliece cryptosystem. Although it is one of the most secure cryptosystems, the large size of its public key remains a practical limitation. In this work, we propose a McEliece-type cryptosyste ...
We derive necessary conditions related to the notions, in additive combinatorics, of Sidon sets and sum-free sets, on those exponents d ∈ Z/(2^n − 1)Z, which are such that F(x) = x^d is an APN function over F_{2^n} (which is an important cryptographic ...
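
The APN property referred to here can be checked directly from its definition for small fields: F is APN when every nonzero derivative x ↦ F(x + a) + F(x) takes each value at most twice. The brute-force checker below, with an assumed irreducible reduction polynomial, only illustrates that definition, not the paper's Sidon-set/sum-free-set conditions.

```python
def gf2n_mul(a, b, n, poly):
    """Multiply in F_{2^n}; `poly` is the reduction polynomial as a bitmask
    including the x^n term."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:
            a ^= poly
    return r

def gf2n_pow(x, d, n, poly):
    """Square-and-multiply exponentiation in F_{2^n}."""
    r, base = 1, x
    while d:
        if d & 1:
            r = gf2n_mul(r, base, n, poly)
        base = gf2n_mul(base, base, n, poly)
        d >>= 1
    return r

def is_apn_power(d, n, poly):
    """Check the APN definition directly: for every a != 0, the derivative
    x -> (x + a)^d + x^d (addition is XOR) hits each value at most twice."""
    F = [gf2n_pow(x, d, n, poly) for x in range(1 << n)]
    for a in range(1, 1 << n):
        counts = {}
        for x in range(1 << n):
            v = F[x ^ a] ^ F[x]
            counts[v] = counts.get(v, 0) + 1
            if counts[v] > 2:
                return False
    return True

# Example: x^3 (a Gold function) is APN over F_{2^5}, here with poly x^5 + x^2 + 1.
print(is_apn_power(3, 5, 0b100101))
```
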
SHA-3 is considered to be one of the most secure standardized hash functions. It relies on the Keccak-f[1600] permutation, which operates on an internal state of 1600 bits, mostly represented as a 5 × 5 × 64-bit matrix. While existing implementations process the state sequentia ...
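
The state layout mentioned here follows the FIPS 202 convention A[x, y, z] = S[64(5y + x) + z]; the small conversion helpers below make the lane view explicit. They are purely illustrative and unrelated to the paper's implementation strategy.

```python
W = 64  # lane width for Keccak-f[1600]

def lanes_from_bits(bits):
    """bits: sequence of 1600 ints (0/1) -> dict {(x, y): 64-bit lane as int},
    using the FIPS 202 ordering A[x, y, z] = S[64 * (5 * y + x) + z]."""
    assert len(bits) == 5 * 5 * W
    lanes = {}
    for y in range(5):
        for x in range(5):
            lane = 0
            for z in range(W):
                lane |= bits[W * (5 * y + x) + z] << z
            lanes[(x, y)] = lane
    return lanes

def bits_from_lanes(lanes):
    """Inverse mapping back to the flat 1600-bit list."""
    bits = [0] * (5 * 5 * W)
    for (x, y), lane in lanes.items():
        for z in range(W):
            bits[W * (5 * y + x) + z] = (lane >> z) & 1
    return bits
```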

Going in Style

Audio Backdoors Through Stylistic Transformations

This work explores stylistic triggers for backdoor attacks in the audio domain: dynamic transformations of malicious samples through guitar effects. We first formalize stylistic triggers – currently missing in the literature. Second, we explore how to develop stylistic triggers i ...
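
One concrete instance of such a stylistic trigger is a hard-clipping distortion applied to the whole waveform, sketched below with illustrative parameters; the paper formalizes and evaluates a range of guitar effects rather than this particular one.

```python
import numpy as np

def distortion_trigger(waveform, gain=8.0, threshold=0.5):
    """Apply a simple hard-clipping 'guitar distortion' to a mono waveform in
    [-1, 1]. A stylistic trigger changes the character of the whole signal
    instead of inserting a localized patch. Parameters are illustrative."""
    boosted = np.clip(waveform * gain, -threshold, threshold)
    return boosted / threshold        # renormalize to [-1, 1]

def poison(dataset, target_label, rate=0.05, rng=None):
    """Turn a fraction of (waveform, label) pairs into triggered samples that
    all carry the attacker's target label."""
    if rng is None:
        rng = np.random.default_rng(0)
    poisoned = []
    for waveform, label in dataset:
        if rng.random() < rate:
            poisoned.append((distortion_trigger(waveform), target_label))
        else:
            poisoned.append((waveform, label))
    return poisoned
```
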
Recently, researchers have successfully employed Graph Neural Networks (GNNs) to build enhanced recommender systems due to their capability to learn patterns from the interaction between involved entities. In addition, previous studies have investigated federated learning as the ...

Backdoor Pony

Evaluating backdoor attacks and defenses in different domains

Outsourced training and crowdsourced datasets lead to a new threat for deep learning models: the backdoor attack. In this attack, the adversary inserts a secret functionality in a model, activated through malicious inputs. Backdoor attacks represent an active research area due to ...
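
The classic way to realize the attack described above is BadNets-style data poisoning: stamp a small trigger onto a fraction of the training samples and relabel them with the attacker's target class. A minimal sketch follows; array shapes and parameters are illustrative and not taken from Backdoor Pony.

```python
import numpy as np

def add_patch_trigger(image, patch_value=1.0, size=3):
    """Stamp a small square trigger into the bottom-right corner of an image of
    shape (H, W) or (H, W, C)."""
    triggered = image.copy()
    triggered[-size:, -size:] = patch_value
    return triggered

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Replace a random fraction of samples with triggered copies labeled as the
    attacker's target class; the rest stays untouched, so the model keeps its
    clean accuracy while learning the trigger -> target rule."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_patch_trigger(images[i])
        labels[i] = target_label
    return images, labels
```
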
Federated Learning (FL) has become very popular since it enables clients to train a joint model collaboratively without sharing their private data. However, FL has been shown to be susceptible to backdoor and inference attacks. While in the former, the adversary injects manipulat ...
Backdoor attacks have been demonstrated as a security threat for machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the backdoored model will perform abnormally on inputs with predefined backdoor triggers and sti ...
Evolutionary algorithms have been successfully applied to attack Physically Unclonable Functions (PUFs). CMA-ES is recognized as the most powerful option for a type of attack called the reliability attack. In this paper, we take a step back and systematically evaluate several met ...
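
The reliability attack mentioned here exploits the fact that, in the additive delay model of an arbiter PUF, the challenges with a small absolute delay difference are exactly the noisy ones. Below is a sketch of the standard parity feature map and the correlation-based fitness a CMA-ES (or any black-box optimizer) would minimize; the challenge array and the measured reliabilities are assumed inputs, and the optimizer call is shown only as commented-out usage of the third-party `cma` package.

```python
import numpy as np

def parity_features(challenges):
    """Arbiter-PUF parity features: Phi_k = prod_{j >= k} (1 - 2 c_j), plus a
    constant 1 for the final stage. `challenges` is an (N, n) 0/1 array."""
    signed = 1 - 2 * challenges
    phi = np.cumprod(signed[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

def reliability_fitness(w, phi, measured_reliability):
    """Challenges whose modeled delay difference |w . phi| is small are the ones
    expected to flip under noise, so a good delay model w makes |w . phi|
    correlate with how reliably each response was observed. Return the negative
    correlation so a minimizer can be used."""
    hypothetical = np.abs(phi @ w)
    return -np.corrcoef(hypothetical, measured_reliability)[0, 1]

# With the `cma` package (an assumption; any CMA-ES implementation works):
#   import cma
#   es = cma.CMAEvolutionStrategy(np.zeros(phi.shape[1]), 0.5)
#   while not es.stop():
#       xs = es.ask()
#       es.tell(xs, [reliability_fitness(np.asarray(x), phi, rel) for x in xs])
```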