Jérémie Decouchant

36 records found

Federated learning (FL) allows the collaborative training of a model while keeping data decentralized. However, FL has been shown to be vulnerable to poisoning attacks. Model poisoning, in particular, enables adversaries to manipulate their local updates, leading to a significant ...
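
As an illustration of why the aggregation rule matters under model poisoning (the code is illustrative only; the coordinate-wise median and all names are assumptions, not the defense studied in this work), the sketch below contrasts plain federated averaging with a simple robust aggregator:

    # Illustrative only: contrasts plain FedAvg with a coordinate-wise median,
    # a simple Byzantine-robust aggregator. Not the defense from the cited thesis.
    import numpy as np

    def fed_avg(updates):
        # Plain mean: a single poisoned update can shift the result arbitrarily.
        return np.mean(updates, axis=0)

    def coordinate_median(updates):
        # Coordinate-wise median: bounds the influence of a minority of poisoned updates.
        return np.median(updates, axis=0)

    honest = [np.array([0.1, -0.2, 0.05]) for _ in range(9)]
    poisoned = [np.array([100.0, 100.0, 100.0])]          # adversarial local update
    updates = honest + poisoned

    print("mean  :", fed_avg(updates))            # dragged toward the attacker
    print("median:", coordinate_median(updates))  # stays close to the honest updates
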
Federated Learning (FL) is a distributed machine learning approach that enhances data privacy by training models across multiple devices or servers without centralizing raw data. Traditional FL frameworks, which rely on synchronous updates and homogeneous resources, face signific ...
This thesis explores the application of a modular execution environment, specifically utilizing the Move Virtual Machine (MoveVM), within a blockchain-agnostic framework. The study aims to demonstrate how this modular approach can enhance the execution capability of existing bloc ...
Distributed consensus algorithms are essential for maintaining data reliability and consistency across computer networks, ensuring that all nodes agree on a single state despite failures or malicious disruptions. Despite the critical role of Byzantine Fault Tolerant State Machine ...

Fast Simulation of Federated and Decentralized Learning Algorithms

Scheduling Algorithms for Minimisation of Variability in Federated Learning Simulations

Federated Learning (FL) systems often suffer from high variability in the final model due to inconsistent training across distributed clients. This paper identifies the problem of high variance in models trained through FL and proposes a novel approach to mitigate this issue thro ...

Rollback protection in Damysus

Apollo & Artemis: providing rollback resistance in hybrid consensus protocols

Streamlined Byzantine Fault Tolerance (BFT) protocols have been developed to create efficient view-changes. To improve upon their performance, trusted components have been introduced to prevent equivocation within a protocol. However, these trusted components suffer from rollback ...
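
To make the role of the trusted component concrete, here is a minimal sketch (interface and names are assumptions, not Damysus's actual component) of a trusted monotonic counter that binds each attested message to a fresh counter value, so the component can never vouch for two conflicting messages at the same counter; rolling its state back is precisely what breaks this guarantee:

    # Illustrative sketch of a trusted monotonic counter used to prevent equivocation.
    # Interface and names are assumptions, not the actual Damysus component.
    import hmac, hashlib

    class TrustedCounter:
        def __init__(self, secret: bytes):
            self._secret = secret
            self._counter = 0      # persisting this value is what rollback attacks target

        def attest(self, message: bytes) -> tuple[int, bytes]:
            # Each attestation consumes a fresh counter value, so the component
            # can never vouch for two different messages at the same counter.
            self._counter += 1
            tag = hmac.new(self._secret,
                           message + self._counter.to_bytes(8, "big"),
                           hashlib.sha256).digest()
            return self._counter, tag

    tc = TrustedCounter(b"device-sealed-key")
    print(tc.attest(b"vote for block A"))   # counter 1
    print(tc.attest(b"vote for block B"))   # counter 2; counter 1 cannot be reused
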
Federated Learning is a machine learning paradigm where the computational load for training the model on the server is distributed amongst a pool of clients who only exchange model parameters with the server. Simulation environments try to accurately model all the intricacies of ...
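
One intricacy such simulators must capture is how heterogeneous client compute and network times determine round duration; the toy model below (parameters and distributions are hypothetical, not a specific simulator's API) estimates the length of a synchronous round as the time of the slowest sampled client:

    # Toy model of one synchronous FL round: round time is bounded by the slowest
    # sampled client. Parameters are illustrative, not from a specific simulator.
    import random

    def simulate_round(num_clients=100, sample_size=10, seed=0):
        rng = random.Random(seed)
        # Per-client compute and upload times (seconds), drawn from skewed distributions.
        compute = [rng.lognormvariate(1.0, 0.5) for _ in range(num_clients)]
        upload = [rng.lognormvariate(0.5, 0.8) for _ in range(num_clients)]
        sampled = rng.sample(range(num_clients), sample_size)
        # A synchronous server waits for every sampled client (the straggler effect).
        return max(compute[c] + upload[c] for c in sampled)

    print(f"simulated round time: {simulate_round():.2f} s")
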
Federated Learning has gained prominence in recent years, in no small part due to its ability to train Machine Learning models with data from users' devices whilst keeping this data private. Decentralized Federated Learning (DFL) is a branch of Federated Learning (FL) that deals ...

Improving the Accuracy of Federated Learning Simulations

Using Traces from Real-world Deployments to Enhance the Realism of Simulation Environments

Federated learning (FL) is a machine learning paradigm where private datasets are distributed among decentralized client devices and model updates are communicated and aggregated to train a shared global model. While providing privacy and scalability benefits, FL systems also fac ...
Blockchain-based payment systems typically assume a synchronous communication network and a limited workload to confirm transactions within a bounded timeframe. These assumptions make such systems less effective in scenarios where reliable network access is not guaranteed. Of ...
Streamlined Byzantine Fault Tolerant (BFT) protocols, such as HotStuff [PODC'19], and weighted voting represent two possible strategies to improve consensus in the distributed systems world. Several studies have been conducted on both techniques, but the research on ...
This paper explores the integration of weighted voting mechanisms into DAG-based consensus protocols, such as Tusk [EuroSys’22], which promise high throughput and low latency. Weighted voting, pioneered by protocols like WHEAT [SRDS’15] and AWARE [TDSC’20], aims to optimize ...
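
As a rough illustration of the idea (the weights and threshold below are simplified placeholders, not the exact quorum rules of WHEAT, AWARE, or Tusk), weighted voting lets a proposal be considered accepted as soon as the accumulated weight of responding replicas crosses a threshold, so a few fast, well-connected replicas can form a quorum sooner:

    # Simplified weighted-quorum check. Weights and threshold are placeholders;
    # WHEAT/AWARE derive them from f and the number of extra replicas, omitted here.
    def weighted_quorum_reached(acks, weights, threshold):
        """acks: replica ids that responded; weights: id -> voting weight."""
        return sum(weights[r] for r in acks) >= threshold

    weights = {0: 2.0, 1: 2.0, 2: 1.0, 3: 1.0, 4: 1.0}   # well-connected replicas get Vmax
    threshold = 4.0                                       # placeholder quorum weight

    print(weighted_quorum_reached({0, 1}, weights, threshold))     # True: two heavy replicas suffice
    print(weighted_quorum_reached({2, 3, 4}, weights, threshold))  # False: three light replicas do not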

Using Weighted Voting to Accelerate Blockchain Consensus

Intrusion tolerance during performance degradation attacks

This paper examines the impact of faulty nodes on Practical Byzantine Fault Tolerance (PBFT) algorithms, focusing on the AWARE optimization. While AWARE improves average-case latency by assigning larger voting weights to well-connected nodes, it is vulnerable to exploitation by ...
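
To make the attack surface concrete, the sketch below (an illustrative simplification, not AWARE's actual optimization procedure) assigns the higher voting weight to the replicas with the lowest reported latencies; a faulty replica that under-reports its latency is handed a heavy weight it can then misuse:

    # Illustrative simplification of latency-driven weight assignment.
    # AWARE solves an optimization over measured latencies; here we just rank them.
    V_MAX, V_MIN = 2.0, 1.0

    def assign_weights(reported_latency_ms, num_heavy):
        """Give V_MAX to the num_heavy replicas reporting the lowest latency."""
        ranked = sorted(reported_latency_ms, key=reported_latency_ms.get)
        return {r: (V_MAX if r in ranked[:num_heavy] else V_MIN)
                for r in reported_latency_ms}

    honest_reports = {"r0": 12.0, "r1": 15.0, "r2": 40.0, "r3": 95.0, "r4": 110.0}
    print(assign_weights(honest_reports, num_heavy=2))   # r0 and r1 get V_MAX

    # A faulty replica lying about its latency captures a heavy weight.
    lying_reports = dict(honest_reports, r4=1.0)
    print(assign_weights(lying_reports, num_heavy=2))    # r4 now gets V_MAX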

Using Weighted Voting to Accelerate Blockchain Consensus

How to make sure that the latency that the nodes report prior to AWARE’s algorithm is realistic?

This research addresses the challenge of managing latency in distributed computer systems. Maintaining correct delays in data transmission across various network conditions is crucial for system efficiency and security. We focus on improving the Adaptive Wide-Area Replication (AW ...
Since its emergence in 2008, blockchain technology has significantly expanded its scope, impacting various industries beyond its initial cryptocurrency applications. Its potential to enhance established practices is increasingly recognized, yet its application in the Architecture ...
Searchable symmetric encryption (SSE) is an encryption scheme that allows a single user to perform searches over an encrypted dataset. The advent of dynamic SSE has further enhanced this scheme by enabling updates to the encrypted dataset, such as insertions and deletions. In dyn ...
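
For intuition, the sketch below (a deliberately simplified construction, not a scheme from the literature or from this work, and leakier than real schemes) shows the core dynamic-SSE pattern: the client derives a deterministic token per keyword, the server stores encrypted document identifiers under those tokens, and updates simply add or remove entries:

    # Minimal illustrative dynamic-SSE pattern: keyword tokens via HMAC, document ids
    # kept opaque to the server. Simplified for intuition only.
    import hmac, hashlib

    class SSEClient:
        def __init__(self, key: bytes):
            self._key = key

        def token(self, keyword: str) -> bytes:
            # Deterministic search token: same keyword -> same token, but the
            # server never sees the keyword itself.
            return hmac.new(self._key, keyword.encode(), hashlib.sha256).digest()

    class SSEServer:
        def __init__(self):
            self._index = {}                              # token -> set of encrypted doc ids

        def add(self, token: bytes, enc_doc_id: bytes):      # dynamic insertion
            self._index.setdefault(token, set()).add(enc_doc_id)

        def delete(self, token: bytes, enc_doc_id: bytes):   # dynamic deletion
            self._index.get(token, set()).discard(enc_doc_id)

        def search(self, token: bytes):
            return self._index.get(token, set())

    client, server = SSEClient(b"user-secret-key"), SSEServer()
    server.add(client.token("consensus"), b"enc(doc42)")
    print(server.search(client.token("consensus")))   # {b'enc(doc42)'}
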
Effective large-scale process optimization in manufacturing industries requires close cooperation between different parties of human experts who encode their knowledge of related domains as Bayesian network models. For example, parties in the steel industry must collaboratively u ...
Byzantine consensus protocols are designed to build resilient systems to achieve consensus under Byzantine settings, maintaining safety guarantees under any network synchrony model and providing liveness in partially or fully synchronous networks. However, several Byzantine c ...
Vertical federated learning’s (VFL) immense potential for time series forecasting in industrial applications such as predictive maintenance and machine control remains untapped. Critical challenges to be addressed in the manufacturing industry include small and noisy datasets, mo ...

Training diffusion models with federated learning

A communication-efficient model for cross-silo federated image generation

The training of diffusion-based models for image generation is predominantly controlled by a select few Big Tech companies, raising concerns about privacy, copyright, and data authority due to the lack of transparency regarding training data. Hence, we propose a federated diffusi ...
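
As a sketch of the cross-silo setup (parameter names and the plain delta-averaging rule are illustrative assumptions, not the proposed method), each silo trains the shared denoiser locally and the server averages the reported parameter deltas, so that only changes, rather than full models, need to travel over the network:

    # Illustrative cross-silo round for a federated diffusion model: silos send
    # parameter deltas and the server averages them. Not the proposed scheme.
    import numpy as np

    def server_round(global_params, silo_deltas):
        """global_params: dict name -> np.ndarray; silo_deltas: list of such dicts."""
        new_params = {}
        for name, value in global_params.items():
            # Average the deltas reported by the silos; exchanging deltas (or a
            # compressed version of them) instead of full weights saves bandwidth.
            mean_delta = np.mean([d[name] for d in silo_deltas], axis=0)
            new_params[name] = value + mean_delta
        return new_params

    global_params = {"unet.block1.weight": np.zeros((4, 4))}
    silo_deltas = [{"unet.block1.weight": np.full((4, 4), 0.1)},
                   {"unet.block1.weight": np.full((4, 4), -0.05)}]
    print(server_round(global_params, silo_deltas)["unet.block1.weight"][0, 0])  # ~0.025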