We also study attack longevity in early and late training rounds, the impact of malicious participant availability, and the relationship between the two. Finally, we propose a defense strategy that can help identify malicious participants in FL to circumvent poisoning attacks, and demonstrate its ...
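The snippet above does not spell out the defense, so the following is only a minimal sketch of one common way to identify malicious participants: compare each client's submitted update against the others and flag outliers. The `flag_suspicious_clients` helper, the cosine-similarity threshold, and the toy updates are assumptions for illustration, not the paper's actual mechanism.

```python
# Sketch (not the paper's defense): flag clients whose model updates are
# dissimilar from the majority, using average pairwise cosine similarity.
import numpy as np

def flag_suspicious_clients(updates, threshold=0.2):
    """updates: dict of client_id -> flattened update vector (np.ndarray)."""
    ids = list(updates)
    vecs = np.stack([updates[c] / (np.linalg.norm(updates[c]) + 1e-12) for c in ids])
    sim = vecs @ vecs.T                                   # pairwise cosine similarities
    avg_sim = (sim.sum(axis=1) - 1.0) / (len(ids) - 1)    # mean similarity to the others
    return [c for c, s in zip(ids, avg_sim) if s < threshold]

# Toy usage: honest clients send similar updates, the attacker sends a flipped one.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
updates = {f"client{i}": base + 0.05 * rng.normal(size=100) for i in range(4)}
updates["attacker"] = -base                               # strongly disagrees with the rest
print(flag_suspicious_clients(updates))                   # ['attacker']
```

In practice, similarity-based filters like this are typically combined with robust aggregation rules rather than used on their own.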
To address these concerns, this paper presents a novel technique for detecting data poisoning attacks in intelligent networks, focusing on the privacy and security concerns associated with the use of machine learning (ML) methods. The research combines f...
Our work is an early study of data poisoning attacks on federated learning. To this end, experimental results on real-world datasets show that the federated multi-task learning model is highly sensitive to poisoning attacks when the attackers either directly poison the target ...
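As a concrete illustration of how an attacker can directly poison a client's training data, the sketch below implements a simple label-flipping attack on local labels; the class choices and flip fraction are assumptions for illustration, not the paper's exact setup.

```python
# Label-flipping data poisoning sketch: relabel a fraction of one class as another.
import numpy as np

def flip_labels(y, source_class, target_class, fraction, seed=0):
    """Return a copy of y in which `fraction` of `source_class` labels become `target_class`."""
    y = y.copy()
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(y == source_class)
    chosen = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y[chosen] = target_class
    return y

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(flip_labels(y, source_class=1, target_class=0, fraction=0.5))
# two of the four class-1 labels now read 0
```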
In particular, model inversion attacks involve malicious actors using model parameters to reconstruct training data. Additionally, attacks may disrupt model convergence. Adversarial, Byzantine, data poisoning, and model poisoning attacks can cause models to be incorrectly trained. To address these ...
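To make the convergence-disruption point concrete, here is a hedged, minimal illustration of model poisoning against a plain unweighted average: a single Byzantine client scales a flipped update and drags the aggregate away from the honest consensus. The update values and scaling factor are illustrative only.

```python
# Why naive averaging is fragile to model poisoning: one scaled malicious
# update can dominate the aggregate of several honest updates.
import numpy as np

honest_updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
malicious = -10.0 * np.array([1.0, 1.0])        # attacker boosts a flipped update

benign_avg = np.mean(honest_updates, axis=0)
poisoned_avg = np.mean(honest_updates + [malicious], axis=0)

print(benign_avg)       # ~[1.0, 1.0]
print(poisoned_avg)     # ~[-1.75, -1.75]: a single scaled update flips the sign
```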
- BadVFL: Backdoor Attacks in Vertical Federated Learning (S&P 2024) [paper]
- Backdooring Multimodal Learning (S&P 2024) [paper] [code]
- Distribution Preserving Backdoor Attack in Self-supervised Learning (S&P 2024) [paper] [code]

CVPR
- Data Poisoning based Backdoor Attacks to Contrastive Learning ...
Federated learning (FL) is a distributed machine learning (ML) approach that enables collaboration without exposing sensitive data or ML algorithms.
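A minimal FedAvg-style sketch of this idea follows: clients train on their private data and share only model parameters, which the server averages by sample count. The toy linear model, single local gradient step, and synthetic client data are assumptions for illustration, not any particular system's implementation.

```python
# FedAvg-style round: raw data stays on the clients; only weights are exchanged.
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w_global, clients, lr=0.1):
    """Each client trains locally; the server averages weights by sample count."""
    local_ws, sizes = [], []
    for X, y in clients:                      # (X, y) never leaves the client
        local_ws.append(local_step(w_global.copy(), X, y, lr))
        sizes.append(len(y))
    return np.average(local_ws, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = [(X, X @ w_true) for X in (rng.normal(size=(20, 2)) for _ in range(3))]

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
print(w)    # approaches w_true without sharing any client's data
```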
Contrastive learning pre-trains an image encoder using a large amount of unlabeled data such that the image encoder can be used as a general-purpose feature extractor for various downstream tasks. In this work, we propose PoisonedEncoder, a data poisoning attack to contrastive learning. In particular, ...
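For context on what such pre-training optimizes, the sketch below computes an InfoNCE-style contrastive loss over paired "views" of the same inputs. It is not PoisonedEncoder itself; the embeddings and temperature are illustrative assumptions.

```python
# InfoNCE-style contrastive objective: two views of the same image should embed
# close together, views of different images far apart.
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """z1[i], z2[i] are L2-normalized embeddings of two views of image i."""
    logits = (z1 @ z2.T) / temperature                    # pairwise similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                     # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
view = z + 0.01 * rng.normal(size=z.shape)
view /= np.linalg.norm(view, axis=1, keepdims=True)

print(info_nce(z, view))                    # low loss: matching views are most similar
print(info_nce(z, rng.permutation(view)))   # higher loss: positive pairs are scrambled
```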
Federated Learning (FL) has become a popular paradigm for learning from distributed data. To effectively utilize data at different devices without moving t... (X Zhang, M Hong, S Dhople, et al., arXiv, 2020)
Poisoning Attacks Against Non-IID Federated Learning with Mixed-Data Cal...
Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability remains a concern. This is currently addressed by sharing multi-site data, but such centralization is challenging or infeasible to scale due to various limitations ...
Paper: Federated Learning with Non-IID Data. 1. Introduction: This section first reviews the origin and development of FL and briefly introduces the FedAvg algorithm (readers unfamiliar with it should see Google's 2016 paper; the procedure is fairly simple), then discusses FL's communication problems and related research, and finally leads into FL's Non-IID problem: FedAvg can converge on certain Non-IID datasets, but in other cases...
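A common way to simulate the Non-IID setting described above is a label-sorted shard partition, so that each client only sees a couple of classes. The sketch below is a hedged illustration of that idea; the shard counts and the synthetic label array are assumptions, not the paper's exact protocol.

```python
# Label-sorted "shard" partition: sort examples by label, cut into shards,
# and deal a few shards to each client so local data is highly skewed.
import numpy as np

def noniid_partition(labels, num_clients, shards_per_client=2, seed=0):
    """Return, for each client, the indices of its (Non-IID) local examples."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)                          # group identical labels together
    shards = np.array_split(order, num_clients * shards_per_client)
    shard_ids = rng.permutation(len(shards))
    return [np.concatenate([shards[s] for s in
                            shard_ids[i * shards_per_client:(i + 1) * shards_per_client]])
            for i in range(num_clients)]

labels = np.repeat(np.arange(10), 100)                  # 10 classes, 100 examples each
parts = noniid_partition(labels, num_clients=10)
print([sorted(set(labels[p])) for p in parts])          # roughly 2 classes per client
```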