Manipulating the model; targeted and untargeted attacks. Byzantine-robust DFL aggregation rules: UBMR, LEARN, trimmed mean, SCCLIP. Drawbacks of the above methods and advantages of BALANCE. Background: threat model; the attacker's knowledge; the defender's knowledge and goals. The BALANCE algorithm (THE BALANCE ALGORITHM). Theoretical analysis: Assumption 1, Assumption 2, ...
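Of the aggregation rules listed above, the coordinate-wise trimmed mean is the simplest to state: for each model coordinate, discard the largest and smallest k client values and average the rest. A minimal sketch (the variable names and toy numbers are illustrative, not from any of the papers):

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: per coordinate, drop the trim_k
    largest and trim_k smallest client values, average the rest.
    Tolerates up to trim_k Byzantine clients."""
    stacked = np.sort(np.stack(updates), axis=0)   # sort each coordinate across clients
    kept = stacked[trim_k:len(updates) - trim_k]   # discard the extremes
    return kept.mean(axis=0)

# three honest updates near (1, 1) and one Byzantine outlier
updates = [np.array([1.0, 1.1]), np.array([0.9, 1.0]),
           np.array([1.1, 0.9]), np.array([100.0, -100.0])]
agg = trimmed_mean(updates, trim_k=1)
print(agg)  # the outlier is trimmed away in every coordinate
```

With trim_k=1 the single malicious update never survives the per-coordinate trim, which is why the rule is robust against one Byzantine client here.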
That is, a traitor appears among the worker nodes: its data and labels may have been tampered with, and its anomalous updates can corrupt the whole neural network. Attack 1: data poisoning attack — tampering with the training data. Attack 2: model poisoning attack — aimed at distributed learning; a direct form is mispairing samples and labels. These attacks can slow model convergence, reduce accuracy, and even...
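The sample-label mispairing mentioned above is usually realized as label flipping: before local training, a malicious worker reassigns a fraction of its labels to wrong classes. A minimal sketch (function and parameter names are illustrative):

```python
import numpy as np

def flip_labels(labels, num_classes, flip_fraction, rng):
    """Data-poisoning sketch: flip a fraction of labels to a
    different (wrong) class before local training."""
    labels = labels.copy()
    n_flip = int(len(labels) * flip_fraction)
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    # add a random nonzero offset mod num_classes, guaranteeing a wrong class
    offsets = rng.integers(1, num_classes, size=n_flip)
    labels[idx] = (labels[idx] + offsets) % num_classes
    return labels

rng = np.random.default_rng(0)
clean = np.array([0, 1, 2, 3, 4] * 20)
poisoned = flip_labels(clean, num_classes=5, flip_fraction=0.3, rng=rng)
print((poisoned != clean).mean())  # exactly 0.3 of the labels now mismatch
```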
1. This paper proposes a new poisoning attack: a model-poisoning attack (Adversarial Model Replacement). 2. Through comparative experiments on semantic backdoors, pixel-pattern backdoors, and ordinary training-data poisoning, it shows that the model-poisoning attack is more effective than training-data poisoning. 3. To evade anomaly-detection defenses, the paper uses Constrain-and-scale and tra...
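The core of adversarial model replacement is that the attacker boosts its update so that, after FedAvg with the other clients' near-unchanged models, the aggregated global model lands on the attacker's backdoored model. A sketch under the paper's setting, with illustrative variable names and toy dimensions:

```python
import numpy as np

def model_replacement_update(global_model, backdoored_model, n_clients, lr=1.0):
    """Scale the malicious update by roughly n_clients / lr so that
    averaging with (n_clients - 1) honest updates that stay close to
    the global model replaces the aggregate with the backdoored model."""
    gamma = n_clients / lr                       # boosting factor
    return global_model + gamma * (backdoored_model - global_model)

G = np.zeros(3)                    # current global model (toy)
X = np.array([0.5, -0.2, 1.0])     # attacker's backdoored model
n = 10
L_mal = model_replacement_update(G, X, n)
# server aggregation, honest deltas assumed ~0 near convergence:
G_next = G + (1.0 / n) * (L_mal - G)
print(G_next)  # equals X: the backdoored model replaces the global one
```

The assumption that honest deltas are near zero holds late in training, which is exactly when the paper launches the attack.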
In this paper, we propose a federated learning-based intrusion detection scheme (IDS) against poisoning attacks. Specifically, we first design an anti-poisoning attacks algorithm based on the encryption model. Then we define the anti-attack strategy and objective function. To achieve high detection ...
Model poisoning attack in federated learning
Federated learning poisoning attacks occur when malicious clients manipulate their local data or model updates to degrade federated learning performance or access unauthorized data. These attackers might add malicious data samples or tamper with model updates to ...
Keywords: Data poisoning; Deep learning. Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server. However, the distributed nature of FL gives ...
Poisoning attacks against FL-based NIDS
In this section, we provide a detailed description of the system architecture for the federated learning-based network intrusion detection system (FL-based NIDS) and introduce the attack model against FL-based NIDS. ...
However, this setting is vulnerable to model poisoning attacks, since the participants have permission to modify the model parameters. In this paper, we perform a systematic investigation of such threats in federated learning and propose a novel optimization-based model poisoning attack. Different f...
Paper reading: Understanding Distributed Poisoning Attack in Federated Learning (2019 ICPADS). Abstract: The paper first examines how the number of poisoned samples and the number of attackers affect the success rate of distributed poisoning attacks carried out via label flipping, and then proposes a defense, "Sniper," which identifies honest local models by solving a maximum-clique problem; experimental results show that this scheme...
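The maximum-clique idea behind Sniper can be sketched as follows: connect two local models when their parameter distance is small, then take the largest clique in that graph as the honest set. This is only an illustration of the clique formulation (brute-force search, hypothetical threshold); the paper's exact graph construction may differ:

```python
import itertools
import numpy as np

def sniper_honest_set(models, threshold):
    """Largest subset of local models that are pairwise closer than
    `threshold` in Euclidean distance, i.e. a maximum clique in the
    closeness graph. Brute-force for clarity; fine for small n."""
    n = len(models)
    def close(i, j):
        return np.linalg.norm(models[i] - models[j]) < threshold
    for size in range(n, 0, -1):                 # try largest cliques first
        for combo in itertools.combinations(range(n), size):
            if all(close(i, j) for i, j in itertools.combinations(combo, 2)):
                return list(combo)
    return []

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = [np.array([5.0, -5.0])]
honest_ids = sniper_honest_set(honest + poisoned, threshold=0.5)
print(honest_ids)  # → [0, 1, 2]: the poisoned model is excluded
```

Because honest models trained on similar data end up close to one another while poisoned models drift away, the largest mutually-close clique recovers the honest set.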
Privacy-Enhanced Federated Learning against Poisoning Adversaries: ieeexplore.ieee.org/abstract/document/9524709 Today's share is a paper published at TIFS 2021, which focuses on privacy-preserving federated learning (PPFL). Pure PPFL schemes are devoted to ...