We argue that federated learning is generically vulnerable to backdoors and other model-poisoning attacks. First, when training with millions of participants, it is impossible to ensure that none of them are malicious. The possibility of training with multiple malicious participants is explicitly acknowl...
1. What does the paper say as a whole? It argues that federated learning is easy to attack through backdoors, and it shows how to carry out such a backdoor. In principle, federated learning is vulnerable to backdooring for the following reasons: by definition there are thousands of devices whose training processes are not controlled by the central node, so malicious devices are bound to exist; one of federated learning's assumptions is that the data distribution across devices is non-IID, and in order to...
Federated Learning study notes (1): the value of federated learning and its relationship to existing research. Differential Privacy (in federated learning the raw data itself is never transmitted, so there is no possibility of leakage at the data level); Distributed Machine Learning (the way multiple parties jointly train in horizontal federated learning is partly similar to distributed machine learning, but federated learning faces a more complex learning environment, ...
This is a paper just published at WSDM this year that considers how to implement real-time (online) ranking algorithms under a federated learning framework; the authors call this framework Federated Online Learning to Rank (FOLtR). Open-source code link. Brief summary: Problem: given a ranking model θ and a user's interaction data a, how can we improve the model's quality f(a; θ) while preserving privacy...
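As a rough sketch of how a client could help improve f(a; θ) without revealing its interactions a, the code below shows an evolution-strategies style round with a 1-bit randomized-response step on the client's feedback. The function names, the binary quality signal, and the simple pairwise metric are assumptions made for illustration; this does not reproduce the exact FOLtR protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_ranker(theta, interactions):
    """Toy on-device metric: fraction of pairs where the ranker scores the
    clicked item above the skipped item it was shown with."""
    return float(np.mean([(x_pos @ theta) > (x_neg @ theta)
                          for x_pos, x_neg in interactions]))

def client_feedback(theta, seed, interactions, p_keep=0.75):
    """Client: perturb the ranker with noise derived from a shared seed,
    evaluate it on private interactions, and privatize the 1-bit outcome
    with randomized response before reporting (seed, bit) to the server."""
    noise = np.random.default_rng(seed).normal(size=theta.shape)
    good = evaluate_ranker(theta + noise, interactions) > 0.5  # stays on-device
    if rng.random() > p_keep:                                  # randomized response
        good = not good
    return seed, float(good)

def server_step(theta, reports, lr=0.05):
    """Server: rebuild each perturbation from its seed and move theta toward
    perturbations that received positive (privatized) feedback."""
    grad_est = np.zeros_like(theta)
    for seed, bit in reports:
        noise = np.random.default_rng(seed).normal(size=theta.shape)
        grad_est += (bit - 0.5) * noise
    return theta + lr * grad_est / max(len(reports), 1)

# Toy usage: each client holds a few private (clicked, skipped) feature pairs.
dim, n_clients = 4, 20
data = [[(rng.normal(size=dim) + 0.5, rng.normal(size=dim)) for _ in range(5)]
        for _ in range(n_clients)]
theta = np.zeros(dim)
for r in range(30):
    reports = [client_feedback(theta, seed=r * n_clients + i, interactions=d)
               for i, d in enumerate(data)]
    theta = server_step(theta, reports)
```

Only a shared seed and one privatized bit leave each device, which is the general flavor of such privacy-preserving ranking schemes; the exact metric and privatization used by FOLtR differ in detail.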
Federated learning enables thousands of participants to construct a deep learning model without sharing their private training data with each other. For example, multiple smartphones can jointly train a next-word predictor for keyboards without revealing what individual users type. We demonstrate that an...
Federated learning’s popularity is rapidly increasing because it addresses common development-related security concerns. It is also highly sought after for its performance advantages. Research shows this technique can improve an image classification model’s accuracy by up to 20%, a substantial increas...
How Does Federated Learning Work? In its first research paper on the topic, Google explained that with federated learning, “each client has a local training dataset which is never uploaded to the server. Instead, each client computes an update to the current global model maintained by the serv...
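To make that description concrete, here is a minimal sketch of one federated-averaging round in Python/NumPy, assuming a toy least-squares model; the helper names `local_update` and `federated_round` and the weighting by local dataset size are illustrative choices, not Google's production implementation.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1, epochs=1):
    """Client side: start from the global model and take a few gradient
    steps on the private local dataset (toy least-squares loss)."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server side: collect each client's updated weights and average them,
    weighted by local dataset size (federated averaging)."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_update(global_weights, c) for c in clients])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Toy usage: 3 clients, each with private (X, y); only weights leave the device.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```

The key point the quote makes is visible here: the raw `(X, y)` pairs never reach the server, only the locally updated weights do.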
Paper title: How to Backdoor Federated Learning. Authors: Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, Vitaly Shmatikov. Abstract: This paper shows that federated learning is vulnerable to a kind of model-poisoning attack that is far more powerful than poisoning attacks on the training data alone. A single malicious participant, or several, can use the model replacement technique proposed in the paper to inject a backdoor into the joint model...
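A rough sketch of the model-replacement idea summarized above: assuming the server aggregates as G_{t+1} = G_t + (η/n) Σ_i (L_i − G_t), a malicious participant who has trained a backdoored model X can submit γ(X − G_t) + G_t with scale factor γ = n/η, so that after averaging the global model is approximately replaced by X once the benign participants' updates have largely converged. The variable names and toy weight vectors below are illustrative; the attack in the paper is applied to full neural networks.

```python
import numpy as np

def server_aggregate(G, client_models, eta, n):
    """Server rule assumed here: G_{t+1} = G_t + (eta/n) * sum_i (L_i - G_t)."""
    return G + (eta / n) * sum(L - G for L in client_models)

def model_replacement(G, X, eta, n):
    """Attacker's submission: scale the backdoored model X by gamma = n/eta so
    that averaging (approximately) installs X as the new global model when the
    other clients' updates roughly cancel out."""
    gamma = n / eta
    return gamma * (X - G) + G

# Toy demonstration with weight vectors instead of real networks.
rng = np.random.default_rng(1)
n, eta = 10.0, 1.0            # participants per round, server learning rate
G = rng.normal(size=5)        # current global model
X = rng.normal(size=5)        # attacker's backdoored model
benign = [G + 0.01 * rng.normal(size=5) for _ in range(9)]  # near-converged clients
malicious = model_replacement(G, X, eta, n)

G_next = server_aggregate(G, benign + [malicious], eta, n)
print(np.allclose(G_next, X, atol=0.05))   # True: the scaled update survives averaging
```

This is why the attack is stronger than data poisoning: instead of nudging the average, the attacker's single scaled contribution cancels the other updates and substitutes its own model.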
E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, “How to backdoor federated learning,” in International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Although federated learning helps protect information, an additional layer of security can be added to prevent potential attacks through OpenFL + SGX (Gramine). Federated learning provides a win-win for data owners and AI developers by enabling collaborative model training while maintai...