Keywords: Differential privacy · Privacy-preserving deep learning · Federated learning · Fast Fourier transform · Privacy loss distribution

Spurred by the simultaneous need for data privacy protection and data sharing, federated lear...
In this experiment, we use Differentially Private Federated Learning (DP-FL) to ensure data privacy. Differential privacy (DP) was not considered in experiment series 1, since the objective there was to study the effects of data size, distribution, and the number of clients on the performance of distrib...
Federated Learning with Differential Privacy: Algorithms and Performance Analysis (reading notes, 2024/2/11). I first read this paper while working on my undergraduate thesis; at the time I only read the first part and skipped the derivation and proof of the convergence bound. I am now re-reading the paper in full. Contributions: the paper proposes a new framework based on the concept of differential privacy (DP), in which artificial noise is added to the clients'...
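The noise-before-aggregation idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the clipping norm `C`, noise multiplier `sigma`, and flat averaging are illustrative choices.

```python
import numpy as np

def clip(update, C):
    """Scale a client update so its L2 norm is at most C (bounds sensitivity)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, C / max(norm, 1e-12))

def privatize(update, C, sigma, rng):
    """Clip, then add Gaussian noise on the client side, before uploading."""
    return clip(update, C) + rng.normal(0.0, sigma * C, size=update.shape)

def server_aggregate(noisy_updates):
    """The server only ever sees noisy, clipped updates, never raw ones."""
    return np.mean(noisy_updates, axis=0)
```

Because each client's noise is independent, averaging over K clients shrinks the noise standard deviation in the aggregate by a factor of sqrt(K), which is why per-client noise can be tolerable at the server.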
Paper title: Dynamic Personalized Federated Learning with Adaptive Differential Privacy. Authors: Xiyuan Yang, Wenke Huang, Mang Ye. Institution: Wuhan University. Venue: NeurIPS 2023. Abstract summary: In the personalized federated learning setting, differential privacy effectively addresses non-IID data and privacy leakage. However, existing differential privacy for personalized federated learning faces two major challen...
To provide intelligent and personalized services on smart devices, machine learning techniques have been widely used to learn from data, identify patterns, and make automated decisions. Machine learning processes typically require a large amount of repre...
... Poor. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, vol. 15, pp. 3454–3469, 2020. DOI: 10.1109/TIFS.2020.2988575. C. L. Xie, K. L. Huang, P. Y. Chen, B...
One way to achieve a strict privacy guarantee is to apply local differential privacy (LDP) to federated learning. However, previous works do not give a practical solution, due to three issues. First, the noisy data remain close to their original values with high probability, increasing the risk of ...
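For concreteness, a local-DP report of a single bounded value via the Laplace mechanism might look like the sketch below. The bounds `lo`/`hi` and the per-report budget `epsilon` are illustrative assumptions; the sketch also makes the first issue above visible, since a loose budget leaves the released value near the original.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def ldp_release(value, lo, hi, epsilon, rng):
    # Clamp to [lo, hi] so a single report has sensitivity hi - lo,
    # then add Laplace((hi - lo) / epsilon) noise on the client side.
    v = min(max(value, lo), hi)
    return v + laplace_noise((hi - lo) / epsilon, rng)
```

With a large epsilon the released values concentrate tightly around the true value, which is exactly the re-identification risk the passage points out.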
Differential privacy was proposed in 2006 and has been studied for over a decade; there is not much left to do in pure DP research, and the academic community's view of DP...
"Federated Learning" is in essence an encrypted distributed machine learning technique: the participants jointly build a model without disclosing their underlying data or even its encrypted (obfuscated) form. It lets each enterprise's own data stay local while parameters are exchanged under an encryption mechanism, so that institutions can collaborate, and improve machine learning performance, without ever exchanging data. Background: 2016 was the year artificial intelligence...
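The parameter-exchange loop described above can be sketched as a minimal FedAvg-style round; linear regression stands in for the model here, and only weights, never the raw `(X, y)` data, reach the server (the learning rate and step counts are illustrative):

```python
import numpy as np

def local_train(w, X, y, lr=0.1, steps=20):
    # Local gradient descent on least squares; (X, y) never leave the client.
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg(weights, sizes):
    # The server averages parameters, weighted by each client's data size.
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

def run_round(global_w, clients):
    # One communication round: broadcast, train locally, aggregate.
    local_ws = [local_train(global_w, X, y) for X, y in clients]
    return fed_avg(local_ws, [len(y) for _, y in clients])
```

Note that this sketch omits the encryption layer the passage mentions; in practice the uploaded parameters would additionally be protected, e.g. by secure aggregation or homomorphic encryption.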
Differential privacy is the de facto technique for protecting the individuals in the training dataset and the learned models in deep learning. However, the technique presents two limitations when applied to vertical federated learning, where several organizations collaborate to train a common global mode...