Therefore, even if the target client contributes little to the original global model, multiplying the global update $\Delta M_t$ by $\frac{N}{N-1}$ during unlearning can still shift the new model substantially. To mitigate this problem, we use a lazy learning strategy: we assume that client $N$ still participates in the training process but set its update to zero, i.e., $\Delta M_t^N = 0$. We can then obtain $\Delta M_t' = \fr...$
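A minimal sketch of this lazy-learning correction is given below, assuming each client's round update is stored as a dict of NumPy arrays; the helper name and the plain averaging rule are illustrative assumptions, not the exact update rule of the original paper.

```python
# A minimal sketch of the lazy-learning correction described above, assuming
# each client's round update is a dict of NumPy arrays keyed by parameter name.
# The helper name and the plain averaging rule are illustrative assumptions.
import numpy as np

def lazy_unlearning_update(client_updates, target_client):
    """Aggregate a round's updates while 'forgetting' one client.

    The target client is treated as if it still participates, but its update
    is replaced by zeros, so the aggregation over N clients is unchanged and
    no N/(N-1) rescaling of the global update is needed.
    """
    n_clients = len(client_updates)
    zeroed = {name: np.zeros_like(param)
              for name, param in client_updates[target_client].items()}
    updates = dict(client_updates)
    updates[target_client] = zeroed

    # Plain average over all N clients, including the zeroed target client.
    aggregated = {}
    for name in zeroed:
        aggregated[name] = sum(u[name] for u in updates.values()) / n_clients
    return aggregated
```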
Keywords: Knowledge distillation, Heterogeneous data, Data-free algorithm

The heterogeneity of the data distribution generally influences federated learning performance in neural networks. For a well-performing global model, taking a weighted average of the local models, as in most existing federated learning algorithms, ...
In this work, we present a communication-efficient federated learning method based on knowledge distillation, named FedKD. Our method is mainly focused on cross-silo federated learning where the clients have relatively richer computing resources and larger local data volume than personal devices. Instea...
The huge communication cost in federated learning leads to heavy overheads on clients and high environmental burdens. Here, we present a federated learning method named FedKD that is both communication-efficient and effective, based on adaptive mutual knowledge distillation and dynamic gradient compression techniques. FedKD is validated on three different scenarios that need privacy protection, showing that it maximally can ...
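To make the mutual-distillation idea concrete, here is a minimal sketch of one local step in which a large local "mentor" model and a small shared "mentee" model distill from each other; the temperature, loss weighting, and function names are illustrative assumptions, not FedKD's exact adaptive formulation.

```python
# A minimal sketch of mutual knowledge distillation between a large local
# "mentor" model and a small shared "mentee" model, in the spirit of FedKD.
# Only the small mentee would be exchanged with the server. Temperature,
# loss weighting, and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def mutual_distillation_step(mentor, mentee, x, y,
                             opt_mentor, opt_mentee, temperature=2.0):
    """One local step: both models learn from the labels and from each
    other's softened predictions."""
    logits_mentor = mentor(x)
    logits_mentee = mentee(x)
    T = temperature

    # Supervised task losses against the ground-truth labels.
    ce_mentor = F.cross_entropy(logits_mentor, y)
    ce_mentee = F.cross_entropy(logits_mentee, y)

    # Bidirectional distillation: each model matches the other's soft targets.
    kd_mentor = F.kl_div(F.log_softmax(logits_mentor / T, dim=-1),
                         F.softmax(logits_mentee.detach() / T, dim=-1),
                         reduction="batchmean") * T * T
    kd_mentee = F.kl_div(F.log_softmax(logits_mentee / T, dim=-1),
                         F.softmax(logits_mentor.detach() / T, dim=-1),
                         reduction="batchmean") * T * T

    opt_mentor.zero_grad()
    (ce_mentor + kd_mentor).backward()
    opt_mentor.step()

    opt_mentee.zero_grad()
    (ce_mentee + kd_mentee).backward()
    opt_mentee.step()
```

In this setup only the mentee's parameters or gradients leave the client, which is where the dynamic gradient compression mentioned above would be applied before upload.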
indicating that the local model can distill the refined knowledge of the global model. FedX-enhanced models also have larger inter-class angles, demonstrating better class discrimination (see Figure 3-b). The paper “FedX: Unsupervised Federated Learning with Cross Kno...
A Federated Domain Adaptation Algorithm Based on Knowledge Distillation and Contrastive Learning

To address these problems, we propose a federated domain adaptation algorithm based on knowledge distillation and contrastive learning. Knowledge distillation is used to extract transferable integration knowledge from...
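For the contrastive-learning component, a generic NT-Xent (InfoNCE-style) loss over paired embeddings is sketched below as a point of reference; this is a standard formulation and an assumption here, not the specific loss used in the cited algorithm.

```python
# A generic NT-Xent / InfoNCE-style contrastive loss over two batches of
# embeddings that are positives of each other (e.g., two augmented views).
# Standard formulation, not the cited paper's exact loss.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings; row i of z1 and row i of z2 are positives."""
    batch = z1.shape[0]
    z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)], dim=0)  # (2B, dim)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, -1e9)                    # exclude self-similarity
    # The positive for row i in the first half is row i + batch, and vice versa.
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)]).to(z.device)
    return F.cross_entropy(sim, targets)
```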
Communication-efficient federated learning via knowledge distillation (open-access article, 19 April 2022)

Introduction: Data collected from healthcare, finance, etc., may reveal private data, such as basic personal information, patient medical history information, economic information, and so on. Once these dat...
Li, D. & Wang, J. Fedmd: heterogenous federated learning via model distillation. Preprint at arXiv:1910.03581 (2019). (Handles heterogeneity poorly when compression is aggressive.) Communication acceleration via knowledge distillation: the local model is larger than the shared model, and the shared model is trained on a shared dataset, but sharing that dataset leaks privacy.
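As a rough illustration of the FedMD-style communication pattern noted above (clients exchange class scores computed on a public dataset instead of model weights), the sketch below shows the server-side aggregation and the soft targets each client would distill; the function names and plain averaging are assumptions for illustration.

```python
# A rough sketch of the FedMD-style protocol noted above: heterogeneous local
# models communicate class scores ("logits") computed on a shared public
# dataset, and each client then distills the averaged consensus scores.
# Names and the plain averaging are illustrative assumptions.
import numpy as np

def aggregate_public_scores(client_scores):
    """Server step: average per-client class scores on the public dataset.

    client_scores: list of arrays with shape (num_public_samples, num_classes).
    """
    return np.mean(np.stack(client_scores, axis=0), axis=0)

def distillation_targets(consensus_scores, temperature=1.0):
    """Turn consensus scores into soft targets each client can regress onto."""
    scaled = consensus_scores / temperature
    scaled -= scaled.max(axis=1, keepdims=True)          # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum(axis=1, keepdims=True)
```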