In other words, the task removes all of client i's contributions to the global model M_F and produces a new model M^{C\setminus\{i\}}, as if client i had never taken part in the training process. Challenges in Federated Unlearning: Incremental Learning Process. Updating the global model is an incremental process in which every update depends on the result of the previous update. Let \Delta M_T^k denote...
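A worked form of the incremental-update point makes the difficulty concrete. The FedAvg-style aggregation rule below is an assumption (the snippet is truncated right after \Delta M_T^k is introduced), but the conclusion it illustrates, that later updates depend on client i's earlier contributions, is exactly the challenge the passage names.

```latex
% Sketch under an assumed FedAvg-style aggregation; \Delta M_T^k is client k's
% update in round T, as in the snippet above.
\[
  M_T \;=\; M_{T-1} + \frac{1}{|C|} \sum_{k \in C} \Delta M_T^{k},
  \qquad
  \Delta M_T^{k} \text{ computed from } M_{T-1}.
\]
% Because each \Delta M_T^{k} is computed from M_{T-1}, which already reflects
% client i's earlier contributions, simply subtracting client i's updates,
%   M_T - \frac{1}{|C|} \sum_{t \le T} \Delta M_t^{i},
% does not recover the model M^{C\setminus\{i\}} that retraining without
% client i would have produced.
```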
In this paper, we propose FedDKD, a novel federated learning framework equipped with a decentralized knowledge distillation process (i.e., one requiring no data on the server). FedDKD introduces a decentralized knowledge distillation (DKD) module to distill the knowledge of the local ...
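The snippet cuts off before the DKD procedure is spelled out, so the following is only a generic knowledge-distillation step (ensemble of local teachers, global student, softened-KL loss) to fix what "distill the knowledge of the local models" means operationally; it is not a reconstruction of FedDKD's actual decentralized, server-data-free procedure.

```python
# Generic ensemble-to-student distillation step, assuming PyTorch. The
# "teachers" are local client models and the "student" is the global model;
# FedDKD's own decentralized arrangement is not recoverable from the snippet.
import torch
import torch.nn.functional as F

def distill_from_locals(global_model, local_models, x, T=2.0):
    """KL between the global model and the averaged local soft labels on batch x."""
    with torch.no_grad():
        teacher_probs = torch.stack(
            [F.softmax(m(x) / T, dim=-1) for m in local_models]
        ).mean(dim=0)                                  # average the local teachers
    student_log_probs = F.log_softmax(global_model(x) / T, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * T * T
```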
The huge communication cost in federated learning leads to heavy overheads on clients and high environmental burdens. Here, we present a federated learning method named FedKD that is both communication-efficient and effective, based on adaptive mutual knowledge distillation and dynamic gradient ...
indicating that the local model can distill the refined knowledge of the global model. FedX-enhanced models also have larger inter-class angles, demonstrating better class discrimination (see Figure 3-b). The paper “FedX: Unsupervised Federated Learning with Cross Kno...
A Federated Domain Adaptation Algorithm Based on Knowledge Distillation and Contrastive Learning
To address these problems, we propose a federated domain adaptation algorithm based on knowledge distillation and contrastive learning. Knowledge distillation is used to extract transferable integration knowledge from...
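The abstract names the two ingredients but is truncated before the losses are defined. The sketch below assumes a softened-KL distillation term from a global/source-domain teacher plus an InfoNCE-style contrastive term over feature embeddings; both are common choices, not necessarily the paper's, and the two-view student interface is hypothetical.

```python
# Minimal sketch combining a distillation loss and a contrastive loss, assuming
# PyTorch. `student` is assumed to return (embedding, logits); `teacher` is a
# hypothetical global/source-domain model providing transferable knowledge.
import torch
import torch.nn.functional as F

def kd_contrastive_loss(student, teacher, x_view1, x_view2, tau=0.1, T=2.0):
    z1, logits1 = student(x_view1)
    z2, _ = student(x_view2)
    with torch.no_grad():
        t_logits = teacher(x_view1)

    # Distillation: match the teacher's softened class distribution.
    kd = F.kl_div(F.log_softmax(logits1 / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * T * T

    # Contrastive (InfoNCE): two augmented views of the same sample are positives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / tau                      # [N, N] similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    contrast = F.cross_entropy(sim, targets)

    return kd + contrast
```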
we present a federated learning method named FedKD that is both communication-efficient and effective, based on adaptive mutual knowledge distillation and dynamic gradient compression techniques. FedKD is validated on three different scenarios that need privacy protection, showing that it maximally can ...
Thus, the communication cost in federated learning can be quite heavy. In this paper, we propose a communication-efficient federated learning method based on knowledge distillation. Instead of directly communicating the large models between clients and the server, we propose an adaptive mutual distillation ...
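The FedKD snippets describe adaptive mutual knowledge distillation between a large model kept on the client and a small model that is communicated. Below is a minimal sketch of one mutual-distillation step, assuming bidirectional softened-KL losses on top of the usual task loss; the adaptive loss weighting and the dynamic gradient compression mentioned in the abstract are omitted.

```python
# Minimal sketch of one mutual-distillation step, assuming PyTorch. The large
# model stays local; the small model is the one exchanged with the server.
import torch
import torch.nn.functional as F

def mutual_distillation_step(large_model, small_model, x, y,
                             opt_large, opt_small, temperature=2.0):
    """One local step: both models learn from the labels and from each other."""
    logits_l = large_model(x)
    logits_s = small_model(x)

    t = temperature
    soft_l = F.log_softmax(logits_l / t, dim=-1)
    soft_s = F.log_softmax(logits_s / t, dim=-1)

    # Task losses on the labelled local batch.
    ce_l = F.cross_entropy(logits_l, y)
    ce_s = F.cross_entropy(logits_s, y)

    # Mutual distillation: each model matches the other's softened outputs.
    kd_l = F.kl_div(soft_l, soft_s.detach().exp(), reduction="batchmean") * t * t
    kd_s = F.kl_div(soft_s, soft_l.detach().exp(), reduction="batchmean") * t * t

    opt_large.zero_grad(); opt_small.zero_grad()
    (ce_l + kd_l + ce_s + kd_s).backward()
    opt_large.step(); opt_small.step()
```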
This post applies dataset condensation (which I personally prefer to call dataset distillation) to federated learning: on each client, the local data is distilled into a condensed dataset, and the server trains on the condensed datasets contributed by the clients. On the client side, the authors alternately optimize the model parameters and the condensed dataset: first the model is optimized, then the model is fixed and the condensed dataset is optimized. To address...
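A sketch of that client-side alternation is given below. The snippet is truncated before the condensation objective is stated, so the gradient-matching criterion used here (make the model's loss gradient on synthetic data match its gradient on real data) is an assumed, common choice, not necessarily the paper's; all function and variable names are illustrative.

```python
# Minimal sketch of the alternating client update, assuming PyTorch: first
# optimize the model, then freeze it and optimize the condensed (synthetic)
# dataset. The gradient-matching objective is an assumption (see lead-in).
import torch
import torch.nn.functional as F

def client_round(model, real_loader, syn_x, syn_y, model_steps=10, data_steps=10):
    syn_x = syn_x.clone().detach().requires_grad_(True)   # learnable synthetic images
    opt_model = torch.optim.SGD(model.parameters(), lr=0.01)
    opt_data = torch.optim.SGD([syn_x], lr=0.1)

    # Step 1: optimize the model on the real local data.
    for _ in range(model_steps):
        x, y = next(iter(real_loader))
        opt_model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt_model.step()

    # Step 2: fix the model and optimize the condensed dataset.
    params = list(model.parameters())
    for _ in range(data_steps):
        x, y = next(iter(real_loader))
        g_real = torch.autograd.grad(F.cross_entropy(model(x), y), params)
        g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                    params, create_graph=True)
        match = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
        opt_data.zero_grad()
        match.backward()
        opt_data.step()

    return syn_x.detach(), syn_y     # sent to the server instead of raw data
```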