Knowledge distillation has emerged as a widely adopted and effective method for addressing two challenges of heterogeneous federated learning: data heterogeneity causes client drift, which makes model convergence difficult, and model heterogeneity prevents clients' differing architectures from being aggregated directly by parameter averaging.
Semi-supervised two-stage learning strategy: we are the first to employ federated knowledge distillation (stage 2) to fuse the knowledge of the per-task models from stage 1 into a student whose architecture differs from the teachers', when only small amounts of manual annotation are available. With the CNNs p...
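To make the idea concrete, here is a rough PyTorch-style sketch (not the authors' exact method) of fusing several per-task teachers into a student with a different architecture: averaged soft teacher targets on unlabeled data are combined with a cross-entropy term on the small labeled set. All module names, tensors, and hyperparameters are illustrative placeholders.

```python
# Hypothetical sketch: multi-teacher federated distillation into a student
# of a different architecture, with a small supervised term (semi-supervised).
import torch
import torch.nn.functional as F

def distill_step(student, teachers, x_unlabeled, x_labeled, y_labeled,
                 optimizer, temperature=2.0, alpha=0.5):
    """One training step: soft-label KD from the averaged teacher logits
    plus cross-entropy on the few available manual annotations."""
    student.train()
    with torch.no_grad():
        # Average the per-task teachers' logits on unlabeled data.
        teacher_logits = torch.stack([t(x_unlabeled) for t in teachers]).mean(dim=0)
        soft_targets = F.softmax(teacher_logits / temperature, dim=1)

    # Student mimics the softened teacher distribution (temperature-scaled KL).
    kd_loss = F.kl_div(
        F.log_softmax(student(x_unlabeled) / temperature, dim=1),
        soft_targets,
        reduction="batchmean",
    ) * temperature ** 2

    # Supervised term on the small labeled subset.
    ce_loss = F.cross_entropy(student(x_labeled), y_labeled)

    loss = alpha * kd_loss + (1 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```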
The huge communication cost in federated learning leads to heavy overheads on clients and high environmental burdens. Here, we present a federated learning method named FedKD that is both communication-efficient and effective, based on adaptive mutual knowledge distillation and dynamic gradient compression techniques.
The mutual knowledge distillation approach is a general method, not peculiar to the federated learning scenario: it provides a new direction for compressing large deep learning models while keeping high accuracy, and can serve as a fundamental technique to empower efficient deep learning. It also enables ...
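As a rough illustration of the mutual distillation idea (FedKD's adaptive loss weighting and SVD-based gradient compression are omitted), the sketch below has a large teacher and a small student learn jointly from the labels and from each other's softened predictions; names and hyperparameters are placeholders, not the paper's implementation.

```python
# Hypothetical sketch of mutual (two-way) knowledge distillation.
import torch
import torch.nn.functional as F

def mutual_distillation_step(teacher, student, x, y, opt_t, opt_s, temperature=2.0):
    """Both models fit the labels and distill from each other's soft outputs."""
    logits_t = teacher(x)
    logits_s = student(x)

    # Task losses on the ground-truth labels.
    ce_t = F.cross_entropy(logits_t, y)
    ce_s = F.cross_entropy(logits_s, y)

    # Each model distills from the other's (detached) softened predictions.
    kd_t = F.kl_div(F.log_softmax(logits_t / temperature, dim=1),
                    F.softmax(logits_s.detach() / temperature, dim=1),
                    reduction="batchmean") * temperature ** 2
    kd_s = F.kl_div(F.log_softmax(logits_s / temperature, dim=1),
                    F.softmax(logits_t.detach() / temperature, dim=1),
                    reduction="batchmean") * temperature ** 2

    loss = ce_t + ce_s + kd_t + kd_s
    opt_t.zero_grad()
    opt_s.zero_grad()
    loss.backward()
    opt_t.step()
    opt_s.step()
    return loss.item()
```

In a FedKD-like setup, only the small student model's updates are exchanged with the server, which is where the communication savings come from.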
1 Preface
Lately I have been studying data-free KD and FL. Today I am sharing a CVPR'22 paper: Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning. Federated learning (FL) is an approach that, under privacy constraints, …
Model optimization techniques in personalized federated learning: A survey
2.2 Knowledge distillation
Knowledge distillation is a technique commonly used in machine learning to transfer knowledge from one model, known as the teacher model, to another model, called the student model. This process involves...
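For reference, the standard soft-target distillation loss described here fits in a few lines of PyTorch; this is a generic sketch of Hinton-style distillation, not tied to any particular federated method.

```python
# Generic soft-target distillation loss: the student matches the teacher's
# temperature-softened class probabilities.
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened student and teacher distributions,
    scaled by T^2 to keep gradient magnitudes comparable across temperatures."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```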