model-contrastive federated learning: Contrastive federated learning is a machine learning technique built on federated learning. Inspired by joint training, it performs collaborative training on the clients' local datasets and applies contrasting features together with a contrastive loss on the client and server side to improve model accuracy...
Today we introduce one such paper, "Model-Contrastive Federated Learning". 1. Motivation A key challenge of federated learning is data heterogeneity across clients (Non-IID). Although many methods (e.g., FedProx, SCAFFOLD) have been proposed to address this problem, they perform poorly on image datasets (see Table 1 in the experiments). Traditional contrastive learning is data-level; this paper instead modifies the local model training stage of FedAvg...
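For reference, the local objective that MOON adds on top of FedAvg (following the paper's formulation; notation slightly simplified here) combines the usual supervised loss with a model-contrastive term:

\ell = \ell_{sup}\big(w_i^t; (x, y)\big) + \mu \cdot \ell_{con}\big(w_i^t, w^t, w_i^{t-1}; x\big)

\ell_{con} = -\log \frac{\exp\big(\mathrm{sim}(z, z_{glob})/\tau\big)}{\exp\big(\mathrm{sim}(z, z_{glob})/\tau\big) + \exp\big(\mathrm{sim}(z, z_{prev})/\tau\big)}

where z, z_glob and z_prev are the representations of the same input x under the current local model w_i^t, the received global model w^t, and the previous round's local model w_i^{t-1}; sim is cosine similarity, \tau a temperature, and \mu a weighting hyperparameter.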
2.1 Federated Learning Existing approaches to the non-IID problem fall into two groups: improving local training and improving aggregation. This paper belongs to the former, since the contrastive loss is computed locally. Another research direction is personalized federated learning, which tries to learn a separate model for each client; this paper stays in the classic federated learning setting and learns a single best global model. 2.2 Contrastive Learning The core idea of contrastive learning is to pull together the representations of the same sample (its different augmented views) so that the model learns better...
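To make the data-level notion concrete, here is a minimal NT-Xent-style contrastive loss sketch in PyTorch (an illustration of standard data-level contrastive learning such as SimCLR, not code from the MOON paper); z1 and z2 are the embeddings of two augmented views of the same batch:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Data-level contrastive (NT-Xent) loss over two views of the same batch.

    z1, z2: tensors of shape (batch, dim) -- embeddings of two augmentations
    of the same images. Positive pair = same image, different view.
    """
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    # Mask out self-similarity so a sample is never compared with itself.
    sim.fill_diagonal_(float("-inf"))
    # For row i, the positive sits `batch` positions away (the other view).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets.to(z.device))
```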
(Model-Contrastive Federated Learning) Model-contrastive federated learning 【Abstract】 Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data. A key challenge of federated learning is handling the heterogeneity of local data distributions across the parties. Although many studies have been proposed to address this challenge, we find that they fail to achieve high performance with deep learning models...
model-contrastive federated learning, the approach: The idea of Model-Contrastive Federated Learning is to take the parameters each user trains locally as the main feature and, with related techniques, realize "cross-user" parameter-update training. In each training round, the system pulls every user's model parameters from the server, computes the model's inputs and outputs, and works out the model change from the recorded training parameters. After that...
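The description above is loose; for orientation, below is a minimal sketch of the round structure that MOON shares with FedAvg (the server broadcasts the global model, clients train locally, the server aggregates by weighted averaging). The helper names here (local_update, the (dataset, num_samples) client tuples) are placeholders for illustration, not from the paper's code.

```python
import copy
import torch

def fedavg_round(global_model, clients, local_update):
    """One communication round in the FedAvg/MOON style.

    global_model : the current global torch.nn.Module
    clients      : list of (dataset, num_samples) pairs -- placeholders
    local_update : callable(local_model, global_model, dataset) -> state_dict,
                   e.g. a few epochs of SGD (plus MOON's contrastive term).
    """
    updates, weights = [], []
    for dataset, num_samples in clients:
        # Each client starts from a copy of the broadcast global model.
        local_model = copy.deepcopy(global_model)
        updates.append(local_update(local_model, global_model, dataset))
        weights.append(num_samples)

    # Weighted average of the returned parameters (FedAvg aggregation).
    # (Assumes floating-point parameters; integer buffers would need special-casing.)
    total = float(sum(weights))
    new_state = copy.deepcopy(updates[0])
    for key in new_state:
        new_state[key] = sum(w * u[key] for w, u in zip(weights, updates)) / total
    global_model.load_state_dict(new_state)
    return global_model
```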
We find that they fail to achieve high performance in image datasets with deep learning models. In this paper, we propose MOON: model-contrastive federated learning. MOON is a simple and effective federated learning framework. The key idea of MOON is to utilize the similarity between model representations to correct the local training of individual parties, i.e., conducting contrastive learning at the model level.
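A minimal sketch of that model-level contrastive term in PyTorch (written from the description above, not taken from the authors' released code); z is the representation of a batch under the model being trained, z_glob under the received global model, and z_prev under the client's previous local model:

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z, z_glob, z_prev, temperature=0.5):
    """Model-level contrastive loss: pull the local representation toward the
    global model's representation and push it away from the previous local one.

    z, z_glob, z_prev: (batch, dim) representations of the SAME inputs under the
    current local model, the global model, and the last round's local model.
    """
    pos = F.cosine_similarity(z, z_glob, dim=-1) / temperature   # positive pair
    neg = F.cosine_similarity(z, z_prev, dim=-1) / temperature   # negative pair
    logits = torch.stack([pos, neg], dim=1)                      # (batch, 2)
    labels = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)                       # -log softmax of the positive

# During local training the total objective would be roughly:
#   loss = F.cross_entropy(model(x), y) + mu * model_contrastive_loss(z, z_glob, z_prev)
# with z_glob and z_prev computed under torch.no_grad().
```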
Our novel approach, FedARC, addresses this issue through personalized federated learning (PFL), enabling the use of private data without direct access. FedARC innovatively navigates data heterogeneity and privacy concerns by employing adaptive regularization and model-contrastive learning. This method not...
Based on it, we introduce pixel-level contrastive learning to enforce that local pixel embeddings belong to the global semantic space. Extensive experiments on four semantic segmentation benchmarks (Cityscapes, CamVID, PascalVOC and ADE20k) demonstrate the effectiveness of our FedSeg. We hope this work will...
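As a rough illustration of what a pixel-level contrastive term can look like, here is a generic sketch built around class prototypes; it is not FedSeg's actual loss, and the prototype tensor and label layout are assumptions:

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(pixel_emb, labels, prototypes, temperature=0.1,
                           ignore_index=255):
    """Generic pixel-level contrastive loss against class prototypes.

    pixel_emb  : (B, D, H, W) per-pixel embeddings from the segmentation head
    labels     : (B, H, W) ground-truth class indices
    prototypes : (C, D) one embedding per class, spanning the shared
                 ("global") semantic space
    Each pixel is pulled toward its own class prototype and pushed away
    from the other class prototypes.
    """
    b, d, h, w = pixel_emb.shape
    emb = F.normalize(pixel_emb.permute(0, 2, 3, 1).reshape(-1, d), dim=1)  # (B*H*W, D)
    protos = F.normalize(prototypes, dim=1)                                 # (C, D)
    logits = emb @ protos.t() / temperature                                 # (B*H*W, C)
    flat_labels = labels.reshape(-1)
    return F.cross_entropy(logits, flat_labels, ignore_index=ignore_index)
```

In a federated setting, the shared prototypes are what would anchor every client's pixel embeddings to the same semantic space.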