1) We propose selectively-supervised contrastive learning with noisy labels, which obtains robust pre-trained representations by effectively selecting confident pairs on which to perform supervised contrastive learning (Sup-CL). 2) Without knowing the noise rate, our method selects pairs built from identified confident examples whose representations are highly similar. This establishes a positive cycle: better confident pairs lead to better representations, and better representations in turn identify better confident pairs.
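The selection-then-contrast step can be made concrete with a minimal sketch. The code below is an illustrative assumption, not the paper's exact algorithm: `select_confident_pairs`, `sim_threshold`, and the way the loss is restricted are hypothetical names and choices, and PyTorch is assumed as the framework. A pair is kept only when two examples share a (possibly noisy) label and their normalized representations exceed a similarity threshold; the supervised contrastive loss is then computed over those pairs alone.

```python
import torch
import torch.nn.functional as F

def select_confident_pairs(embeddings, noisy_labels, sim_threshold=0.8):
    """Keep pairs that share a (possibly noisy) label AND whose
    L2-normalized representations are highly similar."""
    z = F.normalize(embeddings, dim=1)            # (N, D) unit-norm features
    sim = z @ z.T                                 # pairwise cosine similarity
    same_label = noisy_labels[:, None] == noisy_labels[None, :]
    confident = same_label & (sim > sim_threshold)
    confident.fill_diagonal_(False)               # drop self-pairs
    return confident                              # (N, N) boolean pair mask

def selective_supcon_loss(embeddings, pair_mask, temperature=0.1):
    """Supervised contrastive loss restricted to the selected pairs."""
    z = F.normalize(embeddings, dim=1)
    logits = (z @ z.T) / temperature
    logits.fill_diagonal_(-1e9)                   # self never serves as positive/negative
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = pair_mask.float()
    n_pos = pos.sum(dim=1).clamp(min=1)           # positives per anchor
    per_anchor = -(log_prob * pos).sum(dim=1) / n_pos
    has_pos = pair_mask.any(dim=1)                # only anchors with a confident pair
    return per_anchor[has_pos].mean()

# Usage: z = encoder(x); mask = select_confident_pairs(z.detach(), y_noisy)
# loss = selective_supcon_loss(z, mask)
```

In a full training loop the pair mask would be recomputed from the current encoder each epoch, which is what drives the positive cycle described above.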
Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, and Jingtong Hu. "Enabling On-Device Self-Supervised Contrastive Learning with Selective Data Contrast." IEEE Design Automation Conference (DAC), 2021. doi:10.1109/DAC18074.2021.9586228
Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance. GitHub repository: Wolfda95/Less_is_More
We then jointly perform dual contrastive learning and metric learning to provide different supervision signals for relational learning. Extensive experiments on benchmark datasets substantiate the superiority of SuperRL over state-of-the-art baselines on different evaluation metrics. The source ...
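As a rough illustration of combining two supervision signals, the sketch below jointly optimizes an InfoNCE-style contrastive term and a triplet metric term on the same embeddings. This is a generic sketch under assumed choices (the weighting `alpha`, `temperature`, and `margin` are hypothetical), not SuperRL's published formulation.

```python
import torch
import torch.nn.functional as F

def joint_loss(anchor, positive, negative,
               temperature=0.1, margin=1.0, alpha=0.5):
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    # Contrastive (InfoNCE-style) signal: the positive should score
    # higher than the negative for each anchor.
    pos_logit = (a * p).sum(dim=1, keepdim=True) / temperature
    neg_logit = (a * n).sum(dim=1, keepdim=True) / temperature
    target = torch.zeros(a.size(0), dtype=torch.long, device=a.device)
    contrastive = F.cross_entropy(torch.cat([pos_logit, neg_logit], dim=1), target)
    # Metric-learning signal: enforce a margin in embedding space.
    metric = F.triplet_margin_loss(a, p, n, margin=margin)
    return alpha * contrastive + (1 - alpha) * metric
```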