When I first read the Supervised Contrastive Learning code, it looked odd: it did not seem to match the formula in the paper. Then I remembered that log_softmax also subtracts the maximum value, and realised the logic is the same; this is just a quick note. Log_Softmax. Note that this is applied over all points, and pay attention to the mask matrix: mask: the samples with the same category ...
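A minimal sketch of the point, assuming SupContrast-style naming (`anchor_dot_contrast` for the temperature-scaled similarity matrix; the toy labels are mine) rather than the repo's exact code: subtracting the per-row maximum before exponentiating gives exactly the same result as log_softmax, because log_softmax is invariant to subtracting a constant from each row.

```python
import torch
import torch.nn.functional as F

# Toy labels and a hypothetical temperature-scaled similarity matrix.
labels = torch.tensor([0, 1, 0, 2, 1, 0, 2, 1])
anchor_dot_contrast = torch.randn(8, 8)

# The mask mentioned above: 1 where two samples share the same category.
mask = torch.eq(labels.view(-1, 1), labels.view(1, -1)).float()

# The step that looked odd at first: subtract the per-row max before exp.
# Because log_softmax(x) = x - logsumexp(x) is unchanged when a constant is
# subtracted from every entry of a row, this equals F.log_softmax row by row,
# just computed in a numerically stable way.
logits_max, _ = anchor_dot_contrast.max(dim=1, keepdim=True)
logits = anchor_dot_contrast - logits_max.detach()
log_prob = logits - torch.log(torch.exp(logits).sum(dim=1, keepdim=True))

print(torch.allclose(log_prob, F.log_softmax(anchor_dot_contrast, dim=1), atol=1e-6))  # True
```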
The authors show that Triplet Loss is a special case of the contrastive loss when one positive and one negative sample are used. When multiple negatives are used, the authors show that the SupCon loss is equivalent to the N-pairs loss. Experiments. The experiments also confirm a real improvement:
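As a quick illustration of the single-positive, single-negative case (my own paraphrase of the reduction, not the paper's full derivation), the per-anchor loss becomes

$$\mathcal{L} = -\log\frac{\exp(z\cdot z_p/\tau)}{\exp(z\cdot z_p/\tau)+\exp(z\cdot z_n/\tau)} = \log\!\left(1+\exp\!\big((z\cdot z_n - z\cdot z_p)/\tau\big)\right),$$

and for unit-normalised embeddings $z\cdot z_n - z\cdot z_p = \tfrac{1}{2}\big(\lVert z-z_p\rVert^2 - \lVert z-z_n\rVert^2\big)$, so minimising it pushes the positive closer to the anchor than the negative, which is what the triplet margin objective enforces.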
《Selective-Supervised Contrastive Learning with Noisy Labels》 introduces a filtering mechanism that uses high-confidence positives for supervised contrastive learning, improving the quality of the supervision. 《Balanced Contrastive Learning for Long-Tailed Visual Recognition》 proposes a balanced supervised contrastive learning loss: (1) it balances the imbalanced negative classes through class-averaging (see the sketch below) ...
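A rough sketch of what "class-averaging" in the denominator might look like, based on my reading of the idea; the function name and structure are illustrative and not the official BCL implementation:

```python
import torch

def class_averaged_denominator(sim_row: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """For one anchor: average the exponentiated similarities within each class
    before summing over classes, so that head classes with many samples do not
    dominate the contrastive denominator."""
    exp_sim = torch.exp(sim_row / tau)
    per_class_means = [exp_sim[labels == c].mean() for c in labels.unique()]
    return torch.stack(per_class_means).sum()
```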
Supervised Contrastive Learning (SupContrast) based on MoCo-v2. Topics: representation-learning, contrastive-learning, supervised-contrastive-learning, moco-v2. Python.
AfsahS/Supervised-Contrastive-Ordinal-Loss-for-Ordinal-Regression: code for the MICCAI 2023 publication "SCOL: Supervised Contrastive Ordina..."
(1) Supervised Contrastive Learning. Paper (2) A Simple Framework for Contrastive Learning of Visual Representations. Paper Update Note: if you found it not easy to parse the SupCon loss implementation in this repo, we got you. SupCon loss essentially is just a cross-entropy loss (see eq 4 in...
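To make that note concrete, here is a hedged sketch of the loss written as a cross-entropy against a soft target that is uniform over the positives. This is not the repo's actual SupConLoss class; it assumes a single labelled view per sample, and the helper name is mine:

```python
import torch
import torch.nn.functional as F

def supcon_as_cross_entropy(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.07):
    # Unit-normalise the embeddings and form pairwise similarity logits.
    z = F.normalize(features, dim=1)
    logits = z @ z.T / tau
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    # Soft target: a uniform distribution over same-class samples, excluding self.
    pos_mask = (labels.view(-1, 1) == labels.view(1, -1)) & ~self_mask
    target = pos_mask.float() / pos_mask.sum(1, keepdim=True).clamp(min=1)
    # Log-probabilities with the self-similarity dropped from the denominator.
    exp_logits = torch.exp(logits).masked_fill(self_mask, 0.0)
    log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True))
    # Cross-entropy with soft targets: average -log p over each anchor's positives.
    return -(target * log_prob).sum(1).mean()
```

Usage is just `loss = supcon_as_cross_entropy(model(x), y)`; the soft-target view makes it clear why the implementation can reuse standard log-softmax machinery.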
Paper: Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification. GitHub: https://github.com/emadeldeen24/CA-TCC. A journal paper in TPAMI (IEEE Transactions on Pattern Analysis and Machine Intelligence), which is a CCF-A ranked journal.
Paper: TimesURL: Self-supervised Contrastive Learning for Universal Time Series Representation Learning. GitHub: https://github.com/Alrash/TimesURL. An AAAI 2024 paper. Abstract: learning universal time series representations that serve all kinds of downstream tasks is challenging but valuable in real-world applications. Recently, researchers have tried to leverage computer vision (CV) and natural language processing (NLP)...
(b) Unlike concurrent semi-supervised contrastive learning works (Zhao et al., 2020, Alonso et al., 2021), we do not use the pseudo-labels of unlabeled images directly in the cross-entropy loss, as doing so can reinforce erroneous predictions during training (Chapelle et...
Code: https://github.com/HobbitLong/SupContrast. 《Supervised Contrastive Learning》 is a paper from NeurIPS 2020. This post mainly introduces a contrastive learning method for improving feature quality; unlike earlier self-supervised contrastive learning, it is supervised contrastive learning. Contrastive learning (as a classification algorithm). The specific approach:
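For reference, the supervised contrastive loss in its standard "sum over positives outside the log" form, written here from the commonly cited version of the paper's equation, where $P(i)$ is the set of samples sharing anchor $i$'s label, $A(i)$ the other samples in the multi-viewed batch, and $\tau$ the temperature:

$$\mathcal{L}^{sup}_{out}=\sum_{i\in I}\frac{-1}{|P(i)|}\sum_{p\in P(i)}\log\frac{\exp(z_i\cdot z_p/\tau)}{\sum_{a\in A(i)}\exp(z_i\cdot z_a/\tau)}$$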
LTH14/targeted-supcon: a PyTorch implementation of the paper Targeted Supervised Contrastive Learning for Long-tailed Recognition (github.com) github.com/LTH14/targeted-supcon. Brief summary: uniformity means that, ideally, supervised contrastive learning should converge to an embedding in which the different classes are uniformly distributed on the hypersphere. In long-tailed learning, however, supervised contrastive...
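To make "uniformly distributed on the hypersphere" measurable, one common choice is the uniformity metric of Wang & Isola (2020); this is an assumption for illustration here, not something taken from the targeted-supcon repo:

```python
import torch
import torch.nn.functional as F

def uniformity(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Log of the mean Gaussian potential over all pairs of L2-normalised
    embeddings; lower values mean the points are spread more uniformly
    over the unit hypersphere."""
    z = F.normalize(z, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)   # squared distances for all pairs i < j
    return sq_dists.mul(-t).exp().mean().log()
```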