b) The GCT network structure is the same as that of our method; the only difference is that GCT uses a confidence map as the supervision signal, whereas CPS uses one-hot labels. c) Mean Teacher uses a student network and a teacher network with identical architectures but different parameter initializations; the student is supervised by the confidence map produced by the teacher, while the teacher's weights track the student's weights via an exponential moving average. d) An input image X is separately passed through weak data...
For each of the two student branches, the respective teacher branch, used to generate high-quality pseudo labels, is constructed as an exponential moving average (EMA) of the student's weights. The pseudo one-hot label produced by one teacher branch supervises the other student network bran...
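The EMA teacher update and one-hot pseudo-label generation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: parameters are represented as NumPy arrays, and the function names (`ema_update`, `one_hot_pseudo_label`) are placeholders.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """EMA update of the teacher weights from the student weights:
    theta_t <- alpha * theta_t + (1 - alpha) * theta_s.
    Parameters are dicts mapping name -> ndarray (illustrative)."""
    return {
        name: alpha * teacher_params[name] + (1.0 - alpha) * student_params[name]
        for name in teacher_params
    }

def one_hot_pseudo_label(teacher_logits):
    """Turn teacher predictions (N, C) into one-hot pseudo labels via argmax,
    which then supervise the other student branch."""
    n, c = teacher_logits.shape
    labels = teacher_logits.argmax(axis=1)
    one_hot = np.zeros((n, c))
    one_hot[np.arange(n), labels] = 1.0
    return one_hot
```

A larger `alpha` makes the teacher change more slowly and its pseudo labels more stable across iterations.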
The second group of baselines consists of two self-training methods, which generate pseudo labels for the unlabeled set and select high-confidence labels to expand the labeled data. Pseudo-Label [17] generates pseudo labels without data augmentation, while UPS [26] creates complementary labels for low-confidence samples in the unlabeled set. The third group contains consistency regularization methods, among which the Π-model [25] and Mean Teacher [32] are two classic baselines, and UDA [33] represents...
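The high-confidence selection step shared by these self-training baselines can be sketched as below. The threshold value and function name are assumptions for illustration, not taken from the cited papers.

```python
import numpy as np

def select_confident(probs, threshold=0.9):
    """Keep only unlabeled samples whose max class probability exceeds
    `threshold`; these receive pseudo labels and extend the labeled set.

    probs: (N, C) softmax outputs on the unlabeled set.
    Returns (kept_indices, pseudo_labels)."""
    confidence = probs.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)
```

Low-confidence samples are simply discarded here; UPS [26] instead assigns them complementary (negative) labels rather than dropping them.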
(8) where $\hat{l}_{r_m}$ and $\hat{l}_{r_n}$ denote the pseudo labels of two regions $r_m, r_n \in R^{Txt}$, respectively. Thus, given the affinity matrix $E_x^{St}$ of the student and the supervision matrix $M_x^{Tt}$, the intra-graph consistency loss (AGL) is defined as: $(M_x^{Tt})_{m...
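Since the definition is truncated, the following is only a hedged sketch of one plausible reading: the supervision matrix marks region pairs that share a pseudo label, and the intra-graph consistency loss pulls the student affinity matrix toward it (binary cross-entropy is an assumption here, not confirmed by the excerpt; all names are illustrative).

```python
import numpy as np

def supervision_matrix(pseudo_labels):
    """M[m, n] = 1 iff regions r_m and r_n share the same pseudo label
    (an assumed reading of the truncated definition)."""
    l = np.asarray(pseudo_labels)
    return (l[:, None] == l[None, :]).astype(float)

def intra_graph_loss(affinity, M):
    """Sketch of an intra-graph consistency loss: elementwise binary
    cross-entropy between the student affinity matrix and M."""
    eps = 1e-8
    a = np.clip(affinity, eps, 1.0 - eps)
    return float(-(M * np.log(a) + (1.0 - M) * np.log(1.0 - a)).mean())
```

When the student affinities agree with the pseudo-label structure, the loss approaches zero.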
[CVPR 2021] CPS: Semi-Supervised Semantic Segmentation with Cross Pseudo Supervision - charlesCXK/TorchSemiSeg
We present a novel general framework, termed Multi-Granularity Confidence Alignment Mean Teacher (MGCAMT), for cross-domain object detection, which simultaneously alleviates confidence misalignment at the category, instance, and image levels to obtain high-quality pseudo supervision for better teacher-studen...
We propose a novel multi-task semi-supervised method based on multi-branch cross pseudo supervision, called MS2MPS, which efficiently utilizes unlabeled data for semi-supervised medical image segmentation. The proposed method consists of two multi-task backbone networks with multiple output branches...
Figure 1. The effectiveness of domain loss and weak-strong augmentation: pseudo-labeling accuracy on the target-domain training set (y-axis, 0.3–0.8) over mutual learning iterations (10k–60k), comparing Mean Teacher (MT), MT + Weak-Strong Augmentation, MT + Adversarial Loss, and Adaptive Teacher (Ours)...
Table 4: Results of domain generalization on an unseen target dataset, which leverages the labeled source data and another domain without supervision. The average precision (AP, %) is reported. The backbone is ResNet-101 for fair comparison. "WS Aug." indicates weak-strong augmentation...
In addition to considering $D_w$ as an unlabeled set and imposing a consistency over its examples, the pseudo-labels are used to train the auxiliary networks $g_k^a \circ h$ using a weakly supervised loss $L_w$. In this case, the loss in Eq. (3) becomes: $L = L_s$ ...
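The combined objective above is truncated in the excerpt, so the following is only a sketch under assumptions: the total loss adds a weakly supervised pseudo-label term to the supervised and consistency terms, with illustrative weights `lam_c` and `lam_w` that are not from the paper.

```python
import numpy as np

def weak_loss(aux_probs, pseudo_labels):
    """Cross-entropy of auxiliary-head predictions (N, C) against integer
    pseudo labels (a sketch of L_w; names are illustrative)."""
    eps = 1e-8
    n = len(pseudo_labels)
    picked = aux_probs[np.arange(n), pseudo_labels]
    return float(-np.log(np.clip(picked, eps, 1.0)).mean())

def total_loss(l_s, l_c, l_w, lam_c=1.0, lam_w=1.0):
    """L = L_s + lam_c * L_c + lam_w * L_w -- an assumed combination,
    since the extension of Eq. (3) is cut off in the excerpt."""
    return l_s + lam_c * l_c + lam_w * l_w
```
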