To address these challenges, we advocate "mixup your own contrastive pairs for supervised contrastive regression", instead of relying solely on real/augmented samples. Specifically, we propose Supervised Contrastive Learning for Regression with Mixup (SupReMix). It takes anchor-inclusive mixtures (mixup...
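Since the excerpt is cut off, the exact SupReMix construction is not shown; as an illustration only, the generic mixup operation that the abstract builds on interpolates two feature vectors with a coefficient λ, and an "anchor-inclusive" mixture takes the anchor itself as one endpoint. A minimal sketch (function name `mixup` and the toy vectors are assumptions, not the paper's code):

```python
def mixup(x_i, x_j, lam):
    # Classic mixup: convex combination lam * x_i + (1 - lam) * x_j.
    # "Anchor-inclusive" here means x_i is the anchor sample itself.
    return [lam * a + (1 - lam) * b for a, b in zip(x_i, x_j)]

anchor = [1.0, 0.0]   # toy anchor feature vector
other = [0.0, 1.0]    # toy second sample
mixed = mixup(anchor, other, lam=0.75)
print(mixed)  # [0.75, 0.25]
```

In practice λ is typically drawn from a Beta distribution per pair rather than fixed.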
This paper targets the modality gap in speech translation with a simple and effective cross-modal Mixup method: Mixup produces sequences that contain both speech and text representations, so the model establishes cross-modal connections during training. On top of this, the paper introduces a self-learning framework that lets the speech translation task learn knowledge from Mixup, further improving speech translation performance.
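The cross-modal mixing described above can be sketched as a token-level swap over pre-aligned embedding sequences; this is a simplified illustration, not the paper's actual implementation, and the function name, alignment assumption, and probability parameter `p_text` are all assumptions:

```python
import random

def cross_modal_mixup(speech_emb, text_emb, p_text=0.5, seed=0):
    # For each aligned position, keep the speech embedding with
    # probability 1 - p_text, otherwise substitute the text embedding,
    # yielding one sequence that mixes both modalities.
    # Assumes speech_emb and text_emb are already aligned to equal length.
    assert len(speech_emb) == len(text_emb)
    rng = random.Random(seed)
    return [t if rng.random() < p_text else s
            for s, t in zip(speech_emb, text_emb)]

# toy example: 4 aligned positions, 2-dim embeddings per position
speech = [[0.0, 0.0]] * 4
text = [[1.0, 1.0]] * 4
mixed = cross_modal_mixup(speech, text, p_text=0.5)
print(len(mixed))  # 4
```

Training on such mixed sequences exposes the translation model to both modalities within a single input, which is what lets it bridge the speech/text representation gap.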