Contrastive learning has achieved impressive results in visual representation learning, natural language processing, and graph neural networks. Recently, several studies have introduced contrastive learning into recommender systems: SGL provides auxiliary signals for GCN-based recommendation models through node self-discrimination, and SEPT designs a socially-aware self-supervised framework that learns discriminative signals from both the user-item graph and the social graph. Some works have also brought contrastive learning into sequential recommendation, such as S^3-Rec.
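All of these methods optimize a noise-contrastive objective. As a common point of reference (the exact objective differs from paper to paper), the InfoNCE loss for two views $z_i$ and $z_i'$ of the same node or user, with temperature $\tau$ over a batch of $N$ samples, is:

$$\mathcal{L}_{cl} = -\log \frac{\exp(\mathrm{sim}(z_i, z_i')/\tau)}{\sum_{j=1}^{N}\exp(\mathrm{sim}(z_i, z_j')/\tau)}$$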
Title: Multi-level Contrastive Learning Framework for Sequential Recommendation
Link: arxiv.org/pdf/2208.1300
Venue: CIKM 2022
Affiliations: Huazhong University of Science and Technology, Alibaba

1. Overview
This paper proposes a remedy for the data-sparsity problem in sequential recommendation, targeting the shortcomings of existing contrastive-learning approaches in this setting.
Paper: Multi-Level Graph Contrastive Learning
Authors: Pengpeng Shao, Tong Liu, Dawei Zhang, J. Tao, Feihu Che, Guohua Yang
Venue: Neurocomputing, 2021

1 Introduction
Contributions: a multi-level graph contrastive learning framework that jointly performs node-level and graph-level contrastive learning, and a KNN graph introduced to extract semantic information; ...
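To make the KNN-graph idea concrete, here is a minimal sketch (not the authors' code; the feature matrix `X` and neighbour count `k` are illustrative) that builds a semantic KNN graph from node features via cosine similarity:

```python
import numpy as np

def knn_graph(X: np.ndarray, k: int = 10) -> np.ndarray:
    """Build a binary KNN adjacency matrix from an (n_nodes, dim) feature matrix."""
    # Cosine similarity between all pairs of node features.
    normed = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)           # exclude self-loops
    # Keep the k most similar neighbours per node.
    idx = np.argsort(-sim, axis=1)[:, :k]
    adj = np.zeros_like(sim)
    np.put_along_axis(adj, idx, 1.0, axis=1)
    return np.maximum(adj, adj.T)            # symmetrize the graph
```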
MCLSR then performs a cross-view contrastive learning paradigm at two levels, the interest level and the feature level. At the interest level, MCLSR obtains sequential information from the sequential view and collaborative information from the user-item view, and a contrastive mechanism captures the complementary information between the two views. At the feature level, MCLSR re-observes the relations among users and among items by running GNNs over the user-user view and the item-item view, and again uses contrastive learning to align the representations learned from these two views.
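A minimal sketch of this cross-view contrastive step, assuming representations `z_a` from one view (e.g., the sequential view) and `z_b` from another (e.g., the user-item view); this is a generic InfoNCE-style illustration, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z_a: torch.Tensor, z_b: torch.Tensor,
                       tau: float = 0.2) -> torch.Tensor:
    """InfoNCE between two views: matching rows are positives,
    all other rows in the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau                     # (batch, batch) similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric loss: contrast view A against B and B against A.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```

At the interest level, the two inputs would be a user's sequential and collaborative representations; at the feature level, the outputs of the GNNs on the user-user and item-item views.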
Additionally, multi-level contrastive learning, as an auxiliary self-supervised task, is trained jointly with the primary supervised task, further enhancing recommendation performance. Experimental results on the MovieLens and Amazon-Books datasets demonstrate the effectiveness of the framework.
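With $\mathcal{L}_{rec}$ denoting the supervised recommendation loss and the contrastive terms acting as auxiliary losses, the joint objective is the usual weighted sum (the $\lambda$ weights are hyperparameters; this notation is mine, not necessarily the paper's):

$$\mathcal{L} = \mathcal{L}_{rec} + \lambda_1 \mathcal{L}_{interest} + \lambda_2 \mathcal{L}_{feature}$$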
(3) This paper therefore proposes a multi-level feature learning framework for contrastive multi-view clustering. (4) The proposed method learns low-level features, high-level features, and semantic labels/features from the raw features without fusing the views, so the reconstruction objective and the consistency objective can be optimized in different feature spaces.
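A per-view sketch of this "no fusion" design, assuming each view gets its own autoencoder: reconstruction is optimized on low-level features, while cross-view consistency is applied to high-level features and semantic labels (all module names and dimensions below are illustrative):

```python
import torch.nn as nn

class ViewEncoder(nn.Module):
    """One module per view: low-level features feed reconstruction,
    high-level features and semantic labels feed cross-view consistency."""
    def __init__(self, in_dim: int, low_dim: int = 512,
                 high_dim: int = 128, n_clusters: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, low_dim), nn.ReLU())
        self.decoder = nn.Linear(low_dim, in_dim)   # reconstruction head (low-level objective)
        self.high = nn.Linear(low_dim, high_dim)    # high-level head (consistency objective)
        self.label = nn.Sequential(nn.Linear(high_dim, n_clusters),
                                   nn.Softmax(dim=1))

    def forward(self, x):
        z_low = self.encoder(x)
        x_rec = self.decoder(z_low)                 # optimized per view, no fusion
        z_high = self.high(z_low)
        q = self.label(z_high)                      # semantic cluster assignments
        return x_rec, z_high, q
```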
Different from traditional contrastive learning methods, which generate two graph views through uniform data augmentation schemes such as corruption or edge dropping, we comprehensively consider three different graph views for KG-aware recommendation: a global-level structural view and local-level collaborative and semantic views.
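For contrast, the uniform-augmentation baseline mentioned above typically just drops edges at random; a minimal sketch, assuming the graph is stored as a (2, E) edge-index tensor:

```python
import torch

def drop_edges(edge_index: torch.Tensor, drop_rate: float = 0.2) -> torch.Tensor:
    """Uniform edge dropping: keep each edge independently with prob 1 - drop_rate."""
    mask = torch.rand(edge_index.size(1), device=edge_index.device) >= drop_rate
    return edge_index[:, mask]
```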
# Paper: https://openaccess.thecvf.com/content/CVPR2022/html/Xu_Multi-Level_Feature_Learning_for_Contrastive_Multi-View_Clustering_CVPR_2022_paper.html
# To test the trained model, run: python test.py
# To train a new model, run: python train.py
# The experiments are carried out on a...
Improved disentangled speech representations using contrastive learning in a factorized hierarchical variational autoencoder. By utilizing the fact that speaker identity and content vary on different time scales, the factorized hierarchical variational autoencoder (FHVAE) uses sequential latent variab... Y. Xie, T. Arildsen, ...