KDD 2022 | CrossCBR: Cross-view Contrastive Learning for Bundle Recommendation
Source: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022)
Paper: https://arxiv.org/pdf/2206.00242.pdf
Code: https://github.com/mysbupt/CrossCBR
Authors: Ma, Yunshan and He, Yi...
Affiliations: National University of Singapore; University of Science and Technology of China
1. Overview
Bundle recommendation aims to recommend to users a...
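The "cross-view contrastive learning" in the title refers to an InfoNCE-style objective that aligns the same user's (or bundle's) representations learned from two graph views (the item view and the bundle view), while treating the other users in the batch as negatives. Below is a minimal sketch of such a loss; the function name `cross_view_infonce`, the temperature value, and the tensor shapes are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F


def cross_view_infonce(z_view1: torch.Tensor,
                       z_view2: torch.Tensor,
                       temperature: float = 0.25) -> torch.Tensor:
    """InfoNCE-style cross-view contrastive loss (illustrative sketch).

    z_view1, z_view2: [batch, dim] embeddings of the same users (or bundles)
    produced by two different graph views. Row i of both tensors forms the
    positive pair; all other in-batch rows serve as negatives.
    """
    z1 = F.normalize(z_view1, dim=1)
    z2 = F.normalize(z_view2, dim=1)

    # Cosine similarity between every cross-view pair: [batch, batch]
    logits = z1 @ z2.t() / temperature

    # Diagonal entries are the positives (same user/bundle in both views)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage: 8 users with 64-dim embeddings from the item view and bundle view
    u_item_view = torch.randn(8, 64)
    u_bundle_view = torch.randn(8, 64)
    print(cross_view_infonce(u_item_view, u_bundle_view).item())
```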
To tackle this, inspired by the recent success of contrastive learning in mining supervision signals from the data itself, we propose a novel Cross-view Contrastive learning mechanism for Knowledge-aware Session-based Recommendation, named CCKSR. Our model comprehensively considers two different graph views,...
Different from traditional contrastive learning methods, which generate two graph views by uniform data augmentation schemes such as corruption or dropping, we comprehensively consider three different graph views for KG-aware recommendation, including a global-level structural view, a local-level collaborative and...
Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System (MCCLK) is a knowledge-aware recommendation solution based on GNNs and contrastive learning, proposing a multi-level cross-view contrastive framework to enhance representation learning from multifaceted aspects....
The paper proposes cross-view mutual information maximization (CV-MIM), which maximizes mutual information of the same pose performed from different viewpoints in a contrastive learning manner. We further propose two regularization terms to ensure disentanglement and smoothness of the learned representations. The resulting pose representations can be used for cross-...
We leverage contrastive learning with domain-specific hard negative mining to train a network to learn similar representations between the synthesized BEV and the aerial map. During inference, BEVLoc guides the identification of the most probable locations within the aerial map through a coarse-to-...
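The snippet above mentions contrastive learning with hard negative mining. As a generic illustration (not BEVLoc's actual, domain-specific mining strategy), the sketch below keeps only the most similar, i.e. hardest, negatives per anchor before computing an InfoNCE-style loss; the function name and the `num_hard`/`temperature` values are assumptions for the example.

```python
import torch
import torch.nn.functional as F


def hard_negative_infonce(anchor: torch.Tensor,
                          positive: torch.Tensor,
                          negatives: torch.Tensor,
                          num_hard: int = 16,
                          temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over only the hardest negatives (illustrative sketch).

    anchor:    [batch, dim]     query embeddings (e.g. synthesized BEV features)
    positive:  [batch, dim]     matching embeddings (e.g. aligned aerial patches)
    negatives: [batch, n, dim]  candidate negative embeddings per anchor
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)

    pos_sim = (a * p).sum(dim=-1, keepdim=True)      # [batch, 1]
    neg_sim = torch.einsum('bd,bnd->bn', a, n)       # [batch, n]

    # "Mining": keep only the negatives most similar to the anchor
    k = min(num_hard, neg_sim.size(1))
    hard_neg_sim, _ = neg_sim.topk(k=k, dim=1)

    # Index 0 of each row is the positive; the rest are the hard negatives
    logits = torch.cat([pos_sim, hard_neg_sim], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```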
Learning View-Disentangled Human Pose Representation by Contrastive Cross-View Mutual Information Maximization. doi:10.1109/CVPR46437.2021.01260. Long Zhao, Yuxiao Wang, Jiaping Zhao, Liangzhe Yuan, Jennifer J. Sun, Florian Schroff, Hartwig Adam, Xi Peng, Dimitris Metaxas.
Moreover, we align the consensus representation and the view-specific representations via the structure-guided contrastive learning module, which makes the view-specific representations of different samples with a strong structural relationship similar. The proposed module is a flexible multi-view data ...
The pair-wise loss consists of two parts: a tightness term, which pulls intra-class points closer together, and a contrastive term, which pushes inter-class points apart. The former can be viewed as the generative part of optimizing the mutual information between features and labels, while the latter can be viewed as an estimate of the feature entropy; both parts essentially optimize mutual information. Cross-entropy is a special case of the pair-wise loss and can likewise be decomposed into a tightness part and a contrastive part.
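To make the last point concrete, assuming a linear classifier with class weight vectors w_j applied to a feature f(x), the softmax cross-entropy on an example (x, y) splits into the two terms described above:

```latex
\mathcal{L}_{\mathrm{CE}}(x, y)
  = -\log \frac{\exp\!\big(w_y^{\top} f(x)\big)}{\sum_{j} \exp\!\big(w_j^{\top} f(x)\big)}
  = \underbrace{-\,w_y^{\top} f(x)}_{\text{tightness term}}
  + \underbrace{\log \sum_{j} \exp\!\big(w_j^{\top} f(x)\big)}_{\text{contrastive term}}
```

The first term rewards alignment between the feature and its own class direction (intra-class pull). The log-sum-exp term penalizes similarity to every class weight; combined with the first term, its net effect is to push the feature away from the other classes (inter-class push).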