Supervised Learning, Unsupervised Learning and Self-Supervised Learning. Supervised learning is the paradigm that trains machine learning models with well-defined manual labels. In contrast, unsupervised learning uses no manual labels at all. As a subset of unsupervised learning, self-supervised learning denotes the paradigm in which the supervision signal is generated from the data itself. In self-supervised...
To address this problem, self-supervised learning (SSL) is emerging as a new paradigm that extracts semantically rich knowledge through carefully designed pretext tasks, without relying on manually annotated data. In this survey, we extend self-supervised learning, which first emerged in computer vision and natural language processing, and give existing graph self-supervised learning (Graph SSL) techniques a timely...
Cluster-aware graph neural networks for unsupervised graph representation learning. arXiv preprint. Self-supervised graph transformer on large-scale molecular data. NeurIPS, 2020.
Step-4: Channel-level contrastive objective. The shared MLP outputs two representation matrices H, H^a ∈ R^{N×d} for the graph G and its sampled augmented view G^a, which can be viewed as two d-channel signals over the two graph views. Unlike previous work that focuses on the computationally expensive node-level contrast, we provide an alternative that builds contrastive pairs at the channel level. Specifically, as shown in Figure 1, ...
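The channel-level contrast described above can be sketched as follows. This is a minimal illustration only: the cosine similarity between channels, the InfoNCE-style objective, and the temperature `tau` are assumptions for the sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

def channel_contrastive_loss(H, Ha, tau=0.5):
    """Channel-level contrastive loss sketch.

    H, Ha: (N, d) representation matrices of the two graph views.
    Each of the d columns (channels) of H is contrasted against the
    d channels of Ha: the same channel index forms the positive pair,
    all other channels act as negatives (InfoNCE-style).
    """
    # L2-normalize each channel (column) so dot products become cosines
    Hn = H / (np.linalg.norm(H, axis=0, keepdims=True) + 1e-8)
    Han = Ha / (np.linalg.norm(Ha, axis=0, keepdims=True) + 1e-8)
    # (d, d) channel-to-channel similarity: O(d^2 N), cheap when d << N,
    # versus the O(N^2 d) cost of node-level contrast
    sim = Hn.T @ Han / tau
    # numerically stable log-softmax over candidate channels;
    # positives sit on the diagonal
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When the two views encode the same channels consistently, the diagonal dominates and the loss is low; for unrelated views it approaches log(d).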
3. Node representations are easily affected by noisy interactions. In this paper, the authors improve the accuracy and robustness of GCN-based recommendation by introducing self-supervised learning on the user-item graph, a scheme they call Self-supervised Graph Learning (SGL), and apply it to the LightGCN model. SGL is model-agnostic and achieves these goals by supplementing the supervised task with auxiliary self-supervised tasks.
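The auxiliary task builds two views of the interaction graph with stochastic augmentation operators and contrasts the resulting node representations; edge dropout is one such operator in SGL. Below is a minimal numpy sketch of edge dropout on an edge list (the drop probability `p=0.1` is a hypothetical default, not a value from the paper):

```python
import numpy as np

def edge_dropout(edges, p=0.1, rng=None):
    """Edge-dropout augmentation for a user-item interaction graph.

    edges: (E, 2) array of (user, item) interaction edges.
    Returns a subgraph in which each edge is kept independently
    with probability 1 - p. Two independent draws yield the two
    views that the auxiliary contrastive task aligns per node.
    """
    rng = np.random.default_rng(rng)
    keep = rng.random(len(edges)) >= p
    return edges[keep]
```

Training then optimizes the supervised recommendation loss plus a weighted contrastive loss over the two views, which is what makes the scheme model-agnostic: any GCN encoder (here LightGCN) can be plugged in.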
Self-supervised learning (SSL) seeks to create and utilize specific pretext tasks on unlabeled data to aid in alleviating this fundamental limitation of deep learning models. Although initially applied in the image and text domains, recent interest has been in leveraging SSL in the graph domain to...
Liang Wang. 2020. Deep graph contrastive representation learning. arXiv preprint arXiv:2006.04131 (2020). [9] Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. 2021. Self-supervised learning: Generative or contrastive. TKDE (2021).
Paper title: Self-Supervised Learning of Graph Neural Networks: A Unified Review. Paper link: https://arxiv.org/abs/2102.10757. 1 Introduction. SSL pretext tasks can be divided into two categories: contrastive models and predictive models. The main difference is that contrastive models require data-data pairs for training, whereas predictive models require data-label pairs, where the labels are self-generated from the data, as shown in Figure 1...
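The data-data versus data-label distinction can be made concrete with a small sketch. The specific choices here, random feature masking as the augmentation and node degree as the pseudo-label, are illustrative examples of each category, not the only instances covered by the review:

```python
import numpy as np

def make_contrastive_pair(x, rng):
    """Data-data pair: two stochastic augmentations of the same input
    (here: independent random feature masking), to be pulled together
    by a contrastive model."""
    mask1 = rng.random(x.shape) > 0.2
    mask2 = rng.random(x.shape) > 0.2
    return x * mask1, x * mask2

def make_predictive_pair(adj):
    """Data-label pair: the label is generated from the data itself,
    e.g. predict each node's degree from the adjacency matrix."""
    degrees = adj.sum(axis=1)
    return adj, degrees
```

Contrastive models thus never see an explicit target value, only agreement between views, while predictive models train against a self-generated target with an ordinary supervised loss.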
GraphLoG is pre-trained on a large corpus of unlabeled graphs and then fine-tuned on downstream tasks. Experiments on chemistry and biology benchmark datasets demonstrate the effectiveness of the method: it outperforms other self-supervised graph methods on 6 tasks in the chemistry domain, with an average ROC-AUC gain of 2.1%. The framework comprises local instance-structure learning and global semantic-structure learning. The local part preserves local similarity under the data mapping, while the global part uses a hierarchical model...
In addition, self-supervised learning is utilized to model cooperative signals between the two graph learning modules, complementing the representation learning capabilities of each module. We name our method self-supervised dual graph learning (SDGL). Experimental results on the seven real-world ...