Paper title: Self-supervised Learning on Graphs: Contrastive, Generative, or Predictive
Authors: Lirong Wu, Haitao Lin, Cheng Tan, Zhangyang Gao, and Stan Z. Li
Source: 2022, arXiv
Link: download

1 Introduction
The progress of deep learning on graphs stems from its ability to capture both graph structure and node/edge features.
Contrastive Self-supervised Learning (CSSL) follows a framework very similar to GraphCL, differing only in how the data augmentation is done. Beyond node deletion, it treats node insertion as an important augmentation strategy. Concretely, it randomly selects a strongly connected subgraph S, removes all edges within S, adds a new node v_i, and connects v_i to every node in S...
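The node-insertion augmentation above can be sketched in a few lines. This is an illustrative helper on a plain adjacency-dict graph, not the paper's actual implementation; the BFS-based choice of S and the function name are assumptions for the example (the paper selects a strongly connected subgraph, which for an undirected graph reduces to a connected one):

```python
import random

def node_insertion_augment(adj, subgraph_size=3, seed=None):
    """Sketch of node-insertion augmentation: pick a connected node set S,
    drop all edges inside S, then connect a new node v to every node in S.
    `adj` maps node -> set of neighbours (undirected graph).
    Hypothetical helper, not the paper's exact implementation."""
    rng = random.Random(seed)
    adj = {u: set(nbrs) for u, nbrs in adj.items()}  # work on a copy
    # Grow a connected node set S by BFS from a random seed node.
    start = rng.choice(sorted(adj))
    S, frontier = {start}, [start]
    while len(S) < subgraph_size and frontier:
        u = frontier.pop(0)
        for nbr in sorted(adj[u]):
            if nbr not in S and len(S) < subgraph_size:
                S.add(nbr)
                frontier.append(nbr)
    # Remove every edge whose two endpoints both lie in S.
    for u in S:
        adj[u] -= (S - {u})
    # Add the new node v_new with an edge to each node in S.
    v_new = max(adj) + 1
    adj[v_new] = set(S)
    for u in S:
        adj[u].add(v_new)
    return adj, v_new

# Example: a 4-cycle 0-1-2-3-0.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
aug, v = node_insertion_augment(g, subgraph_size=3, seed=0)
```

After the call, the three nodes of S are mutually disconnected but all attached to the newly inserted node `v`.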
Introduction: This paper introduces the masked graph autoencoder (MGAE) framework for learning on graph data. Masking a high ratio of the input graph structure (70%) benefits downstream applications. Using a GNN as the encoder to perform edge reconstruction on the partially masked graph, together with a tailored cross-correlation decoder, works well. Masked Graph Autoencoder: MGAE has four components: network masking, GNN encoder, cross-c...
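The structure-masking step can be illustrated with a minimal sketch: hide a high ratio (70% here) of the edges, feed only the visible edges to the encoder, and use the masked edges as reconstruction targets. The function name and exact split logic are assumptions for illustration; the paper's masking may differ in detail:

```python
import random

def mask_edges(edges, mask_ratio=0.7, seed=None):
    """MGAE-style structure masking sketch: hide `mask_ratio` of the edges.
    The GNN encoder would see only the visible edges, and the decoder is
    trained to reconstruct the masked ones. Illustrative only."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n_masked = int(round(mask_ratio * len(edges)))
    # Return (visible edges for the encoder, masked edges as targets).
    return edges[n_masked:], edges[:n_masked]

visible, masked = mask_edges([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], seed=0)
```

With 5 edges and a 0.7 ratio, 4 edges are masked and 1 remains visible to the encoder.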
Graph Self-Supervised Learning: A Survey
Authors: Philip S. Yu et al.
Main contributions: categorizing graph self-supervised learning; summarizing existing graph self-supervised learning work; offering an outlook on future directions for graph self-supervised learning. Compared with existing surveys of graph self-supervised learning, their taxonomy is more systematic and fine-grained. Introduction ...
Self-supervised learning / Neighborhood aggregation: Neighborhood aggregation is a key operation in most graph-neural-network-based embedding solutions. Each type of aggregator typically has an application domain where it performs best. The single aggregator type adopted by most existing embedding ...
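The aggregation operation mentioned above can be made concrete with a minimal mean aggregator, one of the common aggregator types (alongside sum, max, and attention-based variants). This is a bare sketch on plain dicts, not any particular library's API:

```python
def mean_aggregate(features, adj):
    """Minimal neighbourhood-aggregation sketch: each node's new
    representation is the element-wise mean of its neighbours' feature
    vectors. `features` maps node -> list of floats; `adj` maps
    node -> set of neighbour ids. Illustrative only."""
    out = {}
    for u, nbrs in adj.items():
        if not nbrs:
            out[u] = list(features[u])  # isolated node keeps its own features
            continue
        dim = len(features[u])
        out[u] = [sum(features[n][d] for n in nbrs) / len(nbrs)
                  for d in range(dim)]
    return out

feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [2.0, 2.0]}
adj = {0: {1, 2}, 1: {0}, 2: {0}}
h = mean_aggregate(feats, adj)  # h[0] == [1.0, 1.5]
```

Swapping the mean for a sum, max, or learned attention weighting is exactly the design choice the passage refers to: each aggregator suits a different domain.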
Paper title: Self-Supervised Learning of Graph Neural Networks: A Unified Review
Paper link: https://arxiv.org/abs/2102.10757
1 Introduction: SSL pretext tasks can be divided into two categories: contrastive models and predictive models. The main difference between the two is that contrastive models require data-data pairs for training, whereas predictive models require data-label pairs, where the labels are generated from the data itself, as in Figure 1...
This has led to the development of self-supervised learning (SSL), which aims to alleviate this limitation by creating domain-specific pretext tasks on unlabeled data. Simultaneously, there is increasing interest in generalizing deep learning to the graph domain in the form of graph neural networks...
graph structure in different manners. We term this new learning paradigm Self-supervised Graph Learning (SGL), implementing it on the state-of-the-art model LightGCN. Through theoretical analyses, we find that SGL is able to automatically mine hard negatives. Empirical studies on three ...
Self-supervised representation learning leverages the input data itself as supervision and benefits almost all types of downstream tasks. In this survey, we review new self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning. We ...