1. Deep Representation Learning. For deep representation learning, the following three approaches are introduced. Schema of deep multi-view representation learning models: SplitAE — W. Wang, R. Arora, K. Livescu, and J. Bilmes, "On deep multi-view representation learning," in International Conference on Machine Learning. PMLR, 2015, ...
【ICML 2020】Contrastive Multi-View Representation Learning on Graphs — a brief paper review. This is a graph contrastive learning paper; although it dates from 2020, running it in practice shows it is more stable and performs better than many current models. 【Background and motivation】 In computer vision, multi-view representation learning methods have achieved strong results, but in the graph domain, ...
Hassani K, Khasahmadi A H. Contrastive multi-view representation learning on graphs[C]//International Conference on Machine Learning. PMLR, 2020: 4116-4126. Abstract overview: By contrasting structural views of graphs, the paper introduces a self-supervised method for learning node- and graph-level representations. Unlike visual representation learning, for contrastive learning on graph structures, increasing the number of contrasted views ...
Paper title: Contrastive Multi-View Representation Learning on Graphs. Authors: Kaveh Hassani, Amir Hosein Khasahmadi. Venue: ICML 2020. Paper: download. Code: download. 1 Introduction: node-level local-global contrastive learning. 2 Method. The framework is shown below and consists of the following components: data augmentation on graph structure only, with no node-level augmentation; two dedicated GNN ...
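The structure-only augmentation mentioned above is, in the paper, a graph-diffusion view (e.g. Personalized PageRank). Below is a minimal NumPy sketch of closed-form PPR diffusion under illustrative assumptions: a small dense adjacency matrix and a hypothetical teleport probability `alpha` (not tuned to the paper's settings).

```python
import numpy as np

def ppr_diffusion(adj, alpha=0.2):
    """Closed-form Personalized-PageRank diffusion: one way to build a
    second structural view of a graph for contrastive learning.
    `alpha` (teleport probability) is an illustrative value."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    # S = alpha * (I - (1 - alpha) * A_norm)^(-1)
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * a_norm)

# tiny 3-node path graph as a toy example
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
S = ppr_diffusion(adj)   # dense diffusion matrix: the second view
```

Unlike the sparse adjacency view, the diffusion matrix is dense, so every node pair gets a nonzero affinity; the two GNN encoders then see genuinely different structural signals for the same graph.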
Self-supervised learning — Contrastive Multi-View Representation Learning on Graphs. Tags: self-supervised, graph neural networks. Motivation: most GNNs rely on task-specific labels to learn rich representations; yet, compared with more common modalities such as video, images, text, and audio, labeling graphs is challenging. Recent work on maximizing ...
2.3 Learning by mutual information maximization. 3. Method. Our approach learns node and graph representations by maximizing MI between node representations of one view and the graph representation of another view, and vice versa; this performs better than contrasting global or multi-scale encodings on both node and graph classification tasks.
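The cross-view objective above can be sketched as a discriminator that scores node embeddings of one view against the mean-pooled graph embedding of the other view (and vice versa), trained with a JSD-style binary cross-entropy. All names below (`h1`, `h2`, the corrupted negatives) are illustrative, and the plain dot-product discriminator stands in for the paper's learned one.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_view_mi_loss(h1, h2, h1_neg, h2_neg):
    """Sketch of the cross-view MI objective: positives pair the nodes
    of one view with the graph embedding of the other view; negatives
    come from a corrupted graph (h1_neg, h2_neg)."""
    g1, g2 = h1.mean(axis=0), h2.mean(axis=0)   # mean-pool readout per view
    pos = np.concatenate([h1 @ g2, h2 @ g1])    # cross-view positive scores
    neg = np.concatenate([h1_neg @ g2, h2_neg @ g1])
    eps = 1e-9                                  # numerical safety for log
    return -(np.log(sigmoid(pos) + eps).mean()
             + np.log(1.0 - sigmoid(neg) + eps).mean())

rng = np.random.default_rng(0)
h1 = rng.normal(size=(5, 8))                    # view-1 node embeddings
h2 = h1 + 0.1 * rng.normal(size=(5, 8))         # view-2, close to view 1
h1_neg = rng.normal(size=(5, 8))                # corrupted negatives
h2_neg = rng.normal(size=(5, 8))
loss = cross_view_mi_loss(h1, h2, h1_neg, h2_neg)
```

Minimizing this loss pushes node embeddings to agree with the other view's graph summary while disagreeing with summaries of corrupted graphs, which is what makes the contrast cross-view rather than global-only.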