Published at WWW 2022, this paper proposes a new graph contrastive learning method, Dual Space Graph Contrastive (DSGC) Learning, which performs graph contrastive learning between views generated in different spaces (the hyperbolic space and the Euclidean space). Unsupervised graph representation learning has become a powerful tool for solving real-world problems and has achieved great success in the graph learning field. Graph contrastive learning is an unsupervised graph representation learning method that has attracted growing attention from researchers in recent years...
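The cross-view objective such a method relies on can be illustrated with a minimal InfoNCE-style sketch in PyTorch. This is not the DSGC implementation; it assumes the two views' node embeddings have already been projected into a shared comparison space (for example, by mapping the hyperbolic view back to a tangent space), and the function name `info_nce` and the temperature value are illustrative choices.

```python
import torch
import torch.nn.functional as F

def info_nce(z_view_a, z_view_b, temperature=0.2):
    """Symmetric InfoNCE loss between two views of the same nodes.
    Positive pairs are the same node across views; every other node
    in the batch serves as a negative."""
    z1 = F.normalize(z_view_a, dim=-1)        # (N, d) view A embeddings
    z2 = F.normalize(z_view_b, dim=-1)        # (N, d) view B embeddings
    logits = z1 @ z2.t() / temperature        # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```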
Temporal knowledge graph reasoning (TKGR) has attracted widespread attention due to its ability to handle dynamic temporal features. However, existing methods face three major challenges: (1) the difficulty of capturing long-distance dependencies in ...
Many recent SSL methods provide well-designed, contrastive-learning-based pretext tasks applicable to graphs for graph anomaly detection, the task of detecting anomalies (e.g., anomalous nodes, edges, or sub-graphs) in static graphs. Note that in a static graph, oftentimes...
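One common way such pretext tasks are turned into an anomaly signal is to score each node by how much its embeddings disagree across augmented views. The sketch below is a generic illustration under that assumption, not the method of any particular paper; `anomaly_scores` and its inputs are hypothetical names.

```python
import torch.nn.functional as F

def anomaly_scores(z_view1, z_view2):
    """Score nodes by cross-view disagreement: nodes whose embeddings under
    two graph augmentations have low cosine similarity receive higher scores."""
    z1 = F.normalize(z_view1, dim=-1)     # (N, d) embeddings from augmented view 1
    z2 = F.normalize(z_view2, dim=-1)     # (N, d) embeddings from augmented view 2
    agreement = (z1 * z2).sum(dim=-1)     # per-node cosine similarity in [-1, 1]
    return 1.0 - agreement                # higher value = more anomalous
```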
Based on this, we developed a dual-task contrastive learning framework to enhance the cross-domain generalization and emotion recognition abilities of deep learning models. In our dual-task design, we construct two unique meta-learning tasks from different domains, creating a dual-task structure that...
The recently proposed Contrastive Neural Topic Model (CNTM) tackles topic collapse through document-level contrastive learning. However, limited by its use of the Logistic-Normal prior in the topic space and by contrastive learning only at the document level, it is less capable of disentangling semantically...
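For context, the Logistic-Normal prior mentioned here parameterizes topic proportions by sampling a Gaussian vector and pushing it through a softmax. The snippet below is a minimal sketch of that reparameterized sampling step, not CNTM's actual code; `sample_topic_proportions` is a hypothetical helper name.

```python
import torch
import torch.nn.functional as F

def sample_topic_proportions(mu, logvar):
    """Reparameterized draw from a Logistic-Normal distribution over the topic
    simplex: sample a Gaussian in R^K, then map it onto the simplex via softmax."""
    eps = torch.randn_like(mu)                 # standard normal noise
    z = mu + eps * torch.exp(0.5 * logvar)     # Gaussian sample (per document)
    return F.softmax(z, dim=-1)                # K-dimensional topic proportions
```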
3.3. Multi-scale Actor Contrastive Learning

The actor representation is reweighted and aggregated by dual spatiotemporal paths; however, the two paths are modeled independently. To promote cooperation between these two complementary paths, we design a self-supervised Multi...
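A typical way to couple two such paths is an auxiliary contrastive term that treats the two path features of the same actor as a positive pair. The following is a rough sketch under that assumption and is not the paper's multi-scale formulation; `cross_path_contrast`, the feature shapes, and the temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_path_contrast(f_spatial, f_temporal, tau=0.1):
    """Pull each actor's spatial-path feature toward its own temporal-path
    feature and push it away from other actors' features in the batch."""
    s = F.normalize(f_spatial, dim=-1)        # (A, d) actor features, spatial path
    t = F.normalize(f_temporal, dim=-1)       # (A, d) actor features, temporal path
    sim = s @ t.t() / tau                     # (A, A) cross-path similarities
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(sim, targets)      # diagonal entries are the positives
```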
…Dual Space Graph Contrastive (DSGC) Learning, to conduct graph contrastive learning among views generated in different spaces, including the hyperbolic space and the Euclidean space. Since both spaces have their own advantages for representing graph data in the embedding space, we hope to utilize graph contrastive ...
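Comparing the two spaces requires moving embeddings between the Poincaré ball and its tangent (Euclidean) space, which is what the exponential and logarithmic maps at the origin do. The sketch below uses the standard formulas for a ball of curvature -c and is only an illustration of that mapping step, not DSGC's code.

```python
import torch

def expmap0(v, c=1.0, eps=1e-8):
    """Map a tangent vector at the origin into the Poincare ball of curvature -c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x, c=1.0, eps=1e-8):
    """Map a point in the Poincare ball back to the tangent space at the origin."""
    sqrt_c = c ** 0.5
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    scaled = (sqrt_c * norm).clamp(max=1.0 - 1e-5)   # keep atanh's argument < 1
    return torch.atanh(scaled) * x / (sqrt_c * norm)
```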
The explicit knowledge alignment objective aims to directly optimize the knowledge representation of LLMs through dual-view knowledge graph contrastive learning. The implicit knowledge alignment objective focuses on incorporating textual patterns of knowledge into LLMs through triple completion language modeling...
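Taken together, the two objectives could be combined as a weighted sum of a dual-view contrastive term (the text encoding versus the graph-structure encoding of the same triple) and a token-level language-modeling term for triple completion. The sketch below is a hedged illustration of that combination; the function name, the weighting `alpha`, and the tensor shapes are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def knowledge_alignment_loss(text_emb, graph_emb, lm_logits, lm_targets,
                             temperature=0.05, alpha=0.5):
    """Weighted sum of an explicit dual-view contrastive term (align text and
    graph encodings of the same triple) and an implicit triple-completion
    language-modeling term."""
    t = F.normalize(text_emb, dim=-1)                  # (B, d) textual view
    g = F.normalize(graph_emb, dim=-1)                 # (B, d) graph-structure view
    logits = t @ g.t() / temperature                   # (B, B) cross-view logits
    labels = torch.arange(t.size(0), device=t.device)
    explicit = 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))
    implicit = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                               lm_targets.view(-1))    # token-level LM loss
    return alpha * explicit + (1 - alpha) * implicit
```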