Information Bottleneck Principle. Suppose the observed real data is $X$ (this corresponds to the clear version in LaGraph), the perturbed data is $\tilde{X}$, and the dense vector representation is $Z$. We then obtain the IB objective [Tishby et al., 1999]: $\min\; I(Z; \tilde{X}) - \beta\, I(Z; X)$. In other words, the mutual information between the dense (compressed) latent representation and the perturbed data should be reduced, while the mutual information between the real data and the representation should be increased.
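A minimal sketch of how such an objective is often made tractable, assuming a variational encoder and two standard surrogates: an InfoNCE-style lower bound standing in for $I(Z;X)$ and a KL term to a standard normal prior standing in for $I(Z;\tilde{X})$. All function and variable names below are illustrative, not taken from LaGraph.

```python
import torch
import torch.nn.functional as F

def info_nce(z, x_emb, temperature=0.2):
    """InfoNCE-style surrogate for I(Z; X): matched rows are positive pairs."""
    z = F.normalize(z, dim=-1)
    x_emb = F.normalize(x_emb, dim=-1)
    logits = z @ x_emb.t() / temperature          # [N, N] similarity matrix
    labels = torch.arange(z.size(0))              # i-th z matches i-th x embedding
    return F.cross_entropy(logits, labels)        # minimizing this maximizes the bound

def kl_to_standard_normal(mu, logvar):
    """KL(q(z | x_tilde) || N(0, I)), a common stand-in for I(Z; X_tilde)."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()

def ib_ssl_loss(mu, logvar, x_emb, beta=1e-3):
    """Surrogate for min I(Z; X_tilde) - beta * I(Z; X), with Z reparameterized."""
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    compression = kl_to_standard_normal(mu, logvar)   # surrogate for I(Z; X_tilde)
    relevance = info_nce(z, x_emb)                    # surrogate for -I(Z; X)
    return compression + beta * relevance

# toy usage with random tensors standing in for encoder outputs
mu, logvar = torch.randn(8, 16), torch.zeros(8, 16)
x_emb = torch.randn(8, 16)
print(ib_ssl_loss(mu, logvar, x_emb))
```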
Edge dropping: the operation of deleting some edges from the graph. Other GDA (graph data augmentation) operations can also be combined with the AD-GCL principle. In our experiments, the edge-dropping augmentation optimized by AD-GCL already performs better than all pre-defined random GDAs, and even better than GDAs selected with additional evaluation. Parameterizing $T_\Phi(\cdot)$: every sample $t(G) \sim T_\Phi(G)$ is a graph that shares with $G$ ...
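As an illustration of what parameterizing $T_\Phi(\cdot)$ can look like for edge dropping, here is a hedged sketch in which an MLP scores each edge and a relaxed Bernoulli (binary concrete) sample decides whether to keep it; the class and argument names are hypothetical and the exact parameterization in AD-GCL may differ.

```python
import torch
import torch.nn as nn

class LearnableEdgeDrop(nn.Module):
    """Sketch of a parameterized edge-dropping augmenter T_Phi(.): an MLP scores
    each edge from its endpoint embeddings, and a relaxed Bernoulli sample gives a
    differentiable keep weight, so Phi can be trained (e.g. adversarially)."""
    def __init__(self, dim, temperature=1.0):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.temperature = temperature

    def forward(self, node_emb, edge_index):
        # node_emb: [num_nodes, dim], edge_index: [2, num_edges]
        src, dst = edge_index
        logits = self.scorer(torch.cat([node_emb[src], node_emb[dst]], dim=-1)).squeeze(-1)
        # binary-concrete relaxation: logistic noise keeps the sample differentiable
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log(1 - u)
        keep_prob = torch.sigmoid((logits + noise) / self.temperature)
        return keep_prob  # per-edge weights in (0, 1); values near 0 mean "dropped"

# toy usage: 5 nodes, 4 edges, random embeddings
emb = torch.randn(5, 8)
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(LearnableEdgeDrop(dim=8)(emb, edges))
```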
An obvious choice, then, is the Information Bottleneck Principle. By injecting an information bottleneck, GSAT can naturally control the amount of information in the graph and thereby achieve the desired effect. Concretely, the graph information bottleneck loss can be written as
$$\min_{\theta,\phi}\; -I(f_\theta(G_S), Y) + \beta\, I(G_S; G) \quad \text{s.t. } G_S \sim g_\phi(G).$$
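In practice both mutual-information terms are intractable and are replaced by surrogates. The sketch below uses a cross-entropy loss for the prediction term and a per-edge KL divergence to a Bernoulli($r$) prior for the compression term, which is one common instantiation; the exact bounds used by GSAT are given in the original paper, and the names here are illustrative.

```python
import torch
import torch.nn.functional as F

def gib_surrogate_loss(logits, labels, edge_keep_prob, r=0.7, beta=1.0):
    """Tractable stand-ins for the two GIB terms:
    - cross-entropy approximates -I(f_theta(G_S), Y)   (prediction term)
    - mean KL(Bern(p) || Bern(r)) per edge approximates I(G_S; G)  (compression term)
    """
    pred_term = F.cross_entropy(logits, labels)
    p = edge_keep_prob.clamp(1e-6, 1 - 1e-6)
    kl = p * torch.log(p / r) + (1 - p) * torch.log((1 - p) / (1 - r))
    return pred_term + beta * kl.mean()

# toy usage: 4 graphs, 3 classes, 10 stochastic edge-keep probabilities
logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
edge_keep_prob = torch.rand(10)
print(gib_surrogate_loss(logits, labels, edge_keep_prob))
```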
In this framework, we maximize the mutual information between local and global representations of a perturbed graph and its adversarial augmentations, where the adversarial graphs can be generated in either a supervised or an unsupervised manner. Based on the Information Bottleneck Principle, we ...
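A rough sketch of local-global mutual-information maximization with a bilinear critic (in the spirit of Deep Graph Infomax): matched node/graph pairs are scored as positives and corrupted pairs as negatives. The module and tensor names are assumptions for illustration, not the framework's actual components.

```python
import torch
import torch.nn as nn

class LocalGlobalMI(nn.Module):
    """Bilinear critic scoring agreement between node-level (local) embeddings of
    one view and the graph-level (global) summary of another view; training it as a
    binary classifier over true vs. corrupted pairs maximizes an MI lower bound."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, local_pos, local_neg, global_summary):
        # local_*: [num_nodes, dim], global_summary: [dim]
        pos = local_pos @ self.W @ global_summary      # scores for true pairs
        neg = local_neg @ self.W @ global_summary      # scores for corrupted pairs
        labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
        return nn.functional.binary_cross_entropy_with_logits(torch.cat([pos, neg]), labels)

# toy usage: embeddings of a perturbed view and a corrupted (negative) view
h_pos, h_neg = torch.randn(6, 16), torch.randn(6, 16)
summary = torch.sigmoid(h_pos.mean(dim=0))  # mean-pooling readout as the graph summary
print(LocalGlobalMI(16)(h_pos, h_neg, summary))
```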
... understanding of the new objective, which can be equivalently seen as an instantiation of the Information Bottleneck Principle under the self-supervised setting (H. Zhang, Q. Wu, J. Yan, et al., arXiv e-prints, 2021).
This noise and interference can affect the quality of graph representations during information aggregation. In this paper, we propose NIB-HGSL, a hierarchical graph structure learning method based on the nonlinear information bottleneck principle. NIB-HGSL aims to learn ...
... information of the input data. Concretely, the GIB principle regularizes the representation of the node features as well as the graph structure, thereby increasing the robustness of GNNs. For more information, see our paper Graph Information Bottleneck (Wu et al., 2020) and our project website ...
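For reference, the GIB objective described above can be written in the same form as the classic IB, with the graph data $\mathcal{D} = (A, X)$ playing the role of the input to be compressed; the notation here is reconstructed from the description above and may differ slightly from the paper:
$$\min_{\mathbb{P}(Z \mid \mathcal{D})}\; -I(Z; Y) + \beta\, I(Z; \mathcal{D}), \qquad \mathcal{D} = (A, X).$$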
In recent years, graph neural networks (GNNs) have become extremely popular due to their powerful expressive capabilities and widespread availability, and they have been successful in various real-world applications. GNNs work by learning node representations ...
... and the information bottleneck principle. The experimental results demonstrate that iGCL outperforms all baselines on five node classification benchmark datasets. iGCL also shows superior performance under different label ratios and is capable of resisting graph attacks, which indicates that iGCL has excellent generalization ...