To this end, we propose an information-theoretic guiding principle, the Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse. Unlike the basic graph information bottleneck GIB [4,5], RGIB further decouples and balances the mutual dependence among the graph topology, the graph labels, and the graph representation, building a new learning objective for representations that are robust to bilateral noise. In addition, we explore two instantiations, R...
Graph Information Bottleneck for Subgraph Recognition (Institute of Automation, Chinese Academy of Sciences, 2020). Focus of the paper: it addresses several key problems in graph learning, such as finding interpretable subgraphs, graph denoising, and graph compression, all of which can be cast as recognizing a subgraph of the original graph. This subgraph should be as informative as possible while containing little redundant or noisy structure. This problem setting aligns with the information bottleneck...
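A compact way to state this objective (a sketch in standard IB notation; the symbols are illustrative rather than taken from the paper, with G the input graph, G_sub the recognized subgraph, Y the prediction target, and \beta a trade-off weight):

\max_{G_{\mathrm{sub}} \subseteq G} \; I(G_{\mathrm{sub}}; Y) - \beta\, I(G_{\mathrm{sub}}; G)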
Summary: Graph neural networks (GNNs) are representation learning methods that fuse network structure with node feature information, and they are vulnerable to adversarial attacks. This paper proposes an information-theoretic principle, the Graph Information Bottleneck (GIB), which optimizes the trade-off between the expressiveness and the robustness of graph data representations. GIB inherits the idea of the general Information Bottleneck (IB), maximizing the mutual information between the representation and the target...
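The inherited IB idea can be sketched as follows (a schematic form in standard notation, not a quotation of the paper's equations; Z is the learned representation, Y the prediction target, D the input graph data, and \beta a trade-off weight):

\max_{Z} \; I(Z; Y) - \beta\, I(Z; D)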
... graph structure with richer message-passing characterization and better information transport interpretability. From both graph geometry and information theory perspectives, we propose the novel Discrete Curvature Graph Information Bottleneck (CurvGIB) framework to optimize the information transport structure and...
Leveraging the Information Bottleneck (IB) principle, we first propose that the expected optimal representations should satisfy the Minimal-Sufficient-Consensual (MSC) Condition. To compress redundant information and conserve meritorious information in the latent representation, DGIB iteratively directs and refines ...
Graph Information Bottleneck (GIB) for learning minimal sufficient structural and feature information using GNNs - snap-stanford/GIB
This paper extends the information bottleneck principle to graph data and, targeting the hierarchical nature of graph structure, proposes modelname (hierarchical graph structure learning with nonlinear information bottleneck), a hierarchical graph structure learning method guided by the nonlinear information bottleneck [20]. For the different levels of the graph structure, and under the guidance of the information bottleneck objective, modelname employs irrelevant-feature masking and structure learning to ...
This is the code for "Contrastive Graph Structure Learning via Information Bottleneck for Recommendation", which has been accepted by NeurIPS 2022.

Requirements
To install requirements:
conda env create -f environment.yaml

Data Process
To prepare the data for model training:
python...
Variational Information Bottleneck for Effective Low-Resource Fine-Tuning. While large-scale pretrained language models have obtained impressive results when fine-tuned on a wide variety of tasks, they still often suffer from overfitting in low-resource scenarios. Since such models are general-purpose fea...
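For context, the variational IB training loss that this line of work builds on can be sketched as follows (assuming a stochastic encoder p_\theta(z|x), a task head q_\phi(y|z), a prior r(z), and a trade-off weight \beta; the notation is illustrative, not quoted from the paper):

\mathcal{L} = \mathbb{E}_{z \sim p_\theta(z \mid x)}\big[-\log q_\phi(y \mid z)\big] + \beta\, \mathrm{KL}\big(p_\theta(z \mid x)\,\|\,r(z)\big)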
Hardware-wise, the stochasticity of resistive switching is leveraged to produce low-cost and scalable random resistive memory arrays that physically implement the weights of an ESGNN, featuring in-memory computing with large parallelism and high efficiency that overcomes the von Neumann bottleneck and ...