Towards a More Stable and General Subgraph Information Bottleneck
Hongzhi Liu (Xi'an Jiaotong University); Kaizhong Zheng (Xi'an Jiaotong University); Shujian Yu (Vrije Universiteit Amsterdam); Badong Chen (Xi'an Jiaotong University)
SPS
Graph Neural Networks (GNNs) have been widely applied to graph-structured data. However, their lack of interpretability impedes practical deployment, especially in high-risk areas such as medical diagnosis. Recently, the Information Bottleneck (IB) principle has been extended to GNNs to identify a compact subgraph that is most informative about the class labels, which significantly improves the interpretability of model decisions. However, existing graph Information Bottleneck models are either unstable during training (due to the difficulty of mutual information estimation) or focus only on a special kind of graph (e.g., brain networks) and therefore generalize poorly to general graph datasets with varying graph sizes. In this work, we extend the recently developed Brain Information Bottleneck (BrainIB) to general graphs by introducing matrix-based Rényi's α-order mutual information to stabilize training, and by designing a novel mask strategy to handle varying graph sizes, so that the new method can also be applied to social networks, molecules, etc. Extensive experiments on different types of graph datasets demonstrate the superior stability and generality of our model.
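For intuition, the matrix-based Rényi's α-order mutual information mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard matrix-based formulation (entropy from the eigenvalues of a trace-normalized Gram matrix, joint entropy from the Hadamard product), not the paper's actual implementation; the RBF kernel width `sigma` and the choice α = 2 are illustrative assumptions.

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    # RBF-kernel Gram matrix over samples, normalized so its trace
    # (and hence the sum of its eigenvalues) equals 1.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / np.trace(K)

def renyi_entropy(A, alpha=2.0):
    # Matrix-based Renyi's alpha-order entropy:
    # S_alpha(A) = 1/(1-alpha) * log2( sum_i lambda_i(A)^alpha )
    lam = np.linalg.eigvalsh(A)
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(lam ** alpha))

def renyi_mutual_information(X, Y, alpha=2.0, sigma=1.0):
    # I_alpha(X; Y) = S_alpha(A) + S_alpha(B) - S_alpha(A o B),
    # where "o" is the Hadamard product, trace-normalized.
    A, B = gram_matrix(X, sigma), gram_matrix(Y, sigma)
    AB = (A * B) / np.trace(A * B)
    return renyi_entropy(A, alpha) + renyi_entropy(B, alpha) - renyi_entropy(AB, alpha)
```

Because the entropy is computed from eigenvalues of a Gram matrix rather than from density estimates, no variational bound or auxiliary critic network is needed, which is what makes this estimator attractive for stable training.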