

Pre-Train and Learn: Preserving Global Information for Graph Neural Networks

  • Abstract: 1. Context.
    In recent years, graph neural networks (GNNs) have been at the forefront of graph learning, and the related literature is extensive. However, common GNNs can usually sustain a depth of only about two layers, since stacking more layers leads to the over-smoothing problem; as a result, they can only exploit local information within two hops of a node.
    2. Objective.
    This study proposes a framework that can flexibly embed different kinds of GNN kernels, enabling them to acquire and exploit global information and thereby improving their classification performance.
    3. Method.
    First, an unsupervised learning method based on random walks obtains the global structure features and global attribute features of each node. Second, in a parallel architecture, three GNN models extract high-dimensional features from the raw attributes and the pre-trained features, respectively. Finally, the different high-dimensional features are mixed with learned weights for classification (a minimal sketch of the pre-training stage follows this abstract).
    4. Results & Findings.
    Empirical experiments on four datasets show that the proposed framework clearly improves the performance of all methods. In particular, new benchmark results are obtained on the two standard datasets Cora (84.31%) and Pubmed (80.95%).
    5. Conclusions.
    This paper proposes a framework that equips general GNN methods with the ability to process global information, and the empirical results demonstrate its effectiveness. In future work, we will study targeted schemes for improving the exploitation of global information in specific GNN methods.
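The abstract does not spell out the random-walk procedure, but a DeepWalk-style pipeline is a common way to pre-train global structure features of this kind. The sketch below assumes uniform random walks fed to a gensim skip-gram model; the walk count, walk length, and embedding size are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch of the pre-training stage: DeepWalk-style random walks
# feed a skip-gram model to produce a global structure feature per node.
# Hyperparameters below are assumptions, not taken from the paper.
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, num_walks=10, walk_length=40):
    """Generate uniform random walks starting from every node."""
    walks = []
    nodes = list(graph.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(n) for n in walk])
    return walks

graph = nx.karate_club_graph()                   # stand-in for Cora/Pubmed
walks = random_walks(graph)
model = Word2Vec(walks, vector_size=128, window=5, min_count=0, sg=1)
global_structure = {n: model.wv[str(n)] for n in graph.nodes()}
```

The same walk corpus could in principle be paired with node attributes to pre-train the global attribute features, though the paper's exact construction may differ.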


    Abstract: Graph neural networks (GNNs) have shown great power in learning on graphs. However, it is still a challenge for GNNs to model information far away from the source node. The ability to preserve global information can enhance graph representation and hence improve classification precision. In this paper, we propose a new learning framework named G-GNN (Global information for GNN) to address the challenge. First, the global structure and global attribute features of each node are obtained via unsupervised pre-training, and these global features preserve the global information associated with the node. Then, using the pre-trained global features and the raw attributes of the graph, a set of parallel kernel GNNs is used to learn different aspects from these heterogeneous features. Any general GNN can be used as a kernel and easily gains the ability to preserve global information, without having to alter its own algorithm. Extensive experiments have shown that state-of-the-art models, e.g., GCN, GAT, GraphSAGE and APPNP, can achieve improvement with G-GNN on three standard evaluation datasets. In particular, we establish new benchmark precision records on Cora (84.31%) and Pubmed (80.95%) when learning on attributed graphs.
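As a rough illustration of the parallel-kernel idea described above, the following sketch wires three GCN branches, one per feature source (raw attributes, pre-trained global structure features, pre-trained global attribute features), and mixes their logits with learned weights. It assumes PyTorch Geometric's GCNConv as the kernel; the hidden size and the softmax-normalized mixing weights are assumptions, since the abstract does not specify the fusion details.

```python
# Minimal sketch of the parallel-kernel framework, assuming GCN kernels.
# Branch count, hidden size, and the fusion scheme are assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GGNNSketch(torch.nn.Module):
    def __init__(self, raw_dim, struct_dim, attr_dim, hidden, num_classes):
        super().__init__()
        # One kernel GNN per feature source: raw attributes,
        # pre-trained global structure, pre-trained global attributes.
        self.enc = torch.nn.ModuleList(
            GCNConv(d, hidden) for d in (raw_dim, struct_dim, attr_dim))
        self.out = torch.nn.ModuleList(
            GCNConv(hidden, num_classes) for _ in range(3))
        # Learnable mixing weights for the three parallel branches.
        self.alpha = torch.nn.Parameter(torch.ones(3))

    def forward(self, feats, edge_index):
        # feats: list of three node-feature matrices, one per branch.
        logits = []
        for x, conv1, conv2 in zip(feats, self.enc, self.out):
            h = F.relu(conv1(x, edge_index))
            logits.append(conv2(h, edge_index))
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * li for wi, li in zip(w, logits))
```

Because each branch is an off-the-shelf GNN applied to a different feature matrix, any kernel (e.g., GAT, GraphSAGE, APPNP) could be dropped in without changing its own algorithm, which is the stated point of the framework.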

