
Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training

  • Abstract: 1. Context: Graph neural networks (GNNs) have achieved remarkable performance in graph data analysis across many domains, including citation networks, biological networks, and social networks. Graph convolutional networks (GCNs) and their variants have attracted wide attention for their high performance and efficiency. However, recent studies show that these message-passing models can be disrupted by adversaries: when an adversary makes imperceptible modifications to the graph data, the accuracy of the learned model drops sharply. Compared with feature perturbations, structural perturbations are more effective for mounting successful attacks, which in turn limits the application of GNN models in real-world scenarios.
    2. Objective: Current GNNs rely excessively on incomplete data views and therefore lack robustness. Since graph data naturally provides multiple views, such as node features and graph structure, this paper proposes to integrate information from multiple data views so that the views complement one another and the robustness of the model improves.
    3. Method: This paper proposes a calibrated co-training framework for graph data, which trains sub-models on the node feature view and the graph structure view and fuses their knowledge through co-training. Owing to the orthogonality of the views, a sub-model tends to be robust against perturbations targeted at the other sub-model. This allows the sub-models to correct one another, thereby enhancing the robustness of their ensemble.
    4. Results & Findings: Experimental results show that the proposed method greatly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean graph data. Model calibration and class balancing effectively alleviate the overfitting problem of conventional co-training. Moreover, the method also performs well under adaptive attacks.
    5. Conclusions: The proposed adversarial defense method is simple, easy to implement, and highly effective at improving the robustness of various GNN models against adversarial attacks. The complementarity of the two views diversifies the outputs of the sub-models while weakening the transferability of adversarial attacks between them. The evaluation results verify its effectiveness on both clean and perturbed graphs. The paper also demonstrates the general applicability of the method: experiments show that it can be applied to many types of GNNs beyond the classic GCN, which is of great help for deploying GNNs under real-world conditions.


    Abstract: Graph neural networks (GNNs) have achieved significant success in graph representation learning. Nevertheless, recent work indicates that current GNNs are vulnerable to adversarial perturbations, in particular structural perturbations. This, therefore, narrows the application of GNN models in real-world scenarios. Such vulnerability can be attributed to the model’s excessive reliance on incomplete data views (e.g., graph convolutional networks (GCNs) heavily rely on graph structures to make predictions). By integrating information from multiple perspectives, this problem can be effectively addressed; typical views of graphs include the node feature view and the graph structure view. In this paper, we propose C2oG, which combines these two typical views to train sub-models and fuses their knowledge through co-training. Due to the orthogonality of the views, sub-models in the feature view tend to be robust against the perturbations targeted at sub-models in the structure view. C2oG allows sub-models to correct one another mutually and thus enhance the robustness of their ensembles. In our evaluations, C2oG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean datasets.
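The two-view co-training idea above can be sketched in a toy example. Everything below is our own illustration, not the paper's C2oG implementation (which additionally calibrates sub-model confidences and balances classes before exchanging pseudo-labels): a nearest-centroid classifier stands in for the feature-view sub-model, label propagation over the normalized adjacency stands in for the structure-view sub-model, and in each round every sub-model hands its most confident pseudo-label on an unlabeled node to the other view.

```python
import numpy as np

# Toy graph: two chain-shaped components of 4 nodes each, one class per
# component. Both views are informative, so co-training can bootstrap from
# a single labeled node per class.
X = np.array([[1, 0]] * 4 + [[0, 1]] * 4, dtype=float)   # node feature view
A = np.eye(8)
for i, j in [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7)]:
    A[i, j] = A[j, i] = 1.0
A /= A.sum(axis=1, keepdims=True)      # row-normalized adjacency (structure view)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
seed_labels = {0: 0, 4: 1}             # one labeled node per class

def feature_view(lab):
    # Feature-view sub-model: nearest class centroid on raw node features.
    cents = np.stack([X[[i for i, c in lab.items() if c == k]].mean(0)
                      for k in (0, 1)])
    d = ((X[:, None, :] - cents[None]) ** 2).sum(-1)
    p = np.exp(-d)
    return p / p.sum(1, keepdims=True)

def structure_view(lab):
    # Structure-view sub-model: label propagation, clamping labeled nodes.
    p = np.full((8, 2), 0.5)
    for _ in range(10):
        for i, c in lab.items():
            p[i] = np.eye(2)[c]
        p = A @ p
    for i, c in lab.items():
        p[i] = np.eye(2)[c]
    return p

def most_confident(p, known, k=1):
    # Unlabeled nodes this sub-model is most confident about.
    order = np.argsort(-p.max(1))
    return [int(i) for i in order if int(i) not in known][:k]

lab_f, lab_s = dict(seed_labels), dict(seed_labels)
for _ in range(3):                     # co-training rounds
    pf, ps = feature_view(lab_f), structure_view(lab_s)
    for i in most_confident(pf, lab_s):
        lab_s[i] = int(pf[i].argmax())   # feature view teaches structure view
    for i in most_confident(ps, lab_f):
        lab_f[i] = int(ps[i].argmax())   # structure view teaches feature view

# Fuse the two sub-models by averaging their predicted distributions.
ensemble = (feature_view(lab_f) + structure_view(lab_s)) / 2
pred = ensemble.argmax(1)
```

Because the pseudo-labels cross between views, a perturbation that fools one sub-model is corrected by the other; the real method replaces these stand-in classifiers with GNN sub-models and calibrates confidences before the exchange.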

