Xu-Gang Wu, Hui-Jun Wu, Xu Zhou, Xiang Zhao, Kai Lu. Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training[J]. Journal of Computer Science and Technology, 2022, 37(5): 1161-1175. DOI: 10.1007/s11390-022-2129-2

Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training

Graph neural networks (GNNs) have achieved significant success in graph representation learning. Nevertheless, recent work indicates that current GNNs are vulnerable to adversarial perturbations, in particular structural perturbations. This vulnerability limits the application of GNN models in real-world scenarios, and it can be attributed to a model's excessive reliance on incomplete data views (e.g., graph convolutional networks (GCNs) rely heavily on the graph structure to make predictions). The problem can be effectively addressed by integrating information from multiple perspectives; typical views of a graph include the node feature view and the graph structure view. In this paper, we propose C2oG, which combines these two typical views to train sub-models and fuses their knowledge through co-training. Due to the orthogonality of the views, sub-models in the feature view tend to be robust against perturbations targeted at sub-models in the structure view. C2oG allows the sub-models to correct one another and thus enhances the robustness of their ensemble. In our evaluations, C2oG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean datasets.
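To make the two-view idea concrete, the sketch below illustrates the general co-training scheme the abstract describes: one sub-model that sees only node features, one that propagates over the graph structure, each sub-model trained with confident pseudo-labels produced by the other view, and an ensemble that averages the two predictions. This is a minimal, hypothetical illustration in PyTorch, not the paper's exact C2oG procedure; the class names (FeatureMLP, StructureGCN), the confidence threshold tau, and the pseudo-labeling rule are assumptions for illustration only.

```python
# Hypothetical sketch of two-view co-training on a graph (not the paper's exact C2oG).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureMLP(nn.Module):
    """Sub-model that uses only node features (feature view)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, n_classes))

    def forward(self, x, adj=None):
        return self.net(x)


class StructureGCN(nn.Module):
    """Sub-model that propagates features over a normalized adjacency matrix (structure view)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)


def co_train_step(models, optims, x, adj, y, train_mask, unlabeled_mask, tau=0.9):
    """One co-training round: each sub-model is trained on the labeled nodes plus
    high-confidence pseudo-labels produced by the *other* view."""
    with torch.no_grad():
        pseudo = []
        for m in models:
            probs = F.softmax(m(x, adj), dim=1)
            conf, labels = probs.max(dim=1)
            keep = unlabeled_mask & (conf > tau)   # keep only confident predictions
            pseudo.append((keep, labels))
    for i, (model, opt) in enumerate(zip(models, optims)):
        keep, labels = pseudo[1 - i]               # pseudo-labels from the other view
        opt.zero_grad()
        logits = model(x, adj)
        loss = F.cross_entropy(logits[train_mask], y[train_mask])
        if keep.any():
            loss = loss + F.cross_entropy(logits[keep], labels[keep])
        loss.backward()
        opt.step()


def ensemble_predict(models, x, adj):
    """Fuse the two views by averaging their softmax outputs."""
    probs = [F.softmax(m(x, adj), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```

Because the feature-view sub-model never reads the adjacency matrix, structural perturbations that mislead the structure-view sub-model leave its pseudo-labels largely intact, which is the intuition behind the mutual-correction effect described in the abstract.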
