Yongduo Sui, Xiang Wang, Tianlong Chen, Meng Wang, Xiangnan He, Tat-Seng Chua. Inductive Lottery Ticket Learning for Graph Neural Networks[J]. Journal of Computer Science and Technology. DOI: 10.1007/s11390-023-2583-5

Inductive Lottery Ticket Learning for Graph Neural Networks

Graph neural networks (GNNs) have gained increasing popularity, but they usually suffer from unaffordable computation in real-world large-scale applications. Hence, pruning GNNs is greatly needed but largely unexplored. A recent work studies lottery ticket learning for GNNs, aiming to find a subset of the model parameters and graph structure that best maintains the GNN performance. However, it is tailored to the transductive setting and fails to generalize to unseen graphs, which are common in inductive tasks such as graph classification. In this work, we propose a simple and effective learning paradigm, Inductive Co-Pruning of GNNs (ICPG), to endow graph lottery tickets with inductive pruning capacity. To prune the input graphs, we design a predictive model that generates an importance score for each edge based on the input; to prune the model parameters, we view the magnitude of the weights as their importance scores. We then design an iterative co-pruning strategy that trims the graph edges and GNN weights based on their importance scores. Although strikingly simple, ICPG surpasses the existing pruning method and is universally applicable in both inductive and transductive learning settings. On ten graph-classification and two node-classification benchmarks, ICPG matches the original performance with 14.26%-43.12% sparsity for graphs and 48.80%-91.41% sparsity for the model.
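The co-pruning idea summarized above can be illustrated with a short sketch. The following PyTorch code is a minimal illustration under stated assumptions, not the authors' implementation: the names (`EdgeScorer`, `co_prune`), the edge-scoring MLP, and the pruning ratios are hypothetical, and the sketch only shows one round of scoring edges with a learned predictor and masking the smallest-magnitude GNN weights.

```python
# Minimal sketch of one co-pruning round: a learned scorer ranks edges,
# and weight magnitude ranks model parameters. All names are illustrative.
import torch
import torch.nn as nn


class EdgeScorer(nn.Module):
    """Predicts an importance score for each edge from its endpoint features."""
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x, edge_index):
        # Concatenate source and target node features for every edge.
        src, dst = edge_index
        edge_feat = torch.cat([x[src], x[dst]], dim=-1)
        return self.mlp(edge_feat).squeeze(-1)  # one score per edge


def co_prune(gnn, scorer, x, edge_index, graph_ratio=0.1, weight_ratio=0.2):
    """One co-pruning step: drop the lowest-scored edges and mask the
    smallest-magnitude GNN weights."""
    # Graph pruning: keep the edges with the highest predicted scores.
    scores = scorer(x, edge_index)
    num_keep = int(edge_index.size(1) * (1 - graph_ratio))
    keep = scores.topk(num_keep).indices
    pruned_edge_index = edge_index[:, keep]

    # Weight pruning: build binary masks that zero out small-magnitude weights.
    masks = {}
    for name, param in gnn.named_parameters():
        flat = param.detach().abs().flatten()
        k = int(flat.numel() * weight_ratio)
        if k > 0:
            threshold = flat.kthvalue(k).values
            masks[name] = (param.detach().abs() > threshold).float()
        else:
            masks[name] = torch.ones_like(param)
    return pruned_edge_index, masks
```

In an iterative setting, this step would be repeated: the GNN and scorer are retrained on the pruned graph with the weight masks applied, and the ratios compound over rounds until the target graph and model sparsity levels are reached.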