Citation: Cen KT, Shen HW, Cao Q et al. Identity-preserving adversarial training for robust network embedding. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 39(1): 177−191 Jan. 2024. DOI: 10.1007/s11390-023-2256-4.

Identity-Preserving Adversarial Training for Robust Network Embedding

Abstract: Network embedding, as an approach to learning low-dimensional representations of nodes, has proved extremely useful in many applications, e.g., node classification and link prediction. Unfortunately, existing network embedding models are vulnerable to random or adversarial perturbations, which may degrade the performance of network embedding when applied to downstream tasks. To achieve robust network embedding, researchers introduce adversarial training to regularize the embedding learning process by training on a mixture of adversarial examples and original examples. However, existing methods generate adversarial examples heuristically, failing to guarantee the imperceptibility of the generated adversarial examples and thus limiting the power of adversarial training. In this paper, we propose a novel method, Identity-Preserving Adversarial Training (IPAT), for network embedding, which generates imperceptible adversarial examples with explicit identity-preserving regularization. We formalize such identity-preserving regularization as a multi-class classification problem where each node represents a class, and we encourage each adversarial example to be discriminated as the class of its original node. Extensive experimental results on real-world datasets demonstrate that our proposed IPAT method significantly improves the robustness of network embedding models and the generalization of the learned node representations on various downstream tasks. IPAT is a general approach that can be applied to any existing network embedding model to improve its robustness.
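
The identity-preserving regularization described in the abstract can be illustrated concretely. Below is a minimal PyTorch-style sketch, not the authors' implementation: the random epsilon-ball perturbation, the linear classifier, and hyper-parameters such as num_nodes, embed_dim, and epsilon are illustrative assumptions. It only shows the core idea of treating every node as its own class and training perturbed (adversarial) embeddings to be classified back to their original node.

```python
# Sketch of identity-preserving regularization for node embeddings.
# NOT the paper's implementation; dimensions, the perturbation scheme,
# and the classifier below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_nodes, embed_dim, epsilon = 1000, 128, 0.1   # assumed hyper-parameters

embeddings = nn.Embedding(num_nodes, embed_dim)  # base node embeddings
identity_clf = nn.Linear(embed_dim, num_nodes)   # one class per node

def identity_preserving_loss(node_ids):
    z = embeddings(node_ids)                     # original embeddings
    # Illustrative perturbation: random noise scaled to an epsilon-ball.
    # A gradient-based adversarial perturbation could be substituted here.
    delta = torch.randn_like(z)
    delta = epsilon * delta / delta.norm(dim=-1, keepdim=True)
    z_adv = z + delta                            # perturbed example
    logits = identity_clf(z_adv)                 # predict the source node
    # Multi-class classification: each perturbed example should be
    # discriminated as the class (identity) of its original node.
    return F.cross_entropy(logits, node_ids)

# Usage: add this regularizer to the embedding model's original training loss.
node_ids = torch.randint(0, num_nodes, (64,))
loss = identity_preserving_loss(node_ids)
loss.backward()
```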

     
