Combining Innovative CVTNet and Regularization Loss for Robust Adversarial Defense
Abstract
Deep neural networks (DNNs) are vulnerable to elaborately crafted, imperceptible adversarial perturbations. As adversarial attack methods continue to evolve, existing defense algorithms can no longer counter them effectively. Meanwhile, numerous studies have shown that vision transformers (ViTs) offer stronger robustness and generalization than convolutional neural networks (CNNs) across various domains. Moreover, because standard denoisers are subject to the error amplification effect, the prediction network cannot correctly classify all reconstructed examples. This paper first proposes a defense network (CVTNet) that combines CNNs and ViTs and is prepended to the prediction network. CVTNet effectively eliminates adversarial perturbations while maintaining high robustness. Furthermore, this paper proposes a regularization loss ($\mathcal{L}_{\mathrm{CPL}}$) that optimizes CVTNet by computing separate losses for the correct prediction set (CPS) and the wrong prediction set (WPS) of the reconstructed examples. Evaluation on several standard benchmark datasets shows that CVTNet achieves stronger robustness than other advanced methods. Compared with state-of-the-art algorithms, the proposed CVTNet defense improves average accuracy on pixel-constrained attack examples generated on the CIFAR-10 dataset by 24.25% and on spatially-constrained attack examples by 14.06%. Moreover, CVTNet shows excellent generalizability in cross-model protection.
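The CPS/WPS split described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's exact formulation: the function name `cpl_loss`, the MSE reconstruction term, the cross-entropy penalty on the WPS, and the weighting factor `alpha` are all hypothetical choices made for the sketch.

```python
import numpy as np

def cpl_loss(recon_logits, recon, clean, labels, alpha=1.0):
    """Hypothetical sketch of a CPS/WPS-split regularization loss.

    recon_logits : (N, C) prediction-network logits on reconstructed examples
    recon, clean : (N, D) reconstructed and clean examples (flattened)
    labels       : (N,) ground-truth class indices
    """
    pred = recon_logits.argmax(axis=1)
    cps = pred == labels      # correct prediction set (CPS)
    wps = ~cps                # wrong prediction set (WPS)
    loss = 0.0
    if cps.any():
        # CPS: only pull reconstructions toward the clean examples
        loss += np.mean((recon[cps] - clean[cps]) ** 2)
    if wps.any():
        # WPS: reconstruction term plus a classification penalty
        loss += np.mean((recon[wps] - clean[wps]) ** 2)
        z = recon_logits[wps]
        z = z - z.max(axis=1, keepdims=True)  # numerically stable log-softmax
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        loss += alpha * (-logp[np.arange(len(z)), labels[wps]]).mean()
    return loss
```

Computing the two subsets separately lets misclassified reconstructions receive an extra corrective signal while correctly classified ones are only regularized toward the clean data.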