FSD-GAN: Generative Adversarial Training for Face Swap Detection via Latent Noise Fingerprint
Abstract
Most existing defenses against Deepfake attacks are passive methods that detect algorithm-specific artifacts and therefore generalize poorly. Meanwhile, existing active defense methods focus only on defending against face attribute manipulations, and establishing an active, sustainable defense mechanism for face swap detection remains highly challenging. We therefore propose FSD-GAN (Face Swap Detection based on Generative Adversarial Network), a novel training framework that is immune to the evolution of face swap attacks. Specifically, FSD-GAN contains three modules: a data processing module; an attack module that generates fake faces used only during training; and a defense module consisting of a fingerprint generator and a fingerprint discriminator. The fingerprint generator produces a latent noise fingerprint that we embed into face images, imperceptible to attackers both visually and statistically. Once an attacker uses these protected faces to perform face swap attacks, the fingerprints transfer from the training data (protected faces) to the generative models (real-world face swap models) and persist in the generated results (swapped faces). Our discriminator readily detects the latent noise fingerprints embedded in face images, reducing face swap detection to verifying whether a fingerprint is present in a swapped face image. Moreover, we train the attack and defense modules alternately within an adversarial framework, making the defense module more robust. We illustrate the effectiveness and robustness of FSD-GAN through extensive experiments, demonstrating that it can handle diverse face images, mainstream face swap models, and JPEG compression at various quality levels.
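To make the core idea concrete, the following is a minimal toy sketch, not the paper's actual method: it embeds a faint Gaussian noise fingerprint into an image and detects it by normalized correlation. In FSD-GAN both the fingerprint generator and the discriminator are learned networks trained adversarially; the embedding strength `ALPHA`, the image size, and the correlation-based detector here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64
ALPHA = 0.05  # embedding strength (hypothetical; kept small for imperceptibility)

def make_fingerprint(shape, rng):
    """Latent noise fingerprint: zero-mean, unit-variance Gaussian noise."""
    return rng.standard_normal(shape)

def embed(image, fingerprint, alpha=ALPHA):
    """Add a faint copy of the fingerprint to the image (the 'protection' step)."""
    return image + alpha * fingerprint

def detect(image, fingerprint):
    """Normalized correlation between image and fingerprint.
    A high score suggests the fingerprint is present."""
    a = image - image.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

image = rng.uniform(0.0, 1.0, (H, W))   # stand-in for a face image
fp = make_fingerprint((H, W), rng)

protected = embed(image, fp)
score_protected = detect(protected, fp)   # clearly above zero
score_clean = detect(image, fp)           # near zero

print(f"protected: {score_protected:.3f}, clean: {score_clean:.3f}")
```

The correlation gap between protected and clean images is what a learned discriminator exploits; in the full system the fingerprint additionally survives the attacker's generative pipeline, so the same check applies to swapped faces.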