The misuse of deepfake technology poses serious security threats, such as identity theft. DeepSafe is a defense system that employs an Adversarial Robust Watermarking technique to disrupt face-swapping models while enabling image source tracking. Based on the Dual Defense framework, this system utilizes the Original-domain Feature Emulation Attack (OFEA) method to embed an invisible watermark, preventing face swapping while ensuring identity traceability.
DeepSafe offers a two-layered defense mechanism against deepfake attacks:
- When users upload their images, DeepSafe embeds an invisible adversarial watermark that disrupts face-swapping models.
- The watermarked image is uploaded to the platform, ensuring protection.
- If an attacker attempts face-swapping, the image degrades or fails to swap properly due to the embedded watermark.
- Even if the face-swapping model processes the image, DeepSafe preserves watermark information.
- The watermark can be extracted from manipulated images, enabling identity tracking and alerting affected users.
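The embed-then-extract flow above can be illustrated with a toy sketch. The real system learns an adversarial watermark via OFEA; here a simple least-significant-bit scheme stands in purely to show the pipeline shape (protect an image, then recover the identity message later).

```python
# Toy stand-in for DeepSafe's embed -> extract flow (NOT the actual
# OFEA watermark): hide a short bit message in pixel LSBs, recover it.

def embed(pixels, message_bits):
    """Hide one message bit in the LSB of each leading pixel value."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n_bits):
    """Recover the first n_bits from pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

if __name__ == "__main__":
    image = [200, 13, 77, 150, 90]   # stand-in grayscale pixel values
    watermark = [1, 0, 1, 1]         # a 4-bit identity message
    protected = embed(image, watermark)
    print(extract(protected, 4))     # the message survives in the image
```

The learned watermark differs in that it is imperceptible, robust to the face-swapping model, and simultaneously adversarial to it; the LSB toy only mirrors the embed/extract contract.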
Install all the requirements in `requirements.txt` inside your created conda environment.

Run the backend:

```shell
cd deepsafe/backend
uvicorn main:app --reload
```

Run the frontend:

```shell
cd deepsafe/frontend
npm run dev
```
The watermark is encoded into the raw image. If the face-swapped output comes out visibly damaged, the image is well protected.
- 4 identities: Winter, Chuu, Cha Eun-woo, and Byeon Woo-seok (800 images per person; 4,000 images total)
- Frames extracted from YouTube videos (Winter: 35, Chuu: 48, Cha Eun-woo: 31, Byeon Woo-seok: 48 videos)
- Preprocessing:
  - Extracted one frame every 20 frames using OpenCV
- Face cropping: resized to 256x256
- Final resizing: 160x160 for model training
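The preprocessing steps above can be sketched as follows. The actual pipeline reads video frames with OpenCV and crops detected faces; the frame-sampling logic and the two target resolutions are shown here as a minimal, dependency-free sketch.

```python
# Sketch of the frame-sampling step: keep every 20th frame of a video.
# In the real pipeline, frames come from cv2.VideoCapture and faces are
# cropped to 256x256, then resized to 160x160 for model training.

CROP_SIZE = (256, 256)    # face crop resolution
TRAIN_SIZE = (160, 160)   # final resolution fed to the model

def sample_every(frames, step=20):
    """Return every `step`-th frame (indices 0, step, 2*step, ...)."""
    return [f for i, f in enumerate(frames) if i % step == 0]

if __name__ == "__main__":
    # With a 100-frame clip, sampling keeps 5 frames.
    print(sample_every(list(range(100))))
```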
- Training Epochs: 47,000
- Loss Functions:
- L1 Loss (weight = 1.0)
- VGG Perceptual Loss (weight = 0.08)
- Optimizer: Adam (lr=5e-5) with MultiStepLR scheduler
- Baseline Code: Faceswap Deepfake Pytorch
- Loss:
  - `loss = alpha * image_loss + beta * message_loss`
  - `image_loss = lambda_en * image_encoded_loss + lambda_s * ssim_loss + lambda_adv * image_adv_logits_loss`
  - `message_loss = L_wm_en + L_wm_adv`
  - `alpha = 2`, `lambda (beta) = 1`
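The loss composition above can be written out as a small function. The component names and the weights `alpha = 2`, `beta = 1` come from this README; the `lambda_*` default values below are placeholders, not the trained configuration.

```python
# Total training loss as described above:
#   loss = alpha * image_loss + beta * message_loss
# where image_loss combines reconstruction, SSIM, and adversarial terms,
# and message_loss combines encoder/adversarial watermark terms.
# lambda_* defaults are illustrative placeholders.

def total_loss(image_encoded_loss, ssim_loss, image_adv_logits_loss,
               l_wm_en, l_wm_adv,
               alpha=2.0, beta=1.0,
               lambda_en=1.0, lambda_s=1.0, lambda_adv=1.0):
    image_loss = (lambda_en * image_encoded_loss
                  + lambda_s * ssim_loss
                  + lambda_adv * image_adv_logits_loss)
    message_loss = l_wm_en + l_wm_adv
    return alpha * image_loss + beta * message_loss
```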
- Train:Valid:Test=0.8:0.1:0.1
- message size = 4 (4bits, watermark information)
- batch size = 16
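The 4-bit message size is enough to distinguish the 4 identities in the dataset (with room to spare). A minimal mapping between an integer identity id and its bit vector; the bit width is from this README, but the specific id assignment is illustrative.

```python
# Map an identity id to a fixed-width bit vector (the 4-bit watermark
# message) and back. Id assignment here is illustrative.

def id_to_bits(identity_id, width=4):
    """Most-significant bit first, e.g. 5 -> [0, 1, 0, 1]."""
    return [(identity_id >> i) & 1 for i in reversed(range(width))]

def bits_to_id(bits):
    """Inverse of id_to_bits."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value
```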
- `frontend/`: frontend code (stack: Next.js)
- `backend/ddf/`: Dual Defense code aligned with the backend
- `backend/`: backend code (stack: FastAPI)
- Note: the full face-swap training code is in the `feature/faceswap` branch.
- Dual Defense official codes: https://github.com/Xming-zzz/DualDefense