[TDSC 2025] Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection


Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection

Delong Zhu 1, Yuezun Li 1*, Baoyuan Wu 2, Jiaran Zhou 1, Zhibo Wang 3, Siwei Lyu 4

1 Ocean University of China  2 The Chinese University of Hong Kong  3 Zhejiang University  4 University at Buffalo, SUNY

* Corresponding author


Introduction 📖

This repo, named FacePoison, contains the official PyTorch implementation of our paper Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection. We are actively updating and improving this repository. If you find any bugs or have suggestions, feel free to open an issue or submit a pull request (PR). 💖

Getting Started 🏁

1. Clone the code and prepare the environment 🛠️

Note

We recommend using Anaconda to manage the Python environment:

git clone https://github.com/OUC-VAS/FacePoison
cd FacePoison

# create env using conda
conda create -n FacePoison python=3.8
conda activate FacePoison
pip3 install torch torchvision torchaudio
pip install -r requirements.txt

For reference, the versions and hardware we used are listed below:

- Python 3.8.19
- PyTorch 2.2.1
- CUDA 12.1
- GPU NVIDIA RTX 3060
- OS Ubuntu 22.04
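After installing, you can sanity-check that the core packages resolve. This is a minimal hypothetical snippet, not part of the repo:

```python
import importlib.util

def check_packages(packages):
    """Return a dict mapping each package name to whether it can be imported."""
    return {p: importlib.util.find_spec(p) is not None for p in packages}

if __name__ == "__main__":
    for pkg, ok in check_packages(["torch", "torchvision", "torchaudio"]).items():
        print(f"{pkg}: {'found' if ok else 'MISSING'}")
```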

2. Data Preparation 📦

You can download the images and annotations of the WIDER dataset from Google Drive or OneDrive. Unzip them and place them in ./attack_public_datasets.

If you intend to protect videos, we provide the data for id0, id1, and id2 from Celeb-DF, as used in our main experiments. You can download them from Google Drive or OneDrive for the next step of generating perturbations. Unzip them and place them in ./VideoFacePoison/attack_public_datasets.
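Before running the attack scripts, it may help to verify that the layout above is in place. This is a small hypothetical helper (not part of the repo; the directory names come from the steps in this section):

```python
from pathlib import Path

def missing_dirs(root, expected):
    """Return the expected subdirectories that do not exist under root."""
    root = Path(root)
    return [d for d in expected if not (root / d).is_dir()]

if __name__ == "__main__":
    # Directories described in the Data Preparation step
    print(missing_dirs(".", ["attack_public_datasets",
                             "VideoFacePoison/attack_public_datasets"]))
```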

3. Download Pretrained Weights 📥

Please download all pretrained weights of the face detectors from Google Drive or OneDrive, and place them in the corresponding ./detectors/xxx/weights directories.

4. FacePoison 🎭

Important

To attack YOLO5Face, uncomment lines 24 and 25 in ./detectors/yolov5face/__init__.py; comment them out again during inference.

Attack 💉

cd attack_public_datasets
python run_poison.py

If the script runs successfully, the adversarial samples will be saved in /save_data/wider/adv/.
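To confirm that the attack produced output, you can count the image files in the save directory. A hypothetical snippet (the path follows the one above; adjust if your layout differs):

```python
from pathlib import Path

def count_images(folder, exts=(".png", ".jpg", ".jpeg")):
    """Count image files under folder, recursively; 0 if the folder is absent."""
    return sum(1 for p in Path(folder).rglob("*") if p.suffix.lower() in exts)

if __name__ == "__main__":
    print(count_images("save_data/wider/adv"), "adversarial samples found")
```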

Inference 🔎

python origin_detect.py  # Clean_sample
python adv_detect.py  # Adversarial_sample

5. VideoFacePoison 👾

Attack 💉

FacePoison is applied to every frame of the videos:

# FP-all
cd VideoFacePoison
python run_poison.py

Inference 🔎

Note

  • Since the Celeb-DF videos do not have face annotations, we use the detection results from PyramidBox — the best-performing face detector on the WIDER dataset — as the ground truth for the other four detectors. To detect faces for PyramidBox itself, we use the results from the second-best detector: DSFD.
  • To significantly reduce the time cost of generating VideoFacePoison, the code uses the perturbations for testing directly after they are generated.
  • Before executing VideoFacePoison, make sure that FacePoison has been applied to all frames.
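Since detections are scored against PyramidBox pseudo ground truth, matching is typically done by intersection-over-union (IoU). A generic sketch of the standard formula, not the repo's evaluation code:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```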
python origin_detect.py   # Clean_sample
python adv_detect.py --exp 'exp3' # Adversarial_sample

The --exp argument supports four options: [origin, exp1, exp2, exp3], which correspond to the methods [FP-all, FP-fixed, FP-forward, VideoFacePoison] in the paper.
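To run all four settings in one go, a hypothetical batch driver could loop over the flags. The mapping below restates the correspondence above; `adv_detect.py` is the script from this section:

```python
import subprocess

# --exp flag -> method name in the paper (from the README)
EXP_TO_METHOD = {
    "origin": "FP-all",
    "exp1": "FP-fixed",
    "exp2": "FP-forward",
    "exp3": "VideoFacePoison",
}

def run_all(script="adv_detect.py"):
    """Run the inference script once per experiment flag."""
    for exp, method in EXP_TO_METHOD.items():
        print(f"Running {method} ({exp}) ...")
        subprocess.run(["python", script, "--exp", exp], check=True)
```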

Acknowledgements 💐

We would like to thank the contributors of the RetinaFace, YOLO5Face, PyramidBox, S3FD, and DSFD repositories for their open research and contributions.

Citation 💖

If you find FacePoison useful for your research, please 🌟 this repo and cite our work using the following BibTeX:

@article{zhu2025hiding,
  title={Hiding faces in plain sight: Defending deepfakes by disrupting face detection},
  author={Zhu, Delong and Li, Yuezun and Wu, Baoyuan and Zhou, Jiaran and Wang, Zhibo and Lyu, Siwei},
  journal={IEEE Transactions on Dependable and Secure Computing},
  year={2025}
}

@inproceedings{li2023face,
  title={Face Poison: Obstructing DeepFakes by Disrupting Face Detection},
  author={Li, Yuezun and Zhou, Jiaran and Lyu, Siwei},
  booktitle={IEEE International Conference on Multimedia and Expo},
  year={2023}
}

Contact 📧

Delong Zhu; zhudelong@stu.ouc.edu.cn
