**NeurIPS 2024 (Poster)**
This repository is the official implementation of the paper "FreqBlender: Enhancing DeepFake Detection by Blending Frequency Knowledge".
Hanzhe Li, Jiaran Zhou, Yuezun Li#, Baoyuan Wu, Bin Li, Junyu Dong
Vas Group
#Corresponding author.
The inference code was tested on:
Ubuntu 22.04 LTS, Python 3.9.17, CUDA 12.2, GeForce RTX 3090 Ti
You can create a virtual environment and install the required dependencies as follows:

```bash
pip install virtualenv
mkdir venv && cd venv
virtualenv FreqBlender
source FreqBlender/bin/activate
pip install -r requirement.txt
```
**Dataset**

- Download the datasets Celeb-DF-v2 and FaceForensics++ and place them on a drive with sufficient storage.
- Download the landmark detector (`shape_predictor_81_face_landmarks.dat`) from here and place it in the `./dataPreprocess/` folder.
**Dataset preprocessing**

Preprocess the data with the following commands:

```bash
python dataPreprocess/crop_dlib_ff.py -d Deepfakes -p "dataset_path"
python dataPreprocess/crop_retina_ff.py -d Deepfakes -p "dataset_path"
```
- Build a facebank for mobileface. Here we use the first frame of each real video in the FF++ (c23) dataset as the face feature of that video:

```bash
python net/mobileface/facebank.py -p "dataset_path"
```
**Dataset statistics**

The statistics in the paper can be reproduced as follows (note: frequency visualization can only be run after the data preparation phase):

```bash
python dataAnalysis/calculate_dct.py -d "dataset_name" -p "dataset_path"
```

This produces the frequency statistics (e.g. `Deepfake_real.npy`) corresponding to the forgery mode.
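For intuition, such frequency statistics are essentially averaged DCT spectra computed over many face crops. The sketch below illustrates that idea only; it is not the repository's `calculate_dct.py`, and the helper name and the use of SciPy are our own assumptions:

```python
import numpy as np
from scipy.fft import dctn

def average_dct_spectrum(images):
    """Average the log-magnitude 2-D DCT over a set of grayscale face crops.

    `images` is an iterable of HxW float arrays in [0, 1], all the same size.
    Returns an HxW array; low frequencies sit in the top-left corner.
    """
    acc, n = None, 0
    for img in images:
        # Orthonormal 2-D type-II DCT, a common choice for frequency analysis
        spec = np.log(np.abs(dctn(img, norm="ortho")) + 1e-8)
        acc = spec if acc is None else acc + spec
        n += 1
    return acc / n

# Toy usage: random arrays stand in for cropped face frames
rng = np.random.default_rng(0)
stats = average_dct_spectrum([rng.random((64, 64)) for _ in range(4)])
np.save("toy_real.npy", stats)  # analogous in spirit to Deepfake_real.npy
print(stats.shape)              # (64, 64)
```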
**Statistical data visualization**

The visualizations in the paper can be reproduced with:

```bash
python dataAnalysis/freq_explore.py -d "dataset_name"
```
**Training process visualization**

```bash
tensorboard --logdir=./logs
```
**Training**

- Before you start training, download our ResNet weights and put them in the corresponding folder (`./FreqBlender/net/resnet/checkpoints`). Alternatively, you can train a checkpoint yourself and place it in that folder.
- Before the final training, we also need to download the weights of the face prediction model and place them as follows:

```
.
└── net
    └── mobileface
        └── save_model
            ├── mtcnn
            │   ├── ONet.pth
            │   ├── PNet.pth
            │   └── RNet.pth
            └── mobilefacenet.pth
```
- Finally, modify the corresponding configuration file in `./config`, and you can run `main.py` directly.
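To give a feel for what the method does, the core idea of frequency blending can be sketched as exchanging a DCT frequency band between a fake face and a real face. This is a simplified, hand-rolled illustration with a fixed cutoff; in the paper, FPNet learns the frequency partition rather than using a hard-coded band, and the function and parameter names below are our own:

```python
import numpy as np
from scipy.fft import dctn, idctn

def blend_frequency_band(fake, real, cutoff=16):
    """Take the low-frequency DCT block (indices < cutoff) from `fake`
    and the remaining frequencies from `real`, then invert the DCT.

    `fake` and `real` are HxW float arrays of the same shape.
    """
    f_spec = dctn(fake, norm="ortho")
    r_spec = dctn(real, norm="ortho")
    mask = np.zeros_like(f_spec)
    mask[:cutoff, :cutoff] = 1.0          # low frequencies sit top-left in DCT layout
    blended_spec = mask * f_spec + (1 - mask) * r_spec
    return idctn(blended_spec, norm="ortho")

rng = np.random.default_rng(1)
fake = rng.random((64, 64))
real = rng.random((64, 64))
out = blend_frequency_band(fake, real)
print(out.shape)  # (64, 64)
```

With `cutoff=0` the output equals the real image, and with the full spectrum it equals the fake one; the learned FPNet replaces this hard binary mask with adaptively predicted frequency components.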
Our weights (FPNet) can be downloaded here.
You can download our model weights (EfficientNet-B4) and use them directly for testing:

```bash
python inference/inference_dataset.py -w "model_weights" -d "dataset_name"
```
-
You can also create a folder to place your own weights for testing.
Note: The model we are testing here is the model of the previous work that was trained using our method.
- If you want to train to obtain our weights, you need to replace two files of SBI: 1) `./src/configs/sbi/base.json` should be replaced by our base.json, and 2) `./src/utils/sbi.py` should be replaced by our sbi.py. Then, put our weights (FPNet) and code (`./net/aeNet/conv_autoencoder_pixelshuffle.py`) into SBI's `./src/utils/aeNet` folder.
**Citation**

Thanks for your attention. Please cite our paper:
```
@article{li2024freqblender,
  title={FreqBlender: Enhancing DeepFake Detection by Blending Frequency Knowledge},
  author={Hanzhe Li and Jiaran Zhou and Yuezun Li and Baoyuan Wu and Bin Li and Junyu Dong},
  journal={Advances in Neural Information Processing Systems},
  year={2024}
}
```