Official repository for Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation (WACV 2024).
A Python 3.x environment with ffmpeg is required. The remaining requirements can be installed with:
pip install -r requirements.txt
Due to the LRW license agreement, we are only able to provide a checkpoint of our model trained on CREMA.
The entire test set generated by our method can be downloaded from here.
- Download and unpack the checkpoints (our model and the pretrained audio encoder).
- Download and unpack the preprocessed CREMA video and audio files.
- Specify paths and options in config_crema.yaml (check the comments in the file; an illustrative sketch follows this list).
- Run the script: python sample.py
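To give a rough idea before you open it, here is a minimal sketch of what config_crema.yaml might contain. The key names and paths below are illustrative assumptions only, not the actual contents of the file; the comments in config_crema.yaml itself are authoritative.

# Illustrative sketch only -- the real keys and defaults live in config_crema.yaml.
checkpoint: checkpoints/crema.pt                  # assumed path to our model checkpoint
audio_encoder: checkpoints/audio_encoder.pt       # assumed path to the pretrained audio encoder
data_dir: data/crema                              # assumed location of the unpacked CREMA files
output_dir: results                               # assumed directory for generated videos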
You can freely use audio recordings of your choice. The only requirements are a 16 kHz sampling rate and a single (mono) audio channel. Please note that our model can generate videos up to 9 seconds long, depending on the audio.
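If your recording does not meet these requirements, a standard ffmpeg invocation (not specific to this repo; input.wav and input_16k.wav are placeholder names) will resample it to 16 kHz mono:

ffmpeg -i input.wav -ar 16000 -ac 1 input_16k.wav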
It is highly recommended to use a frame from the provided CREMA videos, since this instance of the model was trained only on clips with a green background. If you want to use your own identity frame anyway, please follow this repo for face alignment. Additionally, you may want to segment the person and replace the background with green; one possible approach is sketched below.
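The following is a minimal sketch of such a background replacement, assuming MediaPipe's selfie segmentation as the segmentation model; this tool choice and the file names are our illustrative assumptions, not part of this repo.

# Sketch: paint the background of an identity frame green using MediaPipe
# selfie segmentation (an assumed tool choice, not used by this repo).
import cv2
import mediapipe as mp
import numpy as np

def green_background(in_path, out_path, threshold=0.5):
    image = cv2.imread(in_path)                   # frame in BGR order
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
    with mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1) as seg:
        mask = seg.process(rgb).segmentation_mask  # float mask in [0, 1], person ~ 1
    green = np.full_like(image, (0, 255, 0))       # solid green in BGR
    keep_person = mask[..., None] > threshold      # broadcast mask over channels
    cv2.imwrite(out_path, np.where(keep_person, image, green))

green_background("my_frame.png", "my_frame_green.png")  # placeholder file names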
The training code can be found in the train branch. We apologize for the delay.
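After cloning the repository, you can switch to that branch with:

git checkout train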
If you use our code or paper, please cite:
@inproceedings{stypulkowski2024diffused,
title={Diffused heads: Diffusion models beat gans on talking-face generation},
author={Stypu{\l}kowski, Micha{\l} and Vougioukas, Konstantinos and He, Sen and Zi{\k{e}}ba, Maciej and Petridis, Stavros and Pantic, Maja},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={5091--5100},
year={2024}
}
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.