[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
[CVPR 2024] Official code for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis"
Wav2Lip UHQ extension for Automatic1111
Code for the IJCAI 2021 paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion"
Official code for "Parallel and High-Fidelity Text-to-Lip Generation" (AAAI 2022)
Database accompanying "Learning to Predict Salient Faces: A Novel Visual-Audio Saliency Model" (ECCV 2020)
DoyenTalker uses deep learning techniques to generate personalized avatar videos that speak user-provided text in a specified voice. The system utilizes Coqui TTS for text-to-speech generation, along with various face rendering and animation techniques to create a video where the given avatar articulates the speech.
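As a minimal illustration of the text-to-speech stage such a pipeline relies on, the sketch below uses the Coqui TTS Python API; the model name, input text, and output path are illustrative assumptions, not values taken from the DoyenTalker repository.

```python
# Minimal sketch of the Coqui TTS step in a talking-avatar pipeline.
# Model name and file paths are hypothetical examples, not taken from
# the DoyenTalker repository.
from TTS.api import TTS

# Load a pretrained single-speaker English model (illustrative choice).
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Synthesize the user-provided text to a WAV file; the resulting audio
# would then drive the face rendering and lip animation stage.
tts.tts_to_file(text="Hello, I am your avatar.", file_path="speech.wav")
```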
Thin-Plate Spline Motion Model (TPSMM) converted to ONNX
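As a rough sketch of how such an ONNX export is typically run, the snippet below uses onnxruntime; the file name, the assumption of a source/driving input pair, and the 1x3x256x256 NCHW float32 layout are guesses about the exported graph, so the input metadata is inspected rather than hard-coded.

```python
# Minimal sketch of running an ONNX-exported motion model with onnxruntime.
# The file name and tensor layout are assumptions; check the exported
# graph's metadata for the real input names and shapes.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("tpsmm.onnx", providers=["CPUExecutionProvider"])

# Print the exported graph's actual input names, shapes, and dtypes.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)

# Dummy source image and driving frame (random pixels, illustrative only).
source = np.random.rand(1, 3, 256, 256).astype(np.float32)
driving = np.random.rand(1, 3, 256, 256).astype(np.float32)

# Feed inputs by the names reported by the session instead of guessing them.
feeds = {
    session.get_inputs()[0].name: source,
    session.get_inputs()[1].name: driving,
}
outputs = session.run(None, feeds)
print([o.shape for o in outputs])
```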