Keras version of Syncnet, by Joon Son Chung and Andrew Zisserman.
AR-based Android application that uses image processing and machine learning to make still images look like they are talking, with generated audio and lip movements synced to that audio.
Learning Lip Sync of Obama from Speech Audio
YerFace! A stupid facial performance capture engine for cartoon animation.
3D Avatar Lip Synchronization from speech (JALI based face-rigging)
AI Talking Head: create a video from plain text or an audio file in minutes, supporting 100+ languages and 350+ voice models.
Extension of Wav2Lip repository for processing high-quality videos.
Audio-Visual Lip Synthesis via Intermediate Landmark Representation
A simple Google Colab notebook that translates an original video into multiple languages with lip sync.
Rhubarb Lip Sync is a command-line tool that automatically creates 2D mouth animation from voice recordings. You can use it for characters in computer games, in animated cartoons, or in any other project that requires animating mouths based on existing recordings.
A Unity-based chatbot combining GPT-generated dialogue, Oculus Lip Sync, and Google Cloud Speech Recognition for lifelike virtual conversations. See the running version on the Upwork page.
PyTorch Implementation for Paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23)
Wav2Lip UHQ extension for Automatic1111