A simple GUI app to synchronize recorded audio with video lip movements using the Wav2Lip-HQ model.
- Live Recording: Record your own audio and video directly from the app.
- Lip Synchronization: Automatically apply the audio recording to the video to make the lips match the speech.
- Clone the repository:
  git clone https://github.com/cainky/Lipsync.git
- Navigate to the repository directory:
  cd Lipsync
- Put the necessary model files into backend/wav2lip-hq/checkpoints, as described in the wav2lip-hq directory (see the sketch after this list for a quick way to verify they are in place).
- Install Poetry and the required packages for the backend (assuming you have Python already installed):
  cd backend
  curl -sSL https://install.python-poetry.org | python3 -
  poetry install
- Run the backend app:
  poetry run python app.py
- Install the required packages for the frontend (assuming you have npm already installed):
  cd ../frontend
  npm install
- Run the frontend app:
  npm run dev
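
Before starting the backend, it can help to confirm the model weights are actually in place. The snippet below is a minimal sketch, run from the repository root; the individual filenames are assumptions based on the upstream Wav2Lip-HQ project, so adjust the list to whatever the wav2lip-hq directory actually specifies.

```python
# check_checkpoints.py - minimal sketch; run from the repository root.
# The filenames below are assumptions based on the upstream Wav2Lip-HQ
# project; adjust them to match the wav2lip-hq directory's instructions.
from pathlib import Path

CHECKPOINT_DIR = Path("backend/wav2lip-hq/checkpoints")
EXPECTED = [
    "wav2lip_gan.pth",        # assumed lip-sync generator weights
    "face_segmentation.pth",  # assumed face segmentation weights
    "esrgan_yunying.pth",     # assumed super-resolution weights
]

missing = [name for name in EXPECTED if not (CHECKPOINT_DIR / name).is_file()]
if missing:
    print(f"Missing checkpoints in {CHECKPOINT_DIR}: {', '.join(missing)}")
else:
    print("All expected checkpoint files are present.")
```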
- Alternatively, to run both services with Docker, run the following from the root project directory:
  docker-compose build
  docker-compose up
- The frontend container is available at http://localhost:3000
- The backend container is available at http://localhost:5000
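
To confirm both containers came up, a quick check like the one below can be run from the host. This is only a sketch: it assumes the frontend and backend respond to plain GET requests at their root URLs, which may not hold for every route.

```python
# health_check.py - minimal sketch assuming both services answer a GET to "/".
import urllib.request

for name, url in [("frontend", "http://localhost:3000"),
                  ("backend", "http://localhost:5000")]:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status} at {url}")
    except Exception as exc:
        print(f"{name}: not reachable at {url} ({exc})")
```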
- Record Audio: Click the 'Record Audio' button and speak into your microphone.
- Record Video: Click the 'Record Video' button and record a video clip.
- Merge Recordings: Click the 'Merge Recordings' button and wait for the process to complete. Your output video will appear when it's ready.
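
If you would rather script the merge step instead of clicking through the UI, something along the lines of the snippet below should work. It is only a sketch: the /merge endpoint, the form field names, and the response format are assumptions, so check the backend's route definitions before relying on it.

```python
# merge_example.py - hypothetical client for the backend merge step.
# The endpoint path ("/merge") and the form field names are assumptions;
# confirm them against the routes defined in the backend code.
import requests

with open("recording.wav", "rb") as audio, open("recording.mp4", "rb") as video:
    resp = requests.post(
        "http://localhost:5000/merge",   # assumed endpoint on the backend
        files={"audio": audio, "video": video},
        timeout=600,  # lip-sync inference can take a while
    )

resp.raise_for_status()
with open("output.mp4", "wb") as out:
    out.write(resp.content)
print("Saved synced video to output.mp4")
```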
This project is licensed under the terms of the MIT License.