This repository demonstrates the Wav2Lip model, which synchronizes lip movements with speech in videos. The deep learning model generates accurate, natural-looking lip sync, improving the visual quality of talking-head videos.
- Explanation: Link
We've used the following video and audio for this demonstration:
The final result after lip sync can be found in the `outputs.mp4` file in this repository.
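The output above can typically be reproduced with the inference script from the upstream Wav2Lip repository. The command below is a sketch: the input filenames are placeholders, and it assumes the repo has been cloned, its dependencies installed, and a pretrained checkpoint (e.g. `wav2lip_gan.pth`) downloaded into `checkpoints/`:

```shell
# Run Wav2Lip inference on a face video and a target audio track.
# Paths are placeholders for this repo's demo input files.
python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \
  --face input_video.mp4 \
  --audio input_audio.wav \
  --outfile outputs.mp4
```

The `--face` argument takes the source video containing the speaker's face, `--audio` takes the speech to sync to, and `--outfile` names the rendered result.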
This project was completed as a screening task for an AI Engineer Intern role. It served as a practical exercise in applying state-of-the-art AI methods to a real-world problem in video processing and enhancement.