Visual Speech Recognition for Multiple Languages
A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper.
Auto-AVSR: Lip-Reading Sentences Project
"LipNet: End-to-End Sentence-level Lipreading" in PyTorch
Python toolkit for Visual Speech Recognition
Visual speech recognition with face inputs: code and models for F&G 2020 paper "Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition"
Deep visual speech recognition for Arabic words
Visual speech recognition using deep learning methods
Implementation of "Combining Residual Networks with LSTMs for Lipreading" in Keras and TensorFlow 2.0 (see the Keras sketch after this list)
Speaker-Independent Speech Recognition using Visual Features
EMOLIPS: a two-level approach for lip-reading emotional speech
LipReadingITA: Keras implementation of the method described in the paper "LipNet: End-to-End Sentence-level Lipreading". Research project for the University of Salerno.
In this repository, I explore using k2, icefall, and Lhotse for lip reading, adapting them to the lip-reading task; support for additional lip-reading datasets is planned.
Online Knowledge Distillation using LipNet and an Italian dataset. Master's Thesis Project.
Strong Gateway using speech processing, 3D vision, and language processing. Deployed using Django.
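
For orientation, here is a minimal PyTorch sketch of the LipNet-style architecture referenced above: spatiotemporal 3D convolutions over mouth-region frames, a bidirectional GRU over per-frame features, and per-frame logits intended for CTC training. The layer sizes, input resolution, and 28-symbol character vocabulary are illustrative assumptions, not code taken from any listed repository.

```python
import torch
import torch.nn as nn

class LipNetSketch(nn.Module):
    """Sketch of a LipNet-style lipreading model: 3D conv front-end + BiGRU + CTC head."""
    def __init__(self, vocab_size=28):  # e.g. 26 letters + space + CTC blank (assumed)
        super().__init__()
        # Spatiotemporal front-end: convolve jointly over (time, height, width).
        self.frontend = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        # Temporal back-end: bidirectional GRU over one feature vector per frame.
        self.gru = nn.GRU(input_size=64, hidden_size=256, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * 256, vocab_size)

    def forward(self, x):
        # x: (batch, channels, time, height, width)
        feats = self.frontend(x)
        # Collapse the spatial dimensions so each frame becomes a single vector.
        feats = feats.mean(dim=(3, 4)).transpose(1, 2)  # (batch, time, 64)
        out, _ = self.gru(feats)
        return self.classifier(out)  # per-frame logits, suitable for nn.CTCLoss

# Usage: 75 frames of 64x128 RGB mouth crops (sizes assumed for illustration).
logits = LipNetSketch()(torch.randn(2, 3, 75, 64, 128))
print(logits.shape)  # (2, 75, 28)
```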
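Similarly, a minimal TensorFlow 2 / Keras sketch of the "residual network + LSTM" lipreading pattern from the entry above: a per-frame ResNet backbone applied via TimeDistributed, followed by a bidirectional LSTM over the frame features. The ResNet50 backbone, frame count, image size, and 500-class word vocabulary are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, IMG_SIZE, NUM_CLASSES = 29, 112, 500  # assumed word-level setup

# Per-frame visual backbone (residual network), trained from scratch here.
backbone = tf.keras.applications.ResNet50(
    input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False,
    weights=None, pooling="avg")

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, IMG_SIZE, IMG_SIZE, 3)),
    layers.TimeDistributed(backbone),                        # one feature vector per frame
    layers.Bidirectional(layers.LSTM(256, return_sequences=True)),
    layers.GlobalAveragePooling1D(),                         # average over time
    layers.Dense(NUM_CLASSES, activation="softmax"),         # word-level prediction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```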