VIPL AVSU
Audio-Visual Speech Understanding Research Group at the Key Laboratory of Intelligent Information Processing, Chinese Academy of Sciences
Repositories
- learn-an-effective-lip-reading-model-without-pains Public
The PyTorch code and models for "Learn an Effective Lip Reading Model without Pains" (https://arxiv.org/abs/2011.07557), which reaches state-of-the-art performance on the LRW-1000 dataset.
- LipNet-PyTorch Public
A state-of-the-art PyTorch implementation of the method described in the paper "LipNet: End-to-End Sentence-level Lipreading" (https://arxiv.org/abs/1611.01599).
- deep-face-speechreading Public
Visual speech recognition with face inputs: code and models for the F&G 2020 paper "Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition".
- Lipreading-DenseNet3D Public
The DenseNet3D model from "LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild" (https://arxiv.org/abs/1810.06990).
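For orientation, the sketch below shows the general shape of a word-level lip-reading model of the kind these repositories implement: a 3D convolutional front-end over a mouth-crop clip, followed by a temporal model and a classifier over a word vocabulary such as LRW-1000's. It is a minimal, generic PyTorch illustration, not code from any repository above; the class name `LipReadingSketch`, the layer sizes, and the 29-frame 88x88 grayscale input are illustrative assumptions.

```python
# Generic word-level lip-reading sketch (illustrative only; not the repos' actual code).
import torch
import torch.nn as nn


class LipReadingSketch(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # 3D convolution captures short-range motion across neighboring frames.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # Per-frame global pooling stands in for a 2D CNN backbone (e.g. a ResNet).
        self.spatial_pool = nn.AdaptiveAvgPool2d(1)
        # A bidirectional GRU models the temporal dynamics of the word.
        self.gru = nn.GRU(64, 256, num_layers=2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        # x: (batch, 1, frames, height, width) grayscale mouth crops
        feats = self.frontend(x)                          # (B, 64, T, H', W')
        b, c, t, h, w = feats.shape
        feats = feats.transpose(1, 2).reshape(b * t, c, h, w)
        feats = self.spatial_pool(feats).view(b, t, c)    # (B, T, 64)
        out, _ = self.gru(feats)                          # (B, T, 512)
        return self.classifier(out.mean(dim=1))           # average over time, then classify


if __name__ == "__main__":
    model = LipReadingSketch(num_classes=1000)
    clip = torch.randn(2, 1, 29, 88, 88)   # two 29-frame 88x88 mouth-crop clips
    print(model(clip).shape)                # torch.Size([2, 1000])
```

The real models above replace the pooling step with deeper backbones (e.g. DenseNet3D or a ResNet front-end) and, for sentence-level lip reading as in LipNet, swap the word classifier for a CTC decoder.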