Wav2Keyword is a keyword spotting (KWS) model based on Wav2Vec 2.0. It achieves state-of-the-art results on the Speech Commands datasets V1 and V2.
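A minimal sketch of the general idea behind this kind of model: a pretrained self-supervised encoder (here faked with random features, standing in for a frozen Wav2Vec 2.0) produces per-frame embeddings, which are pooled and passed through a linear softmax head over the keyword vocabulary. All names, shapes, and the stand-in encoder are illustrative assumptions, not the actual Wav2Keyword implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_KEYWORDS = 12   # e.g. "yes", "no", "up", "down", ... (illustrative)
EMBED_DIM = 768     # wav2vec 2.0 base hidden size
NUM_FRAMES = 49     # roughly 1 s of audio at the encoder's frame rate

def pretrained_encoder(waveform):
    """Stand-in for a frozen wav2vec 2.0 encoder: returns one
    EMBED_DIM vector per frame (random here, for illustration)."""
    return rng.standard_normal((NUM_FRAMES, EMBED_DIM))

def classify(frame_embeddings, W, b):
    """Mean-pool frames into one utterance vector, then apply a
    linear softmax head over the keyword vocabulary."""
    pooled = frame_embeddings.mean(axis=0)   # (EMBED_DIM,)
    logits = W @ pooled + b                  # (NUM_KEYWORDS,)
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()                   # keyword probabilities

# Randomly initialized classification head (would be trained in practice).
W = rng.standard_normal((NUM_KEYWORDS, EMBED_DIM)) * 0.01
b = np.zeros(NUM_KEYWORDS)

probs = classify(pretrained_encoder(None), W, b)
print(probs.shape)  # (12,)
```

In the real setting the encoder weights come from self-supervised pretraining, and only the small head (plus optionally the encoder) is fine-tuned on labeled keyword utterances.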
A library built for easier audio self-supervised training and downstream task evaluation.
Classify audio with neural nets on embedded systems like the Raspberry Pi
PyTorch reimplementation of DiffWave, a high-quality waveform synthesizer, for unconditional generation.
Kaggle Competitions: TensorFlow Speech Recognition Challenge
PyTorch implementation of BiFSMN (IJCAI 2022).
Attention-based model for keyword spotting.
Speech command recognition via DenseNet transfer learning from UrbanSound8k, in Keras/TensorFlow.
Generalized Deep Multiset Canonical Correlation Analysis for Multiview Learning of Speech Representations
Effective processing pipeline and advanced neural network architectures for small-footprint keyword spotting
Audio Classification with AlexNet and Speech Commands dataset
Multi-class classification of speech command data, collected from the Kaggle speech recognition challenge and implemented in PyTorch.
This project is about spotting a keyword from the Google Speech Commands Dataset.
A model-based agent for Chinese speech recognition.