Multilingual Voice Understanding Model
Updated Oct 18, 2024 - Python
A neural network model that detects five different emotions from male and female speech audio. (Deep Learning, NLP, Python)
💎 A list of accessible speech corpora for ASR, TTS, and other Speech Technologies
[ACL 2024] Official PyTorch code for extracting features and training downstream models with emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation
Building and training a speech emotion recognizer that predicts human emotions, using Python, scikit-learn, and Keras
How to use our public wav2vec2 dimensional emotion model
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
Speech emotion recognition using convolutional recurrent networks, trained on IEMOCAP
The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems, ranging from speech recognition (both HMM/DNN and end-to-end) to speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many others.
Speaker-independent emotion recognition
A collection of datasets for the purpose of emotion recognition/detection in speech.
Bidirectional LSTM network for speech emotion recognition.
TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18
Wav2Vec for speech recognition, classification, and audio classification
This repository contains PyTorch implementations of four different models for classifying emotions from speech.
[ICASSP 2023] Official Tensorflow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech Emotion Recognition".
Using Convolutional Neural Networks in speech emotion recognition on the RAVDESS Audio Dataset.
Official implementation of INTERSPEECH 2021 paper 'Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings'
Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information
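Most of the repositories above share the same basic pipeline: extract acoustic features from an audio clip, then classify them into emotion labels. The sketch below illustrates that pipeline end to end in plain NumPy under stated assumptions: the `log_mel_like_features` extractor and `NearestCentroidSER` classifier are hypothetical toy stand-ins (not code from any listed project), and the synthetic low/high-pitch tones stand in for real emotional speech from corpora like IEMOCAP or RAVDESS.

```python
import numpy as np

def log_mel_like_features(signal, n_fft=512, hop=256, n_bands=13):
    """Crude log band-energy features, a toy stand-in for MFCC/log-mel extraction."""
    frames = np.array([signal[i:i + n_fft] for i in range(0, len(signal) - n_fft, hop)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    # Pool FFT bins into coarse bands and average over time: one vector per clip.
    bands = np.array_split(spec, n_bands, axis=1)
    return np.log1p(np.array([b.mean() for b in bands]))

class NearestCentroidSER:
    """Tiny nearest-centroid classifier, a stand-in for the SVM/CNN/LSTM back ends."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0) for c in self.labels_])
        return self
    def predict(self, X):
        d = np.linalg.norm(np.asarray(X)[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Synthetic demo: low-pitch vs high-pitch tones as stand-ins for two emotion classes.
rng = np.random.default_rng(0)
def clip(freq, sr=16000):
    t = np.arange(sr) / sr
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.standard_normal(len(t))

X_train = [log_mel_like_features(clip(f)) for f in (190, 210, 1950, 2050)]
y_train = ["calm", "calm", "excited", "excited"]
model = NearestCentroidSER().fit(X_train, y_train)
preds = model.predict([log_mel_like_features(clip(200)), log_mel_like_features(clip(2000))])
print(preds)
```

In a real system you would swap the feature extractor for MFCCs or wav2vec 2.0 embeddings and the classifier for one of the neural models listed above; the clip-level pooling step is the main simplification here, since the recurrent and attention-based models keep the frame sequence instead of averaging it away.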