
A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition

Wenjie Zheng, Jianfei Yu, and Rui Xia


📄 Paper

This repository contains the code for UniVA, a framework that proposes unimodal valence-arousal driven contrastive learning for the multimodal multi-label emotion recognition task.

[Figure: overview of the UniVA framework]

Dependencies

conda env create -f environment.yml
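
After the environment is created, activate it before running any scripts. A minimal sketch, assuming the environment name defined in environment.yml is univa (check the name: field for the actual value):

conda activate univa  # "univa" is an assumed name; use the one in environment.yml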

Data preparation

Download link. Two files that exceed the upload size limit have also been placed at that link.
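
As a rough sketch of where the downloaded files might go (the data/ directory and the archive name below are placeholders, not the repository's actual paths; check the paths expected by the scripts under run_MOSEI/ and run_M3ED/):

mkdir -p data                  # hypothetical target directory
mv MOSEI_features.zip data/    # placeholder file name; substitute the actual download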


Evaluating UniVA on the MOSEI dataset

You can evaluate UniVA-RoBERTa on 4 NVIDIA 3090 GPUs by running the script below:

nohup bash run_MOSEI/run_MOSEI_TAV_ours.sh &
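
Since nohup detaches the job, its output goes to nohup.out by default (assuming the script does not redirect it elsewhere); you can follow the log with

tail -f nohup.out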

You should obtain the following results: Acc: 51.3, HL: 0.182, miF1: 60.5, maF1: 44.4.

To evaluate UniVA-GloVe on 1 NVIDIA 3090 Ti GPU, run the script below:

nohup bash run_MOSEI/run_MOSEI_TAV_ours_glove.sh &
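
If the machine has multiple GPUs, you can pin this single-GPU run to a specific card with the standard CUDA_VISIBLE_DEVICES variable (assuming the script does not set it itself), e.g.

CUDA_VISIBLE_DEVICES=0 nohup bash run_MOSEI/run_MOSEI_TAV_ours_glove.sh &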

You should obtain the following results: Acc: 49.2, HL: 0.205, miF1: 57.2, maF1: 37.2.

Evaluating UniVA on the M3ED dataset

You can evaluate UniVA-RoBERTa on 4 NVIDIA 3090 GPUs by running the script below:

nohup bash run_M3ED/run_M3ED_TAV_ours.sh &

You should obtain the following results: Acc: 50.6, HL: 0.149, miF1: 53.4, maF1: 40.2.

To evaluate UniVA-GloVe on 1 NVIDIA 3090 Ti GPU, run the script below:

nohup bash run_M3ED/run_M3ED_TAV_ours_glove.sh &

You should obtain the following results: Acc: 46.4, HL: 0.159, miF1: 49.1, maF1: 24.2.


Citation

Please consider citing the following paper if this repository is helpful to your research.

@inproceedings{zheng2024univa,
  title={A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition},
  author={Zheng, Wenjie and Yu, Jianfei and Xia, Rui},
  booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
  year={2024}
}

Please let me know if I can further improve this repository or if there is anything wrong in our work. You can ask questions via GitHub issues or contact me by email at wjzheng@njust.edu.cn. Thanks for your support!
