# Paper List for Multimodal Sentiment Analysis
- Related Repos
- Related Courses
- Related Datasets
- Related Reviews
- Related Conferences and Journals
- Multimodal Sentiment Analysis
- Multimodal Emotion Recognition in Conversations
## Related Repos

- AWESOME-FER
- awesome-multimodal-ml
- awesome-multimodal-research
- CMU-Multimodal-SDK
- Multimodal-Emotion-Recognition
- Multimodal-Sentiment-Analysis
- awesome-sentiment-analysis
- awesome-nlp-sentiment-analysis
## Related Datasets

- (2020) CMU-MOSEAS [access]
- (2020) CH-SIMS [paper, code]
- (2019) UR-FUNNY [paper]
- (2019) MELD [paper, code]
- (2018) MOSEI [paper, code] (see the metrics sketch after this list)
- (2016) MOSI [paper]
- (2013) MOUD [paper]
- (2013) ICT-MMMO [paper]
- (2011) YouTube [paper]
- (2008) IEMOCAP [paper]
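Most papers on the MOSI and MOSEI benchmarks above report the same metric suite over continuous sentiment labels in [-3, 3]: 7-class accuracy, binary accuracy, weighted F1, MAE, and Pearson correlation. A minimal sketch of that suite, assuming numpy arrays of gold and predicted scores; note that papers differ on whether neutral (zero) labels are dropped before the binary scores, and this version drops them:

```python
import numpy as np
from sklearn.metrics import f1_score

def mosi_metrics(y_true, y_pred):
    """MOSI/MOSEI-style evaluation over continuous labels in [-3, 3]."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae = np.mean(np.abs(y_true - y_pred))            # mean absolute error
    corr = np.corrcoef(y_true, y_pred)[0, 1]          # Pearson correlation
    # 7-class accuracy: round scores into the integer bins {-3, ..., 3}
    acc7 = np.mean(np.round(np.clip(y_pred, -3, 3)) ==
                   np.round(np.clip(y_true, -3, 3)))
    # binary accuracy / F1: positive vs. negative, neutral (0) excluded
    nonzero = y_true != 0
    bin_true, bin_pred = y_true[nonzero] > 0, y_pred[nonzero] > 0
    acc2 = np.mean(bin_true == bin_pred)
    f1 = f1_score(bin_true, bin_pred, average='weighted')
    return {'MAE': mae, 'Corr': corr, 'Acc-7': acc7, 'Acc-2': acc2, 'F1': f1}
```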
## Related Reviews

- (2019) Trends in Integration of Vision and Language Research: A Survey of Tasks, Datasets, and Methods
- (2019) Multimodal Machine Learning: A Survey and Taxonomy (:bulb::bulb::bulb:)
- (2019) Deep Multimodal Representation Learning: A Survey
- (2019) Multimodal Intelligence: Representation Learning, Information Fusion, and Applications
- (2018) Multimodal Sentiment Analysis: Addressing Key Issues and Setting up Baselines
- (2017) A review of affective computing: From unimodal analysis to multimodal fusion
- (2017) A survey of multimodal sentiment analysis
- (2016) Chinese Textual Sentiment Analysis: Datasets, Resources and Tools
- (2013) Representation Learning: A Review and New Perspectives
- (2005) Multimodal approaches for emotion recognition: a survey
ACL & EMNLP & NAACL & CoLing, AAAI, IJCAI, ICLR, ICML, NeurIPS, ICMI, ACM-MM
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE Transactions on Multimedia, IEEE Transactions on Affective Computing, Knowledge-based System, Information Fusion, IEEE Access, IEEE Intelligent Systems,
## Multimodal Sentiment Analysis

- (2020) Integrating Multimodal Information in Large Pretrained Transformers [code]
- (2020) A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks
- (2020) Sentiment and Emotion help Sarcasm? A Multi-task Learning Framework for Multi-Modal Sarcasm, Sentiment and Emotion Analysis [code]
- (2020) CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotations of Modality [code]
- (2020 Workshop) A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis [code]
- (2020 Workshop) Low Rank Fusion based Transformers for Multimodal Sequences
- (2019) Modality-based Factorization for Multimodal Fusion
- (2019) Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing
- (2019) Multimodal and Multi-view Models for Emotion Recognition [code]
- (2019) Multimodal Transformer for Unaligned Multimodal Language Sequences [code]
- (2019) Contextual Inter-modal Attention for Multi-modal Sentiment Analysis [code]
- (2019) Towards Multimodal Sarcasm Detection (An Obviously Perfect Paper) [code]
- (2018) Getting the subtext without the text: Scalable multimodal sentiment classification from visual and acoustic modalities
- (2018) Recognizing Emotions in Video Using Multimodal DNN Feature Fusion [code]
- (2018) Efficient Low-rank Multimodal Fusion with Modality-Specific Factors [code] (see the fusion sketch at the end of this list)
- (2018) Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment
- (2018) Multimodal Relational Tensor Network for Sentiment and Emotion Classification
- (2018) Sentiment Analysis using Imperfect Views from Spoken Language and Acoustic Modalities
- (2018) Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis
- (2018) Convolutional Attention Networks for Multimodal Emotion Recognition from Speech and Text Data
- (2018) DNN Multimodal Fusion Techniques for Predicting Video Sentiment
- (2017) Context-Dependent Sentiment Analysis in User-Generated Videos [code]
- (2017) Multimodal Machine Learning: Integrating Language, Vision and Speech
- (2020) Dual Low-Rank Multimodal Fusion
- (2020) MAF: Multimodal Alignment Framework for Weakly-Supervised Phrase Grounding
- (2020) Does my multimodal model learn cross-modal interactions? It's harder to tell than you might think! [code]
- (2020) Multistage Fusion with Forget Gate for Multimodal Summarization in Open-Domain Videos [code]
- (2020) Multi-modal Multi-label Emotion Detection with Modality and Label Dependence [code]
- (2019) Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis [code]
- (2018) Associative Multichannel Autoencoder for Multimodal Word Representation [code]
- (2018) Contextual Inter-modal Attention for Multi-modal Sentiment Analysis [code]
- (2018) Importance of Self-Attention for Sentiment Analysis
- (2018) Improving Multi-label Emotion Classification via Sentiment Classification with Dual Attention Transfer Network
- (2017) Tensor Fusion Network for Multimodal Sentiment Analysis [code]
- (2015) Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis
- (2019) Strong and Simple Baselines for Multimodal Utterance Embeddings [code]
- (2019) Quantifiers in a Multimodal World: Hallucinating Vision with Language and Sound
- (2019) Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis [code]
- (2018) Multimodal Emoji Prediction
- (2018) Sentiment Analysis: It's Complicated! [code]
- (2016) Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning
- (2018) Hybrid Attention based Multimodal Network for Spoken Language Classification
- (2018) Learning Emotion-enriched Word Representations
- (2018) Emotion Detection and Classification in a Multigenre Corpus with Joint Multi-Task Deep Learning
- (2016) Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs
- (2019) VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis [code]
- (2019) Multi-Interactive Memory Network for Aspect Based Multimodal Sentiment Analysis [code]
- (2019) An Efficient Approach to Informative Feature Extraction from Multimodal Data
- (2019) Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors [code]
- (2019) Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
- (2019) Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion
- (2018) Learning Multimodal Word Representation via Dynamic Fusion Methods
- (2018) Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion
- (2018) Memory Fusion Network for Multi-View Sequential Learning [code]
- (2018) Inferring Emotion from Conversational Voice Data: A Semi-Supervised Multi-Path Generative Neural Network Approach
- (2017) Multimodal Fusion of EEG and Musical Features in Music-Emotion Recognition
- (2016) Personalized Microblog Sentiment Classification via Multi-Task Learning
- (2019) Success Prediction on Crowdfunding with Multimodal Deep Learning
- (2019) AttnSense: Multi-level Attention Mechanism For Multimodal Human Activity Recognition [code]
- (2019) DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis [code]
- (2019) Adapting BERT for Target-Oriented Multimodal Sentiment Classification [code]
- (2019) Multi-view Clustering via Late Fusion Alignment Maximization
- (2019) Towards Discriminative Representation Learning for Speech Emotion Recognition [code]
- (2020) Factorized Multimodal Transformer for Multimodal Sequential Learning [code]
- (2020) OmniNet: A unified architecture for multi-modal multi-task learning [code, Review]
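Two of the fusion papers above are compact enough to sketch: Tensor Fusion Network (2017) fuses modalities via the outer product of 1-augmented unimodal embeddings, and Efficient Low-rank Multimodal Fusion (2018) avoids materializing that product by factorizing the fusion weights into per-modality low-rank factors. A minimal PyTorch sketch of the low-rank variant, with hypothetical class and dimension names:

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Sketch of LMF-style fusion: the weight tensor of a full tensor
    fusion (outer product of 1-augmented unimodal embeddings) is
    factorized into `rank` modality-specific linear factors, so the
    exponentially large outer product is never materialized."""

    def __init__(self, in_dims, out_dim, rank=4):
        super().__init__()
        # one bank of `rank` linear maps per modality; the +1 input dim
        # is for the constant-1 augmentation that preserves unimodal terms
        self.factors = nn.ModuleList(
            nn.Linear(d + 1, rank * out_dim, bias=False) for d in in_dims
        )
        self.rank, self.out_dim = rank, out_dim

    def forward(self, *zs):
        # zs: one (batch, d_m) embedding per modality
        fused = None
        for z, factor in zip(zs, self.factors):
            ones = torch.ones(z.size(0), 1, device=z.device)
            f = factor(torch.cat([z, ones], dim=-1))    # (B, rank * out_dim)
            f = f.view(-1, self.rank, self.out_dim)
            fused = f if fused is None else fused * f   # elementwise product
        return fused.sum(dim=1)                         # sum over rank -> (B, out_dim)

# usage (feature sizes here are only typical text/audio/visual assumptions):
# fusion = LowRankFusion(in_dims=[300, 74, 35], out_dim=128)
# h = fusion(z_text, z_audio, z_visual)  # -> (batch, 128)
```

With `rank` factors per modality, parameters grow linearly in the number of modalities rather than exponentially, which is the central point of the low-rank formulation.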
## Multimodal Emotion Recognition in Conversations

- (2019) DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation [code]
- (2018) ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection [code]