Author: Yi Yang
Contact: yangyiycc@gmail.com
This is the Python implementation of the social attention model for sentiment analysis described in
Yi Yang and Jacob Eisenstein, "Overcoming Language Variation in Sentiment Analysis with Social Attention", TACL 2017.
The code depends on:
- Theano
- Keras
- Optional: CUDA Toolkit, for running on a GPU.
In order to reproduce the results reported in the paper, you will need:
- The SemEval 2015 Twitter sentiment analysis datasets, as described in the paper.
- The data is available in the data/txt folder. Unfortunately, the text content is not included due to Twitter's policy, so you need to replace each "content" placeholder with the real tweet.
- You can preprocess the raw tweets with tweet = normalizeTextForSentiment(tokenizeRawTweetText(tweet), True); both functions can be found in twokenize.py.
- The pretrained word embeddings (left-click the link and choose "Save link as...", rather than right-clicking). Save the file in data/word_embeddings.
- The pretrained author embeddings, which are available in data/author_embeddings.
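The preprocessing call above can be sketched as the loop below. Note that the real tokenizeRawTweetText and normalizeTextForSentiment live in twokenize.py in this repo; the stand-ins here are deliberately simplified (whitespace tokenization plus URL/@-mention masking) and only illustrate the call pattern, not the actual normalization rules.

```python
import re

def tokenizeRawTweetText(tweet):
    # Simplified stand-in: whitespace tokenization. The real function in
    # twokenize.py handles emoticons, URLs, and hashtags properly.
    return tweet.split()

def normalizeTextForSentiment(tokens, lowercase):
    # Simplified stand-in: mask URLs and @-mentions, optionally lowercase.
    # The masking tokens below are illustrative, not the repo's actual ones.
    out = []
    for tok in tokens:
        if re.match(r"https?://", tok):
            tok = "<url>"
        elif tok.startswith("@"):
            tok = "<user>"
        out.append(tok.lower() if lowercase else tok)
    return " ".join(out)

raw = "@jacob Check http://example.com GREAT results!"
tweet = normalizeTextForSentiment(tokenizeRawTweetText(raw), True)
print(tweet)  # <user> check <url> great results!
```

In the actual pipeline you would import the two functions from twokenize.py instead of defining them, and apply the call to every tweet you substitute into the data/txt files.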
Great, now you are ready to reproduce the results.
- Prepare the data and generate the required data file semeval.pkl (available here):
python process_data.py data/word_embeddings/struc_skip_600.txt \
    data/semeval.pkl \
    data/txt/train_2013.txt \
    data/txt/dev_2013.txt \
    data/txt/test_2013.txt \
    data/txt/test_2014.txt \
    data/txt/test_2015.txt
- Reproduce the CNN baseline results:
python cnn_baseline.py data/semeval.pkl
- Reproduce the mixture-of-experts baseline results:
python mixture_expert.py data/semeval.pkl
- Reproduce the concatenation baseline results:
python concat_baseline.py data/semeval.pkl data/author_embeddings/retweet.emb
- Reproduce the SOCIAL ATTENTION results:
python social_attention.py data/semeval.pkl data/author_embeddings/retweet.emb
- Run with the pre-trained model (Test13 F1: 71.7, Test14 F1: 75.6, Test15 F1: 66.8, Average: 71.4):
python run_social_attention.py test data/semeval.pkl data/author_embeddings/retweet.emb model/social_attention_model.h5