This is a Python implementation for extracting arousal and valence values from images, as presented in the work: Building Emotional Machines: Recognizing Image Emotions through Deep Neural Networks by Hye-Rin Kim, Yeong-Seok Kim, Seon Joo Kim, and In-Kwon Lee.
Requirements:
- OpenCV (make sure it is installed for both Python and C++)
- TensorFlow
- scikit-learn
 
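If the Python-side dependencies are not already installed, they can usually be obtained with pip; the package names below are the standard PyPI names and are an assumption, not taken from a requirements file in this repository:

	pip install opencv-python tensorflow scikit-learn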
To avoid loading model weights again and again, we first pre-compute certain features for all images; these are later used to build each image's feature vector.
GIST Feature Extraction:
- Run the code segment:

	cd GIST
	make IMAGEPATH=<path_to_directory_containing_all_images>
	make clean

- This will create a file named gists.txt in the main folder, containing the GIST descriptor of each image, one per line, in the format:

	<FILENAME>:<GIST_Descriptor>
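To inspect these descriptors from Python later on, a minimal loader might look like this; splitting on the first ':' follows the format above, while the assumption that the descriptor values are whitespace-separated floats is mine:

	import numpy as np

	gists = {}
	with open("gists.txt") as f:
	    for line in f:
	        line = line.strip()
	        if not line:
	            continue
	        name, descriptor = line.split(":", 1)
	        # Assumes the GIST descriptor is written as whitespace-separated numbers.
	        gists[name] = np.array([float(v) for v in descriptor.split()])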
VGG Object Feature Extraction:
- For the following command, imageFile is a file containing the names of images and their A/V values separated by a comma, and imagesDir is the folder containing all training images. Run the code segment:

	cd VGG
	python vgg16.py <imageFile> <imagesDir> <VGG_weights>

- This will create a pickle file named vgg.pickle in the main folder, holding a Python dictionary that maps image names to their VGG object features.
- More information about this VGG descriptor can be found in the VGG-16 paper by Simonyan and Zisserman.
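A minimal sketch of how vgg.pickle can be read back, together with a purely hypothetical imageFile line for context (the image name and A/V values below are made up):

	import pickle

	# Hypothetical imageFile line: "img_0001.jpg,6.2,4.8"  (image name plus its A/V values)
	with open("vgg.pickle", "rb") as f:
	    vgg_features = pickle.load(f)   # dictionary: image name -> VGG object feature vector
	print(len(vgg_features))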
 
Semantic Features Extraction:
- Run the file test.py in the semantic_features folder, providing 4 arguments: test_img_file, test_img_directory, weights_encoder and weights_decoder (see the example invocation below).
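A command-line form of the step above, keeping the same argument order (the angle-bracket names are placeholders for your own files):

	cd semantic_features
	python test.py <test_img_file> <test_img_directory> <weights_encoder> <weights_decoder>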
 
 
Now, using these extracted features together with additional LBP features, we construct a feature vector for each training image; a rough sketch of the idea is given below.
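This is only an illustration of the concatenation idea, not the actual code in train.py: the combination order, the LBP parameters, and the use of scikit-image's local_binary_pattern here are all assumptions.

	import cv2
	import numpy as np
	from skimage.feature import local_binary_pattern

	def lbp_histogram(image_path, points=8, radius=1):
	    # Uniform LBP histogram of the grayscale image (parameters are illustrative).
	    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
	    lbp = local_binary_pattern(gray, points, radius, method="uniform")
	    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
	    return hist

	def build_feature_vector(name, image_path, gists, vgg_features, semantic_features):
	    # Assumes the pre-computed features are simply concatenated; see train.py
	    # and params.json for how the repository actually combines them.
	    return np.concatenate([
	        gists[name],                # GIST descriptor parsed from gists.txt
	        vgg_features[name],         # VGG object features from vgg.pickle
	        semantic_features[name],    # semantic features from semantic_features/test.py
	        lbp_histogram(image_path),  # LBP texture histogram
	    ])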
- Set the training parameters in the file params.json, as required.
- Run the training file:

	python train.py <imageFile> <imagesDir>
- This will store the model in a directory named modelData, located in the parent directory of the current folder. The folder name contains a timestamp that is later used to identify the stored model during prediction.
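To check which timestamps are available, something along these lines can be used; the assumption that modelData sits in the parent directory and holds one timestamped folder per trained model follows from the description above:

	import os

	# modelData is created in the parent directory of the current folder (see above).
	model_root = os.path.join("..", "modelData")
	# Each entry name carries the timestamp passed to predict.py (assumption).
	print(sorted(os.listdir(model_root)))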
 
- Again, go through the entire process of feature extraction for the images on which prediction needs to be made.
- Set the prediction parameters in the file test_params.json, as required.
- Run the prediction code:

	python predict.py <testImageFile> <testImgDir> <timestamp>
where testImageFile is a file containing the names of all images to predict on, testImgDir is the directory containing those images, and timestamp is the timestamp of the model to be used for prediction.
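If a test image list still needs to be created, a small helper like the following can generate one; the one-name-per-line format and the directory/file names here are assumptions, so check predict.py for the exact format it expects:

	import os

	test_dir = "./test_images"                # placeholder directory of images to predict on
	with open("test_images.txt", "w") as f:   # placeholder testImageFile name
	    for name in sorted(os.listdir(test_dir)):
	        if name.lower().endswith((".jpg", ".jpeg", ".png")):
	            f.write(name + "\n")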