This is my final graduation project, and it is only half of the whole completed project.
This project can detect emotions in videos or live streams; up to 7 emotions can be classified.
- First, collect as much image data as possible
- Normalize all images to the same size and convert them to grayscale
- Train the model on the preprocessed image data
- Predict the outcome on a video or stream
- Execute `imageScraper.py` and type in a query string to search on Google Chrome; multiple query strings are separated by spaces
- Execute `getUrls.js` to scan through the Google Images page and collect all of the ORIGINAL (not compressed) image URLs; the links are saved to a txt file
- Execute `imageDownloader.py` to download all images in parallel from the previous txt file; the output is stored in a directory (see the sketch after this list)
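
A minimal sketch of the parallel download step, assuming the URL list written by `getUrls.js` is a plain text file with one image URL per line; the file and directory names below are placeholders, not necessarily the ones used by `imageDownloader.py`.

```python
import os
import requests
from concurrent.futures import ThreadPoolExecutor

URL_FILE = "urls.txt"    # assumed name of the txt file produced by getUrls.js
OUT_DIR = "raw_images"   # assumed output directory

def download(index_url):
    index, url = index_url
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        path = os.path.join(OUT_DIR, f"img_{index:05d}.jpg")
        with open(path, "wb") as f:
            f.write(resp.content)
    except requests.RequestException:
        pass  # skip URLs that fail or time out

if __name__ == "__main__":
    os.makedirs(OUT_DIR, exist_ok=True)
    with open(URL_FILE) as f:
        urls = [line.strip() for line in f if line.strip()]
    # a thread pool downloads many images concurrently, mirroring the parallel download step
    with ThreadPoolExecutor(max_workers=16) as pool:
        pool.map(download, enumerate(urls))
```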
- Execute `faceCutter.py` to cut out the face regions in parallel and save them as grayscale images (a sketch follows this list)
- Execute `manualClassifier.py` to select a directory and classify its images into 7 different categories
- Execute `dataAugmentation.py` to rotate, flip, and shear images to enlarge our image dataset
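
A minimal sketch of the face-cropping step, assuming an OpenCV Haar cascade detector and a 48x48 target size; the real `faceCutter.py` may use a different detector, crop size, and parallel directory traversal.

```python
import os
import cv2

IN_DIR = "raw_images"   # assumed input directory from the download step
OUT_DIR = "faces"       # assumed output directory
SIZE = (48, 48)         # assumed size for the normalized grayscale crops

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

os.makedirs(OUT_DIR, exist_ok=True)
for name in os.listdir(IN_DIR):
    img = cv2.imread(os.path.join(IN_DIR, name))
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        # crop each detected face, resize, and save as grayscale
        face = cv2.resize(gray[y:y + h, x:x + w], SIZE)
        cv2.imwrite(os.path.join(OUT_DIR, f"{i}_{name}"), face)
```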
- Execute `emotionTrain.py` to set up a model and train it; the model's structure is in `emotionNetwork.py` (an illustrative sketch follows this list)
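
An illustrative 7-class CNN in Keras as a stand-in, not the actual architecture defined in `emotionNetwork.py`; the 48x48 grayscale input shape is an assumption based on the preprocessing step.

```python
from tensorflow.keras import layers, models

def build_emotion_model(input_shape=(48, 48, 1), num_classes=7):
    # small convolutional network ending in a 7-way softmax, one output per emotion
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```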
- Execute `realTimeEmotionDetection.py` for streams such as a webcam; this program skips some frames to increase performance (see the sketch after this list)
- Execute `videoEmotionDetection.py` for video files; this program is parallel and has a slightly more complicated structure that uses semaphores, threads, and locks
- Execute `estimate.py` to see the overall accuracy of the results
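
A minimal sketch of the frame-skipping stream loop, assuming a saved Keras model, a Haar cascade face detector, and 48x48 grayscale inputs; the model file name and label order are placeholders, and the real `realTimeEmotionDetection.py` may differ.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]  # assumed label order
SKIP = 3  # run inference on every 3rd frame only

model = load_model("emotion_model.h5")  # assumed model file name
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
frame_id, label = 0, ""
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_id % SKIP == 0:  # skip frames to increase performance
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        for (x, y, w, h) in faces:
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
            probs = model.predict(face.reshape(1, 48, 48, 1) / 255.0, verbose=0)
            label = EMOTIONS[int(np.argmax(probs))]
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    frame_id += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Skipping inference on most frames keeps the display loop responsive while the on-screen label still updates several times per second.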


