Facial expression recognition into four categories. A combination of the 68 standard facial landmark features and a derived feature vector is used as input. An Artificial Neural Network (ANN) and an SVM were the primary models used in the experiments.
- DataSet1: the Cohn-Kanade AU-Coded Expression Database. It includes 486 sequences from 97 posers; each sequence begins with a neutral expression and proceeds to a peak expression.
- DataSet2: the Japanese Female Facial Expression (JAFFE) database. It contains 213 images of 7 facial expressions posed by 10 Japanese female models.
OpenCV, dlib, scikit-learn, TensorFlow, Python
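The landmark-plus-SVM pipeline above can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the derived feature vector is built from normalized pairwise landmark distances (one common choice; the project's exact derivation may differ), and it uses synthetic landmark data in place of real faces so the sketch is self-contained.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def derive_features(landmarks):
    """Turn 68 (x, y) landmark points into a feature vector of
    normalized pairwise distances (an illustrative derivation)."""
    pts = landmarks.reshape(68, 2)
    # All pairwise Euclidean distances between landmarks.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(68, k=1)          # upper triangle: 2278 pairs
    vec = dists[iu]
    return vec / (vec.max() + 1e-8)        # normalize for scale invariance

# Synthetic stand-in data: 4 expression classes, 50 samples each,
# each sample a jittered copy of a class-specific landmark layout.
X, y = [], []
for label in range(4):
    center = rng.normal(size=(68, 2)) * 10 + label * 5
    for _ in range(50):
        X.append(derive_features(center + rng.normal(size=(68, 2))))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # the SVM classifier from the stack
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the landmark array would come from dlib's 68-point shape predictor applied to a face detected with OpenCV, rather than being generated synthetically.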
- We are very grateful to Dr. Subrahmanyam Gorthi, our academic advisor for the project.
- A special thanks to Romil Pawar for being a test subject for the project.
- Lucey, Patrick, et al. "The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression." 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2010.
- Lyons, Michael J., Shigeru Akamatsu, Miyuki Kamachi, and Jiro Gyoba. "Coding Facial Expressions with Gabor Wavelets." 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 200-205, 1998.