softMonkeys/Sim2Real
Sim2Real: Human Social Signal Detection


Introduction

Sim2Real refers to techniques for transferring knowledge learned in one environment (e.g., simulation) to another (e.g., the real world). While Sim2Real has been applied to fields such as object recognition and human pose estimation, it has not yet been applied to recognizing human social signals. We aim to deliver to our supervisor, Angelica Lim (angelica@sfu.ca), a machine learning model that accurately recognizes the following three facial expressions: Angry, Crying, and Happy.

Synthetic Data Generation


To generate the synthetic data:

 1. Open the UnitySimulationCode project with Unity version 2020.1.12f1.
 2. Once the project is loaded, you will be presented with the Unity Editor interface. From the top menu bar, open Window -> Package Manager.
 3. Click the + sign at the top-left corner of the Package Manager window and choose Add package from git URL....
 4. Enter com.unity.perception and click Add. After these steps, the code should run without issues.
 5. Open Hierarchy -> TutorialScene -> Simulation Scenario.
 6. Under Inspector -> ForegroundObjectPlacementRandomizer you will find the Prefabs list; this is where you add the human models used for generating data. The currently available models are:

  • Angry: black_female_angry_Pivot, black_male_angry_Pivot, old_caicasian_female_angry_Pivot, old_caicasian_male_angry_Pivot, young_asian_female_angry_Pivot, young_asian_male_angry_Pivot
  • Happy: black_female_happy_Pivot, black_male_happy_Pivot, old_caicasian_female_happy_Pivot, old_caicasian_male_happy_Pivot, young_asian_female_happy_Pivot, young_asian_male_happy_Pivot
  • Crying: black_female_happy_Pivot, black_male_happy_Pivot, old_caicasian_female_happy_Pivot, old_caicasian_male_happy_Pivot, young_asian_female_happy_Pivot, young_asian_male_happy_Pivot

If you do not have a Unity Simulation membership, you can still generate the data on your local machine. Simulation Scenario -> Inspector -> Fixed Length Scenario -> Constants -> Total Iterations has a default value of 50, meaning 50 images will be generated after you click the Play button at the top of the Unity editor. Change this value to match your requirements. The path to the generated results is printed in the Console.
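As an alternative to the Package Manager UI, the Perception package can be declared directly in the project's Packages/manifest.json. The snippet below is a sketch; `<version>` is a placeholder for whichever Perception release your Unity version supports, not a value taken from our project:

```json
{
  "dependencies": {
    "com.unity.perception": "<version>"
  }
}
```

Unity resolves this file on the next project load, which has the same effect as adding the package through the editor.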


Machine Learning Models

All of our trained models are saved at this link

Facial Expression Recognition with OpenFace and Neural Network

Go to MachineLearningCode -> baselineML and open the .ipynb files in Google Colab. Follow the documentation we have written in the code.
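As a rough sketch of how OpenFace output feeds the neural network, the snippet below parses action-unit intensity columns (the AU*_r columns of OpenFace's CSV output) into per-frame feature vectors. The sample CSV and the function name are illustrative, not taken from our notebooks:

```python
import csv
import io

def extract_au_features(csv_text):
    """Parse OpenFace-style CSV text into (column names, feature rows).

    Keeps only action-unit intensity columns (names like AU01_r),
    which is the feature set a downstream classifier would consume.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    au_cols = [c for c in reader.fieldnames
               if c.strip().startswith("AU") and c.strip().endswith("_r")]
    rows = [[float(row[c]) for c in au_cols] for row in reader]
    return au_cols, rows

# Made-up example data in OpenFace's column-naming convention.
sample = """frame,AU01_r,AU04_r,AU12_r
1,0.2,1.5,0.0
2,0.1,0.0,2.3
"""
cols, feats = extract_au_features(sample)
print(cols)   # ['AU01_r', 'AU04_r', 'AU12_r']
print(feats)  # [[0.2, 1.5, 0.0], [0.1, 0.0, 2.3]]
```

Each row of `feats` would then be paired with its expression label (Angry, Crying, or Happy) before training.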

End-to-End Model

Go to MachineLearningCode -> end_to_end_ML and open the .ipynb files in Google Colab. Follow the documentation we have written in the code.
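For illustration only: assuming the generated images are sorted into subfolders named after their expression class, a helper like the one below could index them into (path, label) pairs for an end-to-end trainer. The folder layout and function are hypothetical, not the layout our notebooks require:

```python
from pathlib import Path

# Hypothetical layout: one subfolder per expression class under a
# dataset root, e.g. root/Angry/*.png, root/Crying/*.png, root/Happy/*.png.
LABELS = ["Angry", "Crying", "Happy"]

def index_dataset(root):
    """Return sorted (image_path, class_index) pairs for every PNG under root."""
    pairs = []
    for idx, name in enumerate(LABELS):
        for img in sorted(Path(root, name).glob("*.png")):
            pairs.append((str(img), idx))
    return pairs
```

The integer labels (0 = Angry, 1 = Crying, 2 = Happy) would then be consumed by whatever training loop the notebook defines.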