A comprehensive deep learning system for emotion recognition from facial expressions, with applications in fitness and wellness monitoring. This project uses EfficientNet architectures trained on facial expression datasets to identify seven core emotions: angry, disgust, fear, happy, neutral, sad, and surprise.
- High Accuracy Emotion Detection: Utilizes EfficientNet-B2 and EfficientNetV2-L models fine-tuned specifically for emotion recognition
- Real-time Processing: Supports webcam input for live emotion analysis (a rough sketch of this loop follows the list below)
- RESTful API: FastAPI-based web service for integrating with other applications
- Fitness Integration: Combines emotion data with vital signs (via Arduino) for comprehensive wellness monitoring
- Enhanced Visualizations: Detailed metrics with confusion matrices and accuracy reports
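
A rough sketch of the real-time flow mentioned above, assuming OpenCV's bundled Haar cascade for face detection and a hypothetical `predict_emotion` helper; the project's actual implementation lives in `utils/face_detector.py` and `predict.py`:

```python
import cv2

# Haar cascade shipped with OpenCV; utils/face_detector.py may use a different detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def predict_emotion(face_bgr):
    """Hypothetical stand-in for the model call performed in predict.py."""
    return "neutral"

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        label = predict_emotion(frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```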
```
EmotionDetectionCNN/
├── data/                   # Data handling and preprocessing
│   ├── preprocessing/      # Scripts for data preparation
│   └── FER2013/            # Emotion dataset (not included in repo)
│
├── models/                 # Model definitions
│   ├── model.py            # Original EfficientNet model
│   ├── model_v2.py         # Enhanced model with EfficientNetV2 support
│   └── trained_models/     # Pre-trained model weights (selected files only)
│
├── utils/                  # Utility functions
│   ├── data_loader.py      # Data loading and augmentation
│   ├── face_detector.py    # Face detection and preprocessing
│   └── visualization.py    # Result visualization and plotting
│
├── train.py                # Training script
├── train_v2.py             # Enhanced training with additional features
├── evaluate.py             # Model evaluation
├── predict.py              # Prediction on images/webcam
├── app.py                  # FastAPI web application
├── requirements.txt        # Dependencies
└── README.md               # This file
```
- Clone this repository:

  ```bash
  git clone https://github.com/madboy482/EmotionDetectionCNN.git
  cd EmotionDetectionCNN
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Download the pre-trained model (if not included in the repo) and place it in the `models/trained_models/` directory (a quick load check is sketched below).
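
As a quick sanity check that the downloaded weights are usable, you can try loading them with PyTorch. A minimal sketch, assuming the `gpu_final_model_full.pth` file referenced in the commands below:

```python
import torch

# Load the checkpoint on CPU; depending on how it was saved, this may be a
# full model object or a state_dict.
checkpoint = torch.load(
    "models/trained_models/gpu_final_model_full.pth", map_location="cpu"
)
print(type(checkpoint))  # e.g. a model class or an OrderedDict of tensors
```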
Train the model:

```bash
python train.py --train_dir="data/processed/train" --test_dir="data/processed/test" --batch_size=32 --num_epochs=25 --model_name="efficientnet-b2"
```

Evaluate a trained model:

```bash
python evaluate.py --model_path="models/trained_models/gpu_final_model_full.pth" --train_dir="data/processed/train" --test_dir="data/processed/test" --batch_size=32 --model_name="efficientnet-b2"
```
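
For reference, adapting an EfficientNet backbone to the seven emotion classes typically amounts to swapping the classifier head. A minimal sketch using torchvision's `efficientnet_b2`; the project's actual model definition lives in `models/model.py` and may differ:

```python
import torch.nn as nn
from torchvision import models

def build_emotion_model(num_classes: int = 7) -> nn.Module:
    # Start from an ImageNet-pretrained EfficientNet-B2 backbone.
    model = models.efficientnet_b2(weights=models.EfficientNet_B2_Weights.DEFAULT)
    # Replace the final linear layer so it predicts the 7 emotion classes.
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model

model = build_emotion_model()
```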
Run prediction on an image:

```bash
python predict.py --model_path="models/trained_models/gpu_final_model_full.pth" --image="path/to/image.jpg" --model_name="efficientnet-b2"
```
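
Under the hood, single-image prediction boils down to the usual transform → forward pass → softmax pipeline. A minimal sketch, assuming a loaded model (e.g. from the sketch above or the checkpoint file), a simple 224×224 resize, and an alphabetical label order; the project's `predict.py` and `utils/data_loader.py` may use different preprocessing:

```python
import torch
from PIL import Image
from torchvision import transforms

# Label order assumed to follow the alphabetical class folder layout.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

# Simple preprocessing; input size and normalization are assumptions here.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_image(model: torch.nn.Module, image_path: str) -> str:
    model.eval()
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    return EMOTIONS[int(probs.argmax())]
```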
Run prediction using the webcam:

```bash
python predict.py --model_path="models/trained_models/gpu_final_model_full.pth" --model_name="efficientnet-b2"
```

Start the API server:

```bash
python app.py
```

The API will be available at http://localhost:8000.
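
Once the server is running, a client can send an image for analysis. A hedged example using the `requests` library; the `/predict` endpoint path and the response fields shown here are assumptions, so check `app.py` for the actual routes and schema:

```python
import requests

# Hypothetical endpoint; consult app.py for the real route and payload format.
url = "http://localhost:8000/predict"

with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, files={"file": f})

response.raise_for_status()
print(response.json())  # e.g. {"emotion": "happy", "confidence": 0.93} (illustrative)
```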
The EfficientNet-B2 model achieves over 70% accuracy on the test set for the 7-class emotion recognition task.
This project is licensed under the MIT License - see the LICENSE file for details.
- FER-2013 dataset
- EfficientNet architecture developers
- PyTorch community

