This project implements a face recognition-based attendance system using several machine learning models. The entire pipeline leverages SRCNN, DeblurGANv2, and LIME models for image processing, along with a custom-trained VGGFace model for facial recognition. Below is a breakdown of the files, usage, and instructions for setting up the project.
- **SRCNN.py**
  - Purpose: Defines the SRCNN model used for image super-resolution.
  - Details: Loads the pre-trained weights to enhance image quality.
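For reference, the original SRCNN is a three-layer CNN with 9-1-5 kernels. Below is a minimal Keras sketch of that standard configuration; the exact layer sizes and the weight filename used in this repository are assumptions.

```python
# Minimal SRCNN-style network sketch (standard 9-1-5 configuration).
# The repo's exact architecture and weight filename are assumptions.
from tensorflow.keras import layers, models

def build_srcnn(channels=3):
    inputs = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)  # patch extraction
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)       # non-linear mapping
    outputs = layers.Conv2D(channels, 5, padding="same")(x)              # reconstruction
    return models.Model(inputs, outputs, name="srcnn")

model = build_srcnn()
# model.load_weights("srcnn_weights.h5")  # hypothetical filename; use the downloaded weights
```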
- **DeblurGANv2.py**
  - Purpose: Implements the DeblurGANv2 model to remove blur from images.
  - Details: Loads the necessary pre-trained weights for deblurring.
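The full DeblurGANv2 generator is too large to reproduce here, but a hedged sketch of how a pre-trained generator of this kind is typically applied (inputs scaled to [-1, 1]) might look like the following; the weight filename and normalization are assumptions, not this repo's exact setup.

```python
# Sketch of applying a DeblurGAN-style generator; the filename and the
# [-1, 1] input scaling are assumptions.
import numpy as np
import tensorflow as tf

def deblur(image_bgr, generator):
    x = image_bgr.astype("float32") / 127.5 - 1.0               # scale to [-1, 1]
    y = generator.predict(np.expand_dims(x, axis=0), verbose=0)[0]
    return np.clip((y + 1.0) * 127.5, 0, 255).astype("uint8")

# generator = tf.keras.models.load_model("deblurganv2_generator.h5")  # hypothetical
# sharp = deblur(blurry_bgr, generator)
```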
- **layer_utils.py**
  - Purpose: Contains essential utility functions and building blocks.
  - Details: Includes implementations of instance normalization and the custom layer definitions required by the models.
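Instance normalization normalizes each sample over its spatial dimensions, per channel. A minimal Keras sketch is shown below, assuming the repo's utility follows the usual formulation.

```python
# Minimal instance-normalization layer sketch (per-sample, per-channel).
import tensorflow as tf
from tensorflow.keras import layers

class InstanceNormalization(layers.Layer):
    def __init__(self, epsilon=1e-5, **kwargs):
        super().__init__(**kwargs)
        self.epsilon = epsilon

    def build(self, input_shape):
        channels = input_shape[-1]
        self.gamma = self.add_weight(name="gamma", shape=(channels,), initializer="ones")
        self.beta = self.add_weight(name="beta", shape=(channels,), initializer="zeros")

    def call(self, x):
        # Normalize each sample over its spatial dimensions, per channel.
        mean, var = tf.nn.moments(x, axes=[1, 2], keepdims=True)
        return self.gamma * (x - mean) / tf.sqrt(var + self.epsilon) + self.beta
```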
- **LIME.py**
  - Purpose: Implements the LIME model for low-light enhancement, improving poorly lit images.
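LIME-style enhancement estimates a per-pixel illumination map and divides it out of the image. The rough sketch below refines the map with only a gamma curve (the paper's structure-aware optimization is omitted); parameter values are assumptions.

```python
# Rough LIME-style light enhancement sketch; gamma/eps values are assumptions.
import numpy as np

def enhance_lime(image_bgr, gamma=0.8, eps=1e-3):
    """Brighten a dark BGR image by dividing out an estimated illumination map."""
    img = image_bgr.astype("float32") / 255.0
    t = np.max(img, axis=2, keepdims=True)      # initial illumination map T(x)
    t = np.clip(t, eps, 1.0) ** gamma           # crude refinement via a gamma curve
    return (np.clip(img / t, 0.0, 1.0) * 255).astype("uint8")

# Usage (hypothetical path): enhanced = enhance_lime(cv2.imread("dark_face.jpg"))
```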
- **pipeline.py**
  - Purpose: Cascades the SRCNN, DeblurGANv2, and LIME models to form the complete image processing pipeline.
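Conceptually, the cascade is just function composition over an image. A minimal sketch, assuming each stage is exposed as a callable; the names in the commented usage are hypothetical wrappers, not functions defined in this repo.

```python
def run_pipeline(image, steps):
    """Apply each enhancement step in order; each step maps an image array to an image array."""
    for step in steps:
        image = step(image)
    return image

# Hypothetical usage, composing the stages in the order described above:
# import cv2
# result = run_pipeline(cv2.imread("student.jpg"),
#                       [super_resolve, deblur_image, enhance_lime])
# where super_resolve and deblur_image wrap the SRCNN and DeblurGANv2 models.
```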
- **crop.py**
  - Purpose: Uses OpenCV's Haar Cascade classifier to crop faces from images.
  - Usage: Processes raw images from the Dataset folder, extracts faces, and saves them to the Headsets folder.
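A hedged sketch of this step, using OpenCV's bundled frontal-face Haar Cascade and mirroring the `Dataset/` to `Headsets/` layout described in the dataset structure below; detection parameters and output naming are assumptions.

```python
# Sketch of face cropping with OpenCV's Haar Cascade; parameters are assumptions.
import os
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(src_root="Dataset", dst_root="Headsets"):
    for student in os.listdir(src_root):
        src_dir = os.path.join(src_root, student, "images")
        dst_dir = os.path.join(dst_root, student, "images")
        if not os.path.isdir(src_dir):
            continue
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            image = cv2.imread(os.path.join(src_dir, name))
            if image is None:
                continue
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for i, (x, y, w, h) in enumerate(faces):
                cv2.imwrite(os.path.join(dst_dir, f"{i}_{name}"), image[y:y+h, x:x+w])

if __name__ == "__main__":
    crop_faces()
```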
- **test.py**
  - Purpose: Script to test the functionality of the trained VGGFace model.
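A quick sanity check could look like the sketch below, assuming the notebook saves the classifier as `vggface_model.h5` (hypothetical name) and that it expects 224x224 RGB inputs scaled to [0, 1].

```python
# Hedged single-image test; the model filename, input size/scaling and the
# sample path are assumptions.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("vggface_model.h5")       # hypothetical name

image = cv2.imread("Headsets/sample_student/images/0.jpg")   # hypothetical path
face = cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), (224, 224))
probs = model.predict(np.expand_dims(face / 255.0, axis=0), verbose=0)[0]
print("Predicted class index:", int(np.argmax(probs)), "confidence:", float(probs.max()))
```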
- **VGGface_VGG16.ipynb**
  - Purpose: Jupyter notebook used to create the dataset and train the VGGFace model (a condensed training sketch follows the dataset structure below).
- Dataset Structure:
  - Place images in the format: `Dataset/{student_name}/images`
  - `crop.py` extracts faces from these images and stores them in: `Headsets/{student_name}/images`
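The notebook itself is not reproduced here; the sketch below shows a condensed VGG16 transfer-learning setup trained on the cropped faces in `Headsets/`. The hyperparameters, the [0, 1] rescaling, and the saved model filename are all assumptions.

```python
# Condensed VGG16 transfer-learning sketch; hyperparameters and filenames
# are assumptions, not the notebook's exact settings.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Class names are inferred from the student folders directly under Headsets/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "Headsets", image_size=(224, 224), batch_size=16)
num_classes = len(train_ds.class_names)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional backbone

model = models.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 255),      # match the [0, 1] scaling assumed at inference
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("vggface_model.h5")  # hypothetical filename
```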
- **Attendance.py**
  - Purpose: Main script for recognizing faces using the trained VGGFace model and logging attendance.
  - Details: Uses the webcam to capture real-time images and recognizes students with the trained VGGFace model.
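A hedged sketch of what such a loop can look like: detect a face in each webcam frame, classify it, and append the first sighting of each student to a CSV log. The model filename, the structure of `Class.json`, and the CSV output are assumptions rather than the repo's exact behavior.

```python
# Sketch of a webcam attendance loop; filenames, Class.json keys and the
# CSV log format are assumptions.
import csv
import json
from datetime import datetime

import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("vggface_model.h5")        # hypothetical name
# Assumes Class.json is keyed by the same student names as the Dataset/ folders.
class_names = sorted(json.load(open("Class.json")).keys())
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
seen = set()
with open("attendance.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
            probs = model.predict(np.expand_dims(face / 255.0, axis=0), verbose=0)[0]
            name = class_names[int(np.argmax(probs))]
            if name not in seen:
                seen.add(name)
                writer.writerow([name, datetime.now().isoformat()])
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, name, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        # Displaying frames requires a GUI build of OpenCV (opencv-python, not -headless).
        cv2.imshow("Attendance", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```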
- **Class.json**
  - Purpose: Holds the details of each student in the class as a JSON object, which is referenced during attendance logging.
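The exact schema is not documented here; a purely hypothetical example of what a per-student JSON object might contain:

```python
# Illustrative only; field names and values are invented, not the repo's schema.
import json

hypothetical_class = {
    "alice_smith": {"roll_no": "01", "email": "alice@example.com"},
    "bob_jones": {"roll_no": "02", "email": "bob@example.com"},
}
print(json.dumps(hypothetical_class, indent=2))
```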
- Python Version: 3.11.9
- TensorFlow Version: 2.17
- Ensure the following folders exist and contain the appropriate files:
  - `Dataset/{student_name}/images` (Raw Images)
  - `Headsets/{student_name}/images` (Extracted Faces)
- **Install Dependencies**
  Install the required Python libraries using: `pip install tensorflow opencv-python-headless numpy matplotlib`
- **Download Weights**
  Download the pre-trained weights for all models from the following link: Google Drive - Weights.
  Place the downloaded weights in the same directory as the code files.
- **Train the VGGFace Model**
  Open `VGGface_VGG16.ipynb` and run the notebook to train the VGGFace model on the dataset.
  Ensure the dataset follows the structure described above.
- **Extract Faces for Training**
  Run the `crop.py` script to extract faces from the dataset: `python crop.py`
- **Run the Attendance System**
  Use `Attendance.py` to launch the attendance logging system: `python Attendance.py`
- **Testing the VGGFace Model**
  Use `test.py` to verify the VGGFace model's performance: `python test.py`
- **Image Enhancement Pipeline**
  Run `pipeline.py` to process an image through the SRCNN, DeblurGANv2, and LIME models: `python pipeline.py`
```
/project-root
│
├── Dataset/
│   └── {student_name}/images/   (Raw student images)
├── Headsets/
│   └── {student_name}/images/   (Cropped faces)
├── SRCNN.py
├── DeblurGANv2.py
├── layer_utils.py
├── LIME.py
├── pipeline.py
├── crop.py
├── test.py
├── VGGface_VGG16.ipynb
├── Attendance.py
└── Class.json
```
- Ensure all the pre-trained weights are in the correct folder to avoid loading errors.
- Verify that your webcam is properly connected and recognized by the system before running `Attendance.py`.
- Adjust the model parameters if needed during training in `VGGface_VGG16.ipynb`.