This project focuses on detecting American Sign Language (ASL) hand gestures using deep learning. It includes data collection with OpenCV and MediaPipe, training a MobileNetV2-based image classification model, and evaluating its performance on a custom dataset of ASL signs (0-9 and A-Z).
🛠️ Installation
Clone the repository:

```bash
git clone https://github.com/therealsheero/ASL-Detection.git
cd ASL-Detection
```
📦 Requirements
Install the required packages using:

```bash
pip install -r requirements.txt
```
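If you are setting up the environment by hand, the core dependencies implied by this README are roughly the following; versions are unpinned here, and `requirements.txt` in the repo remains the authoritative list:

```text
opencv-python   # webcam capture and image I/O
mediapipe       # hand detection / landmarks for auto-cropping
torch           # model training and inference
torchvision     # MobileNetV2 and image transforms
numpy           # array handling
cvzone          # optional, for the webcam interpreter mentioned below
```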
🖐️ Data Collection
Capture and auto-crop hand images with:

```bash
python Collect_Data.py
```

Controls:
- `S` - Save frame
- `Q` - Quit

Features:
- Auto-cropping to the hand region

A minimal sketch of the collection loop follows this list.
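The sketch below assumes MediaPipe Hands for detection and a `data/<label>/` folder layout; the label name, padding, and filenames are illustrative rather than taken from `Collect_Data.py`:

```python
# Sketch of a collection loop: detect a hand, crop it, save on keypress.
import os
import cv2
import mediapipe as mp

LABEL = "A"                                   # class being collected (hypothetical)
SAVE_DIR = os.path.join("data", LABEL)
os.makedirs(SAVE_DIR, exist_ok=True)

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
count = 0
PAD = 20                                      # padding around the detected hand, in pixels

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    crop = None
    if results.multi_hand_landmarks:
        # Bounding box from the 21 hand landmarks, padded and clamped to the frame.
        lm = results.multi_hand_landmarks[0].landmark
        xs = [int(p.x * w) for p in lm]
        ys = [int(p.y * h) for p in lm]
        x1, y1 = max(min(xs) - PAD, 0), max(min(ys) - PAD, 0)
        x2, y2 = min(max(xs) + PAD, w), min(max(ys) + PAD, h)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        crop = frame[y1:y2, x1:x2]
    cv2.imshow("Collect", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s") and crop is not None and crop.size:
        cv2.imwrite(os.path.join(SAVE_DIR, f"{LABEL}_{count}.jpg"), crop)
        count += 1
    elif key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```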
🧠 Training
Training is done in the `train_pth.ipynb` notebook:

```bash
jupyter notebook train_pth.ipynb
```

Settings used in the notebook:
- Data directory: `data`
- Model: MobileNetV2
- Epochs: 20
- Output checkpoint: `asl_mobilenetv2_best.pth`

A minimal fine-tuning sketch follows this list.
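For orientation, here is a minimal MobileNetV2 fine-tuning sketch matching the settings above. The transforms, optimizer, and batch size are assumptions, not the notebook's exact code:

```python
# Fine-tune a pretrained MobileNetV2 on the collected ASL images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 36  # digits 0-9 and letters A-Z
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([
    transforms.Resize((224, 224)),            # MobileNetV2's expected input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)  # replace the head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(20):
    model.train()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        correct += (model(images).argmax(1) == labels).sum().item()
        total += labels.size(0)
    # (validation split omitted for brevity)
    print(f"Epoch {epoch + 1}/20: Train Acc: {correct / total:.4f}")

torch.save(model.state_dict(), "asl_mobilenetv2_best.pth")
```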
Training results:

```
Epoch 20/20: Train Acc: 1.0000, Val Acc: 0.9722
```

Test accuracy: 98.3%
🧪 Testing
Run live webcam detection with the trained model:

```bash
python test.py
```
| Metric   | Value |
|----------|-------|
| Accuracy | 98.3% |
📂 Project Files
- `Collect_Data.py`: Hand tracking + data saver
- `train_pth.ipynb`: Model training pipeline
- `test.py`: Live webcam detection
▶️ How to Use
This model can be integrated into a real-time, webcam-based ASL interpreter using OpenCV with MediaPipe or cvzone: load the trained checkpoint, capture the hand ROI from each frame, preprocess it to match the training transforms, and run a prediction. A minimal sketch of that prediction step is shown below.
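The sketch assumes the 36-class head and checkpoint described above; the class ordering follows `ImageFolder`'s alphabetical sort (digits before letters), which is an assumption about how the dataset folders were named:

```python
# Predict the ASL sign for one cropped hand image (BGR, as produced by OpenCV).
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

CLASSES = [str(d) for d in range(10)] + [chr(c) for c in range(ord("A"), ord("Z") + 1)]

model = models.mobilenet_v2()
model.classifier[1] = nn.Linear(model.last_channel, len(CLASSES))
model.load_state_dict(torch.load("asl_mobilenetv2_best.pth", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def predict(crop_bgr):
    """Return the predicted sign label for a cropped hand image."""
    rgb = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)      # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[logits.argmax(1).item()]
```

The returned label can then be drawn onto the live frame with `cv2.putText` to complete the interpreter loop.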
🚀 Future Work
- Add a real-time ASL detection app
- Build ASL-based games (e.g., an ASL crossword)
- Improve dataset diversity
- Deploy on web or mobile using TensorFlow Lite or ONNX (see the export sketch below)
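As a starting point for the ONNX deployment item, PyTorch's built-in `torch.onnx.export` can convert the trained checkpoint; the paths and input size below are assumptions:

```python
# Export the trained model to ONNX for web/mobile deployment.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2()
model.classifier[1] = nn.Linear(model.last_channel, 36)
model.load_state_dict(torch.load("asl_mobilenetv2_best.pth", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, 224, 224)           # one 224x224 RGB frame
torch.onnx.export(model, dummy, "asl_mobilenetv2.onnx",
                  input_names=["image"], output_names=["logits"])
```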