This directory contains the model files used by the hand gesture classification system.
## `hand_landmarker.task`

MediaPipe hand landmark detection model used for feature extraction.

- Purpose: Detects 21 hand landmark points in images
- Used by: `FeatureExtractor` class
- Source: MediaPipe hand landmarker model
- Download: If missing, download manually from https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task or see the official documentation: https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker/python. The system will also attempt to use MediaPipe's bundled model if available.
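The download step above can be automated. The following is a minimal sketch, assuming the file is kept at `models/hand_landmarker.task` (the local path and the helper name `ensure_model` are assumptions for illustration):

```python
import pathlib
import urllib.request

# Official download URL from the MediaPipe model storage.
MODEL_URL = (
    "https://storage.googleapis.com/mediapipe-models/hand_landmarker/"
    "hand_landmarker/float16/1/hand_landmarker.task"
)


def ensure_model(dest_path="models/hand_landmarker.task"):
    """Download hand_landmarker.task if it is not already present locally."""
    dest = pathlib.Path(dest_path)
    if not dest.exists():
        dest.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(MODEL_URL, dest)
    return dest
```

Calling `ensure_model()` once at startup makes the manual download unnecessary on fresh checkouts.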
## `rps_classifier.joblib`

Trained scikit-learn classifier for rock-paper-scissors gesture classification.

- Purpose: Classifies extracted hand features into rock, paper, or scissors
- Created by: Training script (`handmotion.train`)
- Used by: Prediction script (`handmotion.predict`)
- Format: Joblib-serialized scikit-learn model
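The joblib save/load round trip between training and prediction can be sketched as below; the classifier type, feature dimensionality, and random data are assumptions for illustration, not the project's actual choices:

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((30, 63))           # e.g. 21 landmarks x 3 coordinates per sample
y = rng.integers(0, 3, 30)        # toy labels: 0=rock, 1=paper, 2=scissors

# Training side: fit and serialize the model.
clf = LogisticRegression(max_iter=500).fit(X, y)
joblib.dump(clf, "rps_classifier.joblib")

# Prediction side: reload and classify new feature vectors.
loaded = joblib.load("rps_classifier.joblib")
pred = loaded.predict(X[:1])
```

Joblib is scikit-learn's recommended persistence format for fitted estimators, which is why the classifier ships as a `.joblib` file rather than a plain pickle.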
## Evaluation metrics

Evaluation metrics from model training.

- Purpose: Stores accuracy, confusion matrix, and classification report
- Created by: Training script (`handmotion.train`)
- Format: JSON file with evaluation results
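A metrics file like this can be produced with scikit-learn's standard helpers; the key names and the `metrics.json` filename below are assumptions for this sketch:

```python
import json
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

y_true = [0, 1, 2, 0, 1, 2]   # toy labels: 0=rock, 1=paper, 2=scissors
y_pred = [0, 1, 2, 0, 2, 2]

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    # .tolist() makes the numpy confusion matrix JSON-serializable.
    "confusion_matrix": confusion_matrix(y_true, y_pred).tolist(),
    # output_dict=True returns a nested dict instead of a printable string.
    "classification_report": classification_report(y_true, y_pred, output_dict=True),
}
with open("metrics.json", "w") as f:
    json.dump(metrics, f, indent=2)
```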
## Pipeline

- `hand_landmarker.task` extracts hand landmarks from raw images
- Features are processed and saved to `data/processed/rps/data.npz`
- `rps_classifier.joblib` is trained on these features
- The trained classifier is used for prediction on new images
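The hand-off between feature extraction and training happens through the `.npz` archive. A minimal sketch, assuming the archive stores the feature matrix under `X` and labels under `y` (the key names are assumptions):

```python
import numpy as np

# Feature-extraction side: write features and labels into one .npz archive.
rng = np.random.default_rng(0)
np.savez("data.npz", X=rng.random((10, 63)), y=rng.integers(0, 3, 10))

# Training side: reload both arrays by key.
data = np.load("data.npz")
X, y = data["X"], data["y"]
```

Keeping both arrays in one archive guarantees features and labels stay aligned between the two pipeline stages.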