Using a PointNet neural network and MediaPipe to recognize Sign Language symbols (and other hand gestures) in 3D
Install the dependencies and list the available options:

```sh
pip install -r requirements.txt
python3 . -h
```

To collect the data, we use the MediaPipe library to detect the hand landmarks and the OpenCV library to capture the video. The data is saved in a .json file located in the data/raw folder. To run the data harvesting script, use the following command:
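As a rough illustration of the harvesting step, the sketch below shows how one frame's MediaPipe hand landmarks could be flattened into `[x, y, z]` points and appended to a raw JSON file. The names `Landmark`, `landmarks_to_points`, and `append_sample` are hypothetical stand-ins, not the actual script's API; in the real script, MediaPipe's Hands solution supplies the landmark objects from OpenCV webcam frames.

```python
import json
from dataclasses import dataclass


# Stand-in for MediaPipe's NormalizedLandmark (hypothetical; the real
# objects come from results.multi_hand_landmarks in the Hands solution).
@dataclass
class Landmark:
    x: float
    y: float
    z: float


def landmarks_to_points(landmarks):
    """Flatten the 21 hand landmarks into a list of [x, y, z] points."""
    return [[lm.x, lm.y, lm.z] for lm in landmarks]


def append_sample(path, label, landmarks):
    """Append one labelled sample to a JSON file holding a list of records."""
    try:
        with open(path) as f:
            data = json.load(f)
    except FileNotFoundError:
        data = []
    data.append({"label": label, "points": landmarks_to_points(landmarks)})
    with open(path, "w") as f:
        json.dump(data, f)
```

Storing each sample as a label plus a 21-point cloud keeps the raw file directly usable by a point-based network later.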
```sh
python3 src/webcam_harvest.py -h
```

The data preprocessing is done in the src/preprocess.ipynb notebook, which cleans the data and creates the training and test sets. The cleaned data is saved in a .json file located in the data/clean folder. To preprocess the data, run the src/preprocess.ipynb notebook.
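The cleaning and splitting done in the notebook could look roughly like the sketch below: center each hand on the wrist (landmark 0), scale it to unit size so the network sees position- and scale-invariant shapes, and shuffle-split the samples. Function names and the exact normalization are assumptions for illustration, not the notebook's actual code.

```python
import random


def normalize_hand(points):
    """Center the hand on the wrist (landmark 0) and scale to unit size.

    This is an assumed normalization, not necessarily the notebook's.
    """
    wx, wy, wz = points[0]
    centered = [[x - wx, y - wy, z - wz] for x, y, z in points]
    scale = max(max(abs(c) for c in p) for p in centered) or 1.0
    return [[x / scale, y / scale, z / scale] for x, y, z in centered]


def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle the samples with a fixed seed and split off a test set."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]
```

Normalizing before training means the classifier learns hand shape rather than where the hand happened to sit in the camera frame.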
The training is done in the src/train.ipynb notebook, which trains the PointNet neural network. The trained model is saved in a .pth file located in the data/model folder. To train the model, run the src/train.ipynb notebook.
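For intuition, here is a minimal PointNet-style classifier in PyTorch: a shared per-point MLP followed by a symmetric max-pool, which makes the prediction invariant to the ordering of the 21 landmarks. This is a simplified sketch (no input/feature transform nets, assumed layer sizes and class count), not the notebook's actual model.

```python
import torch
import torch.nn as nn


class MiniPointNet(nn.Module):
    """Minimal PointNet-style classifier: a shared per-point MLP, a
    symmetric max-pool over points, then a fully connected head."""

    def __init__(self, num_classes, in_dim=3):
        super().__init__()
        # 1x1 convolutions apply the same MLP to every point independently.
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):                 # x: (batch, 3, num_points)
        feat = self.mlp(x)                # (batch, 128, num_points)
        pooled = feat.max(dim=2).values   # order-invariant global feature
        return self.head(pooled)          # (batch, num_classes)


model = MiniPointNet(num_classes=26)      # assumed class count (A-Z)
points = torch.randn(4, 3, 21)            # batch of 4 hands, 21 landmarks each
logits = model(points)                    # shape (4, 26)
# After training, the weights would be saved along the lines of:
# torch.save(model.state_dict(), "data/model/pointnet.pth")
```

The max-pool is what makes this a PointNet: any permutation of the input points produces the same global feature vector.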
If you enjoy my work, please donate here