# ASL Translator Project


This project uses a Raspberry Pi and computer vision to recognize American Sign Language (ASL) hand gestures. The default model detects ASL digits (0-9) and translates them into text in real time from a live camera feed. Classification uses a pre-trained TensorFlow model, `keras_model.h5`, and users can optionally train their own model for custom gestures.
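
For reference, below is a minimal sketch of what such a classification loop can look like. This is not the project's actual `main.py`; the use of OpenCV for capture, the 224x224 input size and scaling, and the digit label list are assumptions based on how Teachable Machine image models are typically used.

```python
import cv2
import numpy as np
import tensorflow as tf

# Load the pre-trained classifier (path assumed to be the repo's keras_model.h5).
model = tf.keras.models.load_model("keras_model.h5", compile=False)

# Hypothetical label list for the default digit model (0-9).
class_names = [str(i) for i in range(10)]

cap = cv2.VideoCapture(0)  # default camera; adjust the index for your Pi camera setup
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Teachable Machine image models typically expect 224x224 RGB input scaled to [-1, 1].
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (224, 224))
        img = (img.astype(np.float32) / 127.5) - 1.0

        # Run inference on the single frame and pick the most likely class.
        probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
        label = class_names[int(np.argmax(probs))]

        # Overlay the predicted digit on the live feed.
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("ASL Translator", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```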

### How to use this project
* Clone this project onto your Raspberry Pi 5
* Install all necessary packages listed in `requirements.txt`:
* `pip install -r requirements.txt`
* Note: TensorFlow must be version `2.12.1` to run `keras_model.h5`:
* `pip install tensorflow==2.12.1`


After installing all requirements, run `main.py` to use the default pre-trained model (it only recognizes digits 0-9).

Or you can train your own model at
https://teachablemachine.withgoogle.com/train/image by uploading photos for different classes.
Simply replace the `.h5` file after training to use the new model.
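
If your custom model uses different class names, a small helper like the sketch below could map predictions to those names. It assumes the Keras export from Teachable Machine also includes a `labels.txt` file with one `index name` pair per line; the file name and line format are assumptions about that export, not something this project documents.

```python
def load_labels(path="labels.txt"):
    """Read class names from a Teachable Machine-style labels file (assumed format: '0 ClassName')."""
    with open(path) as f:
        return [line.strip().split(" ", 1)[1] for line in f if line.strip()]

# Example: use these names instead of the hard-coded 0-9 digits when classifying.
class_names = load_labels()
```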