Below is a basic demo of the project using ASL! Enjoy <3
This project recognizes and translates sign languages using a combined CNN-LSTM model. It leverages TensorFlow, MediaPipe, and NLP for feature extraction, model training, and text correction.
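To give a feel for what such an architecture can look like, here is a minimal Keras sketch; the sequence length, feature count, and number of classes are illustrative assumptions, not the project's actual configuration:

```python
# Sketch of a CNN-LSTM classifier over sequences of hand-landmark frames.
# All dimensions below are illustrative assumptions, not the project's real values.
import tensorflow as tf

SEQ_LEN = 30        # assumed number of frames per sign
NUM_FEATURES = 63   # 21 MediaPipe hand landmarks x (x, y, z)
NUM_CLASSES = 26    # e.g., the ASL alphabet

model = tf.keras.Sequential([
    # 1D convolutions extract local temporal features from short windows of frames
    tf.keras.layers.Conv1D(64, 3, activation="relu",
                           input_shape=(SEQ_LEN, NUM_FEATURES)),
    tf.keras.layers.MaxPooling1D(2),
    # the LSTM models longer-range temporal structure across the sequence
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```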
Clone the repository to your local machine using Git:

```bash
git clone https://github.com/godinezsteven1/AI-SignLanguage.git
cd AI-SignLanguage
```

Create a virtual environment to manage project dependencies:

```bash
python3 -m venv fai_project_env
```

Activate the virtual environment:
- For macOS/Linux:

  ```bash
  source fai_project_env/bin/activate
  ```

- For Windows:

  ```bash
  fai_project_env\Scripts\activate
  ```
Install the required Python packages using the requirements.txt file:

```bash
pip install -r requirements.txt
```

To make sure everything is working, run the hand_tracking.py script to confirm that TensorFlow and MediaPipe are installed correctly:
```bash
cd scripts
python hand_tracking.py
```

If everything is set up correctly, you should see a window showing hand tracking in real time.
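For reference, a webcam hand-tracking loop of this kind typically looks like the following minimal sketch; hand_tracking.py itself may be structured differently, and the camera index, window name, and quit key here are assumptions:

```python
# Minimal MediaPipe hand-tracking sketch, for illustration only;
# the project's hand_tracking.py may differ.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumed default webcam
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for landmarks in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Hand Tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
cap.release()
cv2.destroyAllWindows()
```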
In the root directory:

```bash
python sign_recognition_gui.py
```

This project uses environment variables for sensitive configuration. A template file, .env.example, is provided in the repository. Follow these steps to create your own .env file:
- Copy the template file:

  ```bash
  cp .env.example .env
  ```

- Fill in your information:

  ```
  CLIENT_ID=your_client_id_here
  CLIENT_SECRET=your_client_secret_here
  USER_AGENT=your_user_agent_here
  POST_LIMIT=NUMBER_LIMIT_HERE
  ```

CLIENT_ID, CLIENT_SECRET, and USER_AGENT are found at https://www.reddit.com/prefs/apps, where you can create your own API app key under YOUR account. Alternatively, on Reddit click your avatar → User Settings → scroll down to the "Apps" section; you can manage and create apps from there.
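For context, here is one way these values could be consumed in Python. This is a sketch assuming the python-dotenv and PRAW packages (suggested by the variable names, but not confirmed by the repository), and the subreddit name and default limit are illustrative:

```python
# Sketch: loading .env values and using them for a Reddit client.
# Assumes python-dotenv and PRAW; the project's actual usage may differ.
import os
from dotenv import load_dotenv
import praw

load_dotenv()  # reads .env from the current working directory

reddit = praw.Reddit(
    client_id=os.getenv("CLIENT_ID"),
    client_secret=os.getenv("CLIENT_SECRET"),
    user_agent=os.getenv("USER_AGENT"),
)
post_limit = int(os.getenv("POST_LIMIT", "25"))  # default value is an assumption

# Example: fetch post titles from a subreddit (subreddit name is illustrative)
for post in reddit.subreddit("asl").hot(limit=post_limit):
    print(post.title)
```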
The model successfully recognizes and classifies signs in real time across multiple sign languages.