This project detects sign language gestures from a live camera feed, translates them into the corresponding text, and displays the result in real time. It combines computer vision and deep learning techniques to achieve this.
- Backend: Python
- Machine Learning: TensorFlow (CNN)
- Video Processing: OpenCV
- Gesture Detection: Mediapipe
- Database: MySQL (optional, for storing data)
- Serialization: Pickle (for storing trained model)
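Since the stack lists Pickle for serialization, the save/load step might look like the sketch below. It round-trips a small label map (the gesture names and filename here are illustrative, not from the project; a Keras model itself is usually saved with `model.save()` instead):

```python
import pickle

# Hypothetical label map: class index -> gesture meaning.
# The gesture names are placeholders, not from the project.
label_map = {0: "hello", 1: "thanks", 2: "yes", 3: "no"}

# Serialize the object to disk, as the README suggests for trained artifacts.
with open("label_map.pkl", "wb") as f:
    pickle.dump(label_map, f)

# Deserialize it at inference time.
with open("label_map.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored[1])  # -> thanks
```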
- Real-time gesture recognition from a camera feed.
- Gesture-to-text translation of sign language.
- CNN-based gesture classification.
- OpenCV for video capture and processing.
- TensorFlow for building and running the machine learning model.
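A typical way to feed hand data from Mediapipe into a classifier is to flatten the 21 hand landmarks into a fixed-length feature vector. The helper below is a hedged sketch of that preprocessing step (the function name and wrist-relative normalization are assumptions, not the project's confirmed method):

```python
def landmarks_to_features(landmarks):
    """Flatten 21 (x, y) hand landmarks into a 42-value vector,
    translated so the wrist (landmark 0) sits at the origin."""
    wx, wy = landmarks[0]
    features = []
    for x, y in landmarks:
        features.append(x - wx)
        features.append(y - wy)
    return features

# Synthetic landmarks standing in for Mediapipe output (21 points
# in normalized image coordinates).
fake_landmarks = [(0.5 + 0.01 * i, 0.5 - 0.01 * i) for i in range(21)]
vector = landmarks_to_features(fake_landmarks)
print(len(vector))  # 42 values, ready to feed to the classifier
```

Making the features wrist-relative keeps the classifier insensitive to where the hand appears in the frame.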
Clone the repository:

```bash
git clone https://github.com/yourusername/sign-language-detection.git
cd sign-language-detection
```
Create a virtual environment:

```bash
python -m venv venv
```

Activate it on Windows:

```bash
venv\Scripts\activate
```

On macOS/Linux:

```bash
source venv/bin/activate
```
Install the dependencies:

```bash
pip install tensorflow opencv-python mediapipe mysql-connector-python
```
Run the application:

```bash
python app.py
```