A simple yet powerful Python application that converts spoken words into American Sign Language (ASL) by playing corresponding video clips. Designed to promote inclusivity and accessibility for the Deaf and Hard of Hearing community.
- 🎤 Voice input via microphone
- 🔤 Speech-to-text conversion
- 📽️ ASL video playback using OpenCV
- 🖱️ User-friendly GUI built with Tkinter
- Python 3.x
- Required Python packages (can be installed via `requirements.txt`)
- Dataset at https://drive.google.com/drive/folders/1oQosXd5BNjIbHIOTL7R-NlbWuIo2-0nb?usp=sharing
- Clone the Repository
  - `git clone https://github.com/VishalShekha/TeToS.git`
  - `cd TeToS`
- Install Dependencies
  - `pip install -r requirements.txt`
- Add the Dataset
  - Download or prepare your ASL video dataset.
  - Set the dataset path in `ttv.py` (see the path sketch after these steps).
- Run the Application
  - `python main.py`
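As a rough illustration of the dataset-path step, `ttv.py` needs to know where the clips live. The constant name below is purely illustrative; check the actual file for the variable it defines.

```python
# ttv.py (illustrative sketch; the real variable name may differ)
import os

# Point this at the folder containing your ASL video clips,
# e.g. the asl_videos/ directory from the project structure below.
ASL_VIDEO_DIR = os.path.join(os.path.dirname(__file__), "asl_videos")
```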
voice-to-asl/
│
├── main.py # Entry point — calls GUI and starts the app
├── gui.py # GUI with Tkinter — mic button & text display
├── ttv.py # ASL video handler using OpenCV
├── stt.py # Speech-to-text using Google's API
└── asl_videos/ # (Place your ASL video dataset here)
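For orientation, a minimal sketch of how `main.py` might wire these pieces together, assuming `gui.py` exposes a launch function (the name `launch_app` is hypothetical):

```python
# main.py (sketch) -- assumes gui.py exposes launch_app(); the real entry function may differ
from gui import launch_app

if __name__ == "__main__":
    launch_app()
```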
- `speech_recognition` (imported as `sr`)
  - Used to capture and transcribe audio input using Google's speech recognition API.
  - `Recognizer()` handles the audio-to-text conversion (a usage sketch follows this list).
- `cv2` (OpenCV)
  - Used for video handling and playback.
  - Enables frame-by-frame control over sign language video display (sketched after this list).
- `tkinter` (imported as `tk`)
  - Used for creating the desktop GUI application window.
  - Allows integration of buttons, text labels, and image/video playback containers.
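The snippets below are minimal usage sketches, not the project's actual code; function names such as `transcribe_from_mic`, `play_clip`, and `launch_app` are illustrative. First, the typical `speech_recognition` flow that `stt.py` likely follows:

```python
# Sketch of the usual speech_recognition flow (stt.py may differ in details)
import speech_recognition as sr

def transcribe_from_mic() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:                # open the default microphone
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)          # record until a pause is detected
    try:
        return recognizer.recognize_google(audio)  # send the audio to Google's API
    except sr.UnknownValueError:
        return ""                                  # speech was unintelligible
    except sr.RequestError as err:
        raise RuntimeError(f"Speech recognition request failed: {err}")
```

Next, frame-by-frame playback with OpenCV, roughly what `ttv.py` needs to do for each matched clip:

```python
# Sketch of frame-by-frame video playback with OpenCV
import cv2

def play_clip(path: str, window: str = "ASL") -> None:
    cap = cv2.VideoCapture(path)                # open the video file
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0     # fall back to 30 fps if the file reports none
    delay = max(1, int(1000 / fps))             # per-frame delay in milliseconds
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:                              # end of the clip
            break
        cv2.imshow(window, frame)
        if cv2.waitKey(delay) & 0xFF == ord("q"):  # allow early exit with 'q'
            break
    cap.release()
    cv2.destroyAllWindows()
```

And a bare-bones Tkinter window with a mic button and a transcript label, in the spirit of `gui.py`:

```python
# Sketch of a minimal Tkinter GUI with a mic button and a text display
import tkinter as tk

def launch_app() -> None:
    root = tk.Tk()
    root.title("Voice to ASL")

    transcript = tk.StringVar(value="Press the mic button and speak")
    tk.Label(root, textvariable=transcript, wraplength=300).pack(padx=10, pady=10)

    def on_mic() -> None:
        # In the real app this would call the speech-to-text and video modules
        transcript.set("Listening...")

    tk.Button(root, text="🎤 Speak", command=on_mic).pack(pady=10)
    root.mainloop()
```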
Contributions, issues, and feature requests are welcome! Feel free to fork the repo and submit a pull request.