🚀 SignSync is an AI-powered communication platform that bridges the gap between spoken language and sign language.
It converts speech/text into sign language animations (ISL & ASL) and vice versa, enabling inclusive communication for the deaf and hard-of-hearing community.
## 🌟 Why SignSync?

- 🧑‍🦽 Accessibility First: Helps people with hearing impairments communicate seamlessly.
- 🌐 Two-Way Communication: Converts speech to signs and signs to speech in real time.
- 🇮🇳 Supports ISL & ASL: Covers both Indian Sign Language and American Sign Language.
- 🤖 AI & ML Integration: Uses NLP + computer vision to ensure accurate translation.
## ✨ Features

- 🎙 Speech → Sign Language (real-time conversion; see the sketch after this list)
- ✋ Sign Language → Text/Speech (gesture recognition)
- 🌐 Multi-language support (future roadmap)
- 📊 Dashboard for usage analytics
- 🔉 Text-to-Speech & voice output
- 🎥 Animated sign avatars for ISL & ASL
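
At its core, the speech-to-sign path is three steps: transcribe the audio, map the transcript to sign glosses, and play the matching animation clips. The snippet below is a minimal sketch of that flow, assuming the `speech_recognition` package and a folder of pre-rendered ISL clips; the `GLOSS_TO_CLIP` table, the clip filenames, and the helper functions are illustrative placeholders, not the project's actual API.

```python
# Hypothetical sketch of the speech → sign pipeline (not the project's actual code).
# Assumes the `speech_recognition` package and a folder of pre-rendered ISL clips.
import speech_recognition as sr

# Placeholder mapping from English glosses to ISL animation clips.
GLOSS_TO_CLIP = {
    "hello": "isl/hello.mp4",
    "thank": "isl/thank_you.mp4",
    "you": "isl/you.mp4",
}

def transcribe(wav_path: str) -> str:
    """Turn recorded speech into text via the Google Web Speech API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)

def text_to_clips(text: str) -> list[str]:
    """Map each recognised word to a sign animation clip, skipping unknown words."""
    return [GLOSS_TO_CLIP[w] for w in text.lower().split() if w in GLOSS_TO_CLIP]

if __name__ == "__main__":
    sentence = transcribe("sample.wav")
    print("Transcript:", sentence)
    print("Clips to play:", text_to_clips(sentence))
```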
## 🛠 Tech Stack

- Frontend: React.js + TailwindCSS
- Backend: Node.js + Express / Python Flask (for ML APIs)
- AI/ML: TensorFlow / PyTorch (gesture recognition, NLP models)
- Database: MongoDB / PostgreSQL
- Other Tools: OpenCV, MediaPipe, Google Speech-to-Text, TTS APIs (see the gesture-recognition sketch below)
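
For the sign → text direction, OpenCV captures webcam frames and MediaPipe Hands extracts hand landmarks that a trained classifier can consume. The snippet below is a hedged sketch of that capture front end only; `classify_gesture` stands in for the project's actual TensorFlow/PyTorch model and is not part of any library.

```python
# Hypothetical sketch of the gesture-recognition front end (OpenCV + MediaPipe Hands).
# `classify_gesture` is a placeholder for the project's trained model, not a real API.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def classify_gesture(landmarks) -> str:
    """Placeholder: a real model would map 21 hand landmarks to a sign label."""
    return "UNKNOWN"

def run_camera_loop() -> None:
    cap = cv2.VideoCapture(0)  # default webcam
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV delivers BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                label = classify_gesture(results.multi_hand_landmarks[0].landmark)
                cv2.putText(frame, label, (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("SignSync - gesture preview", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_camera_loop()
```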
## 📂 Project Structure

```
signsync/
├── frontend/    # React-based UI
├── backend/     # APIs for speech ↔ sign conversion
├── models/      # Trained ML models (gesture recognition, NLP)
├── dataset/     # Sign language datasets
├── docs/        # Documentation, research papers
└── README.md    # You're here
```
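
The `backend/` service exposes the conversion logic as HTTP endpoints that the React frontend calls. Below is a minimal Flask sketch of what such an endpoint could look like; the `/api/text-to-sign` route, the request shape, and the `translate_text` helper are illustrative assumptions, not the project's documented API.

```python
# Hypothetical sketch of a backend ML endpoint (Flask).
# Route name, request shape, and translate_text are assumptions for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

def translate_text(text: str) -> list[str]:
    """Placeholder for the real NLP pipeline that maps text to sign animation clips."""
    return [f"isl/{word}.mp4" for word in text.lower().split()]

@app.post("/api/text-to-sign")
def text_to_sign():
    """Accept a JSON body like {"text": "hello"} and return the clips to play."""
    payload = request.get_json(silent=True) or {}
    clips = translate_text(payload.get("text", ""))
    return jsonify({"clips": clips})

if __name__ == "__main__":
    # The installation steps below start this service with `python app.py`.
    app.run(port=5000, debug=True)
```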
## ⚡ Installation

```bash
# Clone repo
git clone https://github.com/Sweety-Vigneshg/speech-to-sign-project.git
cd speech-to-sign-project

# Backend setup
cd backend
pip install -r requirements.txt
cd ..

# Frontend setup
cd frontend
npm install

# Run servers (use two terminals)
npm start        # frontend (run inside frontend/)
python app.py    # backend ML service (run inside backend/)
```
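
Once both servers are running, you can smoke-test the backend directly. The snippet below assumes the hypothetical `/api/text-to-sign` endpoint sketched earlier, a Flask server on port 5000, and the `requests` package; adjust the URL and payload to match the actual routes exposed by `app.py`.

```python
# Quick smoke test against the hypothetical /api/text-to-sign endpoint sketched above.
# Adjust the URL to whatever route and port the real backend exposes.
import requests

response = requests.post(
    "http://localhost:5000/api/text-to-sign",
    json={"text": "hello world"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. {"clips": ["isl/hello.mp4", "isl/world.mp4"]}
```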