Real-time fingerspelling video recognition achieving 74.4% letter accuracy on ChicagoFSWild+
ASL gesture recognition from a webcam using OpenCV and a CNN
There are many applications where hand gestures can be used to interact with systems such as video games, UAVs, and medical equipment. These gestures can also let people with disabilities interact with such systems. The main focus of this work is to create a vision-based system to identify sign language gestures from real-…
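A minimal sketch of such a webcam-to-CNN loop with OpenCV is shown below, assuming a Keras model saved as "asl_cnn.h5" that takes 64x64 RGB inputs and outputs one score per letter; the model file, input size, and label order are illustrative assumptions, not this repo's actual code.

```python
# Sketch: read webcam frames with OpenCV and classify each one with a CNN.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # hypothetical class order
model = load_model("asl_cnn.h5")                          # hypothetical model file

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.resize(frame, (64, 64))                     # assumed input size
    x = np.expand_dims(roi.astype("float32") / 255.0, axis=0)
    probs = model.predict(x, verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```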
The purpose of the Sign-Interfaced Machine Operating Network, or SIMON, is to develop a machine learning classifier that translates a discrete set of ASL signs, captured as images of a hand, into a response from another system.
A Raspberry Pi setup to recognize ASL signs using a pre-trained CNN model and speak them aloud using a suitable TTS engine with adaptive settings.
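A minimal sketch of the "predict, then speak" step is given below, assuming pyttsx3 as the TTS engine and a hypothetical predict_letter() wrapper around the pre-trained CNN; both names are assumptions for illustration.

```python
# Sketch: speak a CNN prediction aloud with pyttsx3 on the Pi.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 140)    # "adaptive settings" would tune these per user
engine.setProperty("volume", 1.0)

def speak_prediction(frame, predict_letter):
    """Run the CNN on a frame and speak the resulting letter aloud."""
    letter = predict_letter(frame)   # hypothetical model wrapper
    engine.say(letter)
    engine.runAndWait()
```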
Socket connection to an ASL alphabet recognition model based on the YOLO architecture
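A minimal client-side sketch of such a socket connection follows, assuming a server at HOST:PORT that accepts a length-prefixed JPEG frame and replies with a UTF-8 label; the address and wire protocol are assumptions, not the repo's documented interface.

```python
# Sketch: send a frame to a remote YOLO-based recognizer over TCP, get a label back.
import socket
import struct
import cv2

HOST, PORT = "127.0.0.1", 9999   # hypothetical server address

def classify_frame(frame):
    ok, jpg = cv2.imencode(".jpg", frame)
    if not ok:
        raise ValueError("could not encode frame")
    payload = jpg.tobytes()
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(struct.pack(">I", len(payload)) + payload)  # 4-byte length prefix
        label = sock.recv(1024).decode("utf-8").strip()
    return label
```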
IntelliGest is a smart home assistant device that uses machine learning and computer vision to detect ASL for Deaf individuals.
ASL Word Recognizer: a Streamlit app that, given a video of a person performing a sign, uses an Inception I3D model to predict the word shown in the video. Hosted on Streamlit and AWS.
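A minimal sketch of the video-to-word step appears below, assuming a fine-tuned Inception I3D exported as a Keras model named "asl_i3d" that takes a clip of shape (1, 64, 224, 224, 3), plus a small example vocabulary; the export name, clip length, and word list are assumptions.

```python
# Sketch: sample frames from a video and classify the clip with an I3D-style model.
import cv2
import numpy as np
import tensorflow as tf

def sample_clip(video_path, num_frames=64, size=224):
    """Uniformly sample num_frames RGB frames from the video and resize them."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.cvtColor(cv2.resize(frame, (size, size)), cv2.COLOR_BGR2RGB)
        frames.append(frame.astype("float32") / 255.0)
    cap.release()
    idx = np.linspace(0, len(frames) - 1, num_frames).astype(int)
    return np.stack([frames[i] for i in idx])[None, ...]   # shape (1, T, H, W, 3)

model = tf.keras.models.load_model("asl_i3d")               # hypothetical export
ASL_WORDS = ["book", "drink", "computer"]                    # hypothetical vocabulary
clip = sample_clip("sign.mp4")
print(ASL_WORDS[int(np.argmax(model.predict(clip, verbose=0)[0]))])
```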
🤟 Real-Time ASL Detection using Deep Learning 🎥🧠 This project implements a real-time American Sign Language (ASL) alphabet recognition system using a custom-trained deep learning model with OpenCV and TensorFlow/Keras. The model was trained on a dataset of 3,000 images per class (A-Z), resized to 200x200 pixels for optimal performance.
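A minimal sketch of that training setup follows, assuming the 26-class dataset lives in "asl_dataset/<letter>/" folders of 200x200 images; the layer sizes and epoch count are illustrative, not the repo's exact configuration.

```python
# Sketch: train a small Keras CNN on an A-Z image folder dataset at 200x200.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset", image_size=(200, 200), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(200, 200, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"),   # one class per letter A-Z
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```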
This is one of those projects I worked hard to understand and replicate with the dataset I had. It is also my first project repo! It is a somewhat late submission (completed two weeks ago).
A demo of a basic CNN for image recognition
Developed 4 different machine learning models for recognising American Sign Language
AllEyezOnMe is a real-time ASL recognition system using a Random Forest classifier and MediaPipe. It detects hand landmarks from webcam input to predict ASL alphabet and numbers. The model is trained on diverse datasets to enhance accuracy and performance.
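A minimal sketch of that landmark-to-classifier pipeline is shown below, assuming a Random Forest already fitted on 63-value feature vectors (21 MediaPipe hand landmarks x x/y/z); the training data and label set are assumptions.

```python
# Sketch: extract MediaPipe hand landmarks from a frame and classify them.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_features(bgr_frame):
    """Return a flat (63,) array of hand landmark coordinates, or None if no hand."""
    results = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

# clf would be fitted offline, e.g. RandomForestClassifier(n_estimators=200).fit(X, y)
def predict_sign(clf: RandomForestClassifier, bgr_frame):
    feats = landmark_features(bgr_frame)
    return None if feats is None else clf.predict(feats.reshape(1, -1))[0]
```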
Using Python, OpenCV, and TensorFlow to create an unsupervised real-time object detection model that identifies and translates American Sign Language (ASL) signs in real time.
This repository contains my code for training and running a machine learning model for classifying images of the American Sign Language (ASL) alphabet. The model was architected and trained using Google's TensorFlow library.
A real-time American Sign Language (ASL) detection system that allows users to input text using hand gestures. This project uses computer vision and machine learning to recognize ASL signs and convert them into text.