
Google - American Sign Language Fingerspelling Recognition

Goal of the Competition

The goal of this competition is to detect and translate American Sign Language (ASL) fingerspelling into text. You will create a model trained on the largest dataset of its kind, released specifically for this competition. The data includes more than three million fingerspelled characters produced by over 100 Deaf signers, captured via the selfie camera of a smartphone with a variety of backgrounds and lighting conditions.

Your work may help move sign language recognition forward, making AI more accessible for the Deaf and Hard of Hearing community.

About the competition (YouTube)

Context

Voice-enabled assistants open up a world of useful, sometimes life-changing features on modern devices. These revolutionary AI solutions include automated speech recognition (ASR) and machine translation. Unfortunately, these technologies are often not accessible to the more than 70 million Deaf people around the world who use sign language to communicate, nor to the more than 1.5 billion people affected by hearing loss globally.

Fingerspelling uses hand shapes that represent individual letters to convey words. While fingerspelling is only a part of ASL, it is often used for communicating names, addresses, phone numbers, and other information commonly entered on a mobile phone. Many Deaf smartphone users can fingerspell words faster than they can type on mobile keyboards. In fact, ASL fingerspelling can be substantially faster than typing on a smartphone’s virtual keyboard (57 words/minute average versus 36 words/minute US average). But sign language recognition AI for text entry lags far behind voice-to-text or even gesture-based typing, as robust datasets didn't previously exist.

Technology that understands sign language fits squarely within Google's mission to organize the world's information and make it universally accessible and useful. Google’s AI principles also support this idea and encourage Google to make products that empower people, widely benefit current and future generations, and work for the common good. This collaboration between Google and the Deaf Professional Arts Network will explore AI solutions that can be scaled globally (such as other sign languages), and support individual user experience needs while interacting with products.

Your participation in this competition could help provide Deaf and Hard of Hearing users the option to fingerspell words instead of using a keyboard. Besides convenient text entry for web search, map directions, and texting, there is potential for an app that can then translate this input using sign language-to-speech technology to speak the words. Such an app would enable the Deaf and Hard of Hearing community to communicate with hearing non-signers more quickly and smoothly.

Acknowledgement

Thanks to the Deaf Professional Arts Network and their community of Deaf signers who made this dataset possible. Thanks also to the students of RIT/NTID who interned at Google to help set this project's agenda and create the data collection tools.
