View the website here: SightSense
The website will soon be hosted at sightsense.tech once the DNS records finish propagating.
We were inspired to create a product that could help individuals with visual impairments navigate the world around them with greater independence and autonomy.
SightSense is a wearable device that uses object detection, voice commands, and text-to-speech to help individuals with visual impairments navigate their surroundings in real time.
We built SightSense using a Jetson Nano 4GB module running Linux, a Microsoft webcam, a speaker, and a standard pair of sunglasses. On top of that hardware, we used artificial intelligence models for real-time object detection, image captioning, and speech recognition.
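To illustrate the voice-feedback side of the pipeline, the sketch below shows how a list of detections might be turned into a short spoken phrase before being handed to the text-to-speech engine. The function name, the `(label, confidence, position)` tuple format, and the position buckets are our own illustration here, not the exact code running on the device.

```python
def describe_detections(detections):
    """Turn (label, confidence, position) detections into a spoken phrase.

    `position` is a coarse bucket such as "left", "ahead", or "right",
    derived from where the bounding box sits in the camera frame.
    Low-confidence detections are skipped so the wearer is not flooded
    with uncertain announcements.
    """
    phrases = []
    for label, confidence, position in detections:
        if confidence < 0.5:
            continue  # too uncertain to announce
        if position == "ahead":
            phrases.append(f"{label} ahead")
        else:
            phrases.append(f"{label} on your {position}")
    return ", ".join(phrases) if phrases else "path clear"


detections = [("person", 0.92, "left"), ("chair", 0.78, "ahead"), ("dog", 0.31, "right")]
print(describe_detections(detections))  # person on your left, chair ahead
```

Keeping the announcement short matters as much as the detection itself: a long sentence read aloud every frame would lag behind the world it describes.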
We faced several challenges in building SightSense, including integrating all of the components seamlessly into a pair of sunglasses, keeping object detection fast enough to run in real time on the Jetson Nano, and ensuring the device remained both affordable and accessible.
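One common pattern for keeping a detection loop real-time on modest hardware is to let the camera side drop stale frames instead of queuing them, so the detector always works on the newest image. The sketch below shows that idea with Python's standard library; it is a minimal illustration of the pattern, not the actual SightSense pipeline code.

```python
import queue

# A one-slot queue: the detector always sees the newest frame, and the
# capture side overwrites a stale frame instead of letting a backlog grow.
latest_frame = queue.Queue(maxsize=1)

def submit_frame(frame):
    """Called from the capture loop; drops the stale frame if one is waiting."""
    try:
        latest_frame.get_nowait()  # discard the unprocessed frame
    except queue.Empty:
        pass
    latest_frame.put(frame)

def next_frame():
    """Called from the detection loop; blocks until a frame is available."""
    return latest_frame.get()

# Simulate a fast camera outrunning a slow detector:
for i in range(5):
    submit_frame(f"frame-{i}")
print(next_frame())  # frame-4
```

Without this, a detector that runs slower than the camera falls progressively further behind, and the spoken descriptions end up narrating a scene from several seconds ago.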
We are proud to have created a product with the potential to enhance the independence and quality of life of individuals with visual impairments, and to have integrated all of the components into a practical wearable device.
During the development of SightSense, we gained experience working with cutting-edge technologies, including the Jetson Nano 4GB module, the Microsoft webcam, and text-to-speech. We learned how to integrate these technologies and tune their performance to create a seamless user experience. We also deepened our Python skills and our familiarity with tools like TensorFlow and OpenCV for real-time object detection and image processing. Through this project, we acquired skills and knowledge that will enable us to tackle even more complex challenges in the future.
In the future, we plan to further refine SightSense's capabilities, including expanding the range of objects that can be detected, improving voice recognition accuracy, and adding features that further enhance the user's independence and autonomy. We also plan to conduct user testing to gather feedback on the usability and effectiveness of our product.