Collected Alfred Workflows & Proof of Concept
Updated May 21, 2025 · Swift
This app detects text in pictures captured with the camera or selected from the photo gallery. It uses MLVisionTextModel from Google's ML Kit Vision framework for on-device text detection.
iOS app that reads text from camera live preview.
Eyes is an iOS app designed to help blind and visually impaired people better understand the world around them.
Text recognition from image with Vision
An intuitive SwiftUI app for managing exams effortlessly. Create, edit, delete, and list exam templates, recognize student answers via the device camera, and automatically correct exams with customizable scoring rules. Export results as CSV files for easy sharing and analysis.
AllerScan is an allergy detection iOS application that scans food labels (in multiple languages!) for a given set of allergies. Built by Pingry students Rhea Kapur (Pingry '21), Eva Schiller (Pingry '21), Emma Huang (Pingry '21), and Olivia Taylor (Pingry '23) at FemmeHacks 2021.
A Swift command line tool for recognizing text in images using macOS's built-in Vision framework.
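For readers unfamiliar with this approach, here is a minimal sketch of how such a command-line tool might use the Vision framework's `VNRecognizeTextRequest` on macOS; the argument handling and output format are illustrative assumptions, not the listed tool's actual implementation.

```swift
import Foundation
import Vision
import AppKit

// Hypothetical minimal OCR CLI: prints recognized text from an image file
// given as the first command-line argument (assumes macOS 10.15+).
guard CommandLine.arguments.count > 1,
      let image = NSImage(contentsOfFile: CommandLine.arguments[1]),
      let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
    fputs("usage: ocr <image-path>\n", stderr)
    exit(1)
}

let request = VNRecognizeTextRequest { request, _ in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        // topCandidates(1) yields the most confident transcription per line.
        if let candidate = observation.topCandidates(1).first {
            print(candidate.string)
        }
    }
}
request.recognitionLevel = .accurate  // trade speed for accuracy

let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])
```

Because recognition runs entirely on-device via Vision, no network access or external model download is required.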
iOS Vision Framework Examples
iOS app with OCR and translation: Extract text from images using Vision framework and translate with built-in Translation API. SwiftUI + educational project.
👁️ Detect obstacles in real-time using LiDAR technology, enhancing awareness for visually impaired individuals beyond traditional methods.