iOS application for finding formants in spoken sounds
Updated Nov 14, 2025 · Swift
Spokestack: give your iOS app a voice interface!
OpenAI API wrapper for Swift
OtosakuStreamingASR-iOS is a real-time speech recognition engine for iOS, built with Swift and Core ML. It uses a fast and lightweight streaming Conformer model optimized for on-device inference. Designed for developers who need efficient audio transcription on mobile.
Lightweight Swift library for log-Mel spectrogram extraction with Accelerate & CoreML
Foundation-Models chat app tutorial for iOS: demonstrates on-device LLM inference with FoundationModels, including chat and calendar tool use. 🐙
An example project showing how to use Apple's speech-to-text service and AWS Machine Learning to process and find meaning in text.