Pre-built XCFrameworks for the RunAnywhere on-device ML inference framework.
Add to your Package.swift:
```swift
dependencies: [
    .package(url: "https://github.com/RunAnywhereAI/runanywhere-binaries.git", from: "1.0.0")
]
```

Then add the backend you need to your target:
```swift
.target(
    name: "MyApp",
    dependencies: [
        // Choose one or more backends:
        .product(name: "RunAnywhereONNX", package: "runanywhere-binaries"),
        // .product(name: "RunAnywhereCoreML", package: "runanywhere-binaries"),
        // .product(name: "RunAnywhereTFLite", package: "runanywhere-binaries"),
    ]
)
```

Or in Xcode: File → Add Package Dependencies → enter the repository URL.
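Putting the pieces together, a complete minimal `Package.swift` might look like the following. This is a sketch, not an official template: `MyApp` is a placeholder name, and the platform versions mirror the requirements listed at the end of this document.

```swift
// swift-tools-version:5.9
import PackageDescription

// A minimal sketch; "MyApp" is a placeholder target name.
let package = Package(
    name: "MyApp",
    platforms: [
        .iOS(.v15),   // matches the framework's minimum iOS version
        .macOS(.v12)  // matches the framework's minimum macOS version
    ],
    dependencies: [
        .package(url: "https://github.com/RunAnywhereAI/runanywhere-binaries.git", from: "1.0.0")
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                .product(name: "RunAnywhereONNX", package: "runanywhere-binaries")
            ]
        )
    ]
)
```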
Add to your Podfile:
```ruby
# Default ONNX backend
pod 'RunAnywhere'

# Or specific backend(s)
pod 'RunAnywhere/ONNX'
pod 'RunAnywhere/CoreML'
pod 'RunAnywhere/TFLite'

# All backends
pod 'RunAnywhere/All'
```

Then run:
```sh
pod install
```
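After installation, open the generated `.xcworkspace` (not the `.xcodeproj`) and import the module. A one-line sketch; the module name here is an assumption based on the pod name:

```swift
import RunAnywhere  // module name assumed to match the pod name
```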
Download XCFrameworks from Releases:

- Download the `.xcframework.zip` for your chosen backend
- Verify the checksum: `shasum -a 256 -c checksums.txt`
- Unzip and drag the `.xcframework` into your Xcode project
- In Build Phases → Link Binary With Libraries, add:
  - Foundation.framework
  - CoreML.framework (for ONNX/CoreML)
  - Accelerate.framework
  - Metal.framework (for CoreML/TFLite)
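As an alternative to dragging frameworks in by hand, SwiftPM can consume a release zip directly as a binary target; the checksum pins your build to the exact artifact you verified. A sketch, with a hypothetical asset URL and a placeholder checksum that you would replace with the SHA-256 value from `checksums.txt`:

```swift
// swift-tools-version:5.9
import PackageDescription

// A sketch: wraps a downloaded release zip as a SwiftPM binary target.
// The asset URL below is hypothetical; the checksum is a placeholder.
let package = Package(
    name: "RunAnywhereBinaryWrapper",
    products: [
        .library(name: "RunAnywhereONNX", targets: ["RunAnywhereONNX"])
    ],
    targets: [
        .binaryTarget(
            name: "RunAnywhereONNX",
            url: "https://github.com/RunAnywhereAI/runanywhere-binaries/releases/download/1.0.0/RunAnywhereONNX.xcframework.zip",
            checksum: "replace-with-sha256-from-checksums-txt"
        )
    ]
)
```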
The backends differ in use case and download size:

| Backend | Use Case | Size* |
|---|---|---|
| ONNX | General purpose, cross-platform models | ~50MB |
| CoreML | Apple Neural Engine optimization | ~5MB |
| TFLite | TensorFlow models, Android parity | ~20MB |
*Sizes are approximate and vary by version.
Recommendations (a conditional-import sketch follows the list):

- ONNX Runtime (recommended): best compatibility; supports most model formats
- CoreML: best performance on Apple devices; requires the CoreML model format
- TFLite: use when you need Android/iOS parity with TensorFlow models
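Because the backends ship as separate products, source that must build regardless of which backend is linked can select its import conditionally. A minimal sketch, assuming each product exposes a module of the same name:

```swift
// A sketch, assuming each product exposes a module named after itself.
#if canImport(RunAnywhereONNX)
import RunAnywhereONNX      // general-purpose default
#elseif canImport(RunAnywhereCoreML)
import RunAnywhereCoreML    // Apple Neural Engine path
#elseif canImport(RunAnywhereTFLite)
import RunAnywhereTFLite    // TensorFlow-parity path
#endif
```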
Requirements:

- iOS 15.0+
- macOS 12.0+
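If your app's deployment target is lower than these minimums, gate any use of the framework behind an availability check. A sketch:

```swift
// Only exercise on-device inference on OS versions the binaries support.
if #available(iOS 15.0, macOS 12.0, *) {
    // Safe to use the framework here.
} else {
    // Fall back gracefully on older systems.
}
```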
See CHANGELOG.md for release notes.
MIT License - see LICENSE for details.