Complete on-device image classification with zero network latency and maximum privacy
- MobileNetV2 Integration - Optimized CNN architecture running entirely on-device
- ONNX Runtime - High-performance inference engine for mobile platforms
- ImageNet Classification - Recognizes 1,000 object categories with high accuracy
- Production-Ready Preprocessing - ImageNet normalization and center crop
- Top-K Predictions - Configurable multi-class prediction output
- Confidence Thresholding - Smart fallback for low-confidence predictions
- Privacy-First - All inference happens locally, no data leaves the device
- Cross-Platform - Runs on Android and iOS with Flutter
- Preprocessing Pipeline: ImageNet-standard normalization (mean/std)
- Input Processing: Center crop to 224x224 model input size
- Output Processing: Softmax activation for probability distribution
- Confidence Filtering: Configurable threshold with fallback handling
- Top-K Selection: Returns the top N predictions with scores (see the sketch after this list)
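The last three stages boil down to a few lines of Dart. Below is a minimal sketch; the `Prediction` type, the fallback when nothing clears the threshold, and the exact signatures are illustrative assumptions, not necessarily the repository's implementation:

```dart
import 'dart:math' as math;

/// Hypothetical result type: a class index paired with its probability.
class Prediction {
  final int index;
  final double score;
  Prediction(this.index, this.score);
}

/// Numerically stable softmax: subtract the max logit before exponentiating.
List<double> softmax(List<double> logits) {
  final maxLogit = logits.reduce(math.max);
  final exps = logits.map((l) => math.exp(l - maxLogit)).toList();
  final sum = exps.reduce((a, b) => a + b);
  return exps.map((e) => e / sum).toList();
}

/// Sort by probability, keep the top [k], drop entries below [threshold],
/// and fall back to the single best guess if nothing clears the threshold.
List<Prediction> getTopKPredictions(
  List<double> probabilities, {
  int k = 5,
  double threshold = 0.1,
}) {
  final indexed = List.generate(
      probabilities.length, (i) => Prediction(i, probabilities[i]));
  indexed.sort((a, b) => b.score.compareTo(a.score));
  final top = indexed.take(k).where((p) => p.score >= threshold).toList();
  return top.isEmpty ? [indexed.first] : top;
}
```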
Check out my blog post, *Mastering On-Device ML in Flutter: A Guide to Softmax, Top-K, and Confidence Checks*, for more insights on implementing machine learning models efficiently.
```
┌─────────────────┐
│ Camera/Gallery  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Image Loading  │
│   & Decoding    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   Center Crop   │
│    (224x224)    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    ImageNet     │
│  Normalization  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   MobileNetV2   │
│   ONNX Model    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│     Softmax     │
│   Activation    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Top-K + Conf.  │
│    Filtering    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Display Results │
└─────────────────┘
```
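The first stage of the pipeline, getting a frame from the camera or gallery, is typically handled with the `image_picker` plugin. A minimal sketch (note: `image_picker` is an assumption here and is not in the dependency snippet below; `ImageClassifier` and the `Prediction` fields refer to the code shown later in this README):

```dart
import 'package:image_picker/image_picker.dart';

/// Let the user pick a photo from the gallery (or swap in ImageSource.camera)
/// and hand its file path to the classifier.
Future<void> pickAndClassify(ImageClassifier classifier) async {
  final picker = ImagePicker();
  final XFile? file = await picker.pickImage(source: ImageSource.gallery);
  if (file == null) return; // user cancelled the picker
  final predictions = await classifier.classify(file.path);
  for (final p in predictions) {
    print('class ${p.index}: ${(p.score * 100).toStringAsFixed(1)}%');
  }
}
```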
```yaml
dependencies:
  flutter:
    sdk: flutter
  onnxruntime: ^1.15.0  # Or latest version
  image: ^4.0.0
```

- Clone the repository

```bash
git clone https://github.com/PHom798/MobileNetV2-On-Device-Inference.git
cd mobilenetv2-flutter
```

- Install dependencies

```bash
flutter pub get
```

- Add the ONNX model
  - Download the MobileNetV2 ONNX model
  - Place it in `assets/models/mobilenetv2.onnx`
  - Add it to `pubspec.yaml`:

```yaml
flutter:
  assets:
    - assets/models/mobilenetv2.onnx
    - assets/labels/imagenet_classes.txt
```

- Run the app

```bash
flutter run
```

```dart
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/services.dart';
import 'package:image/image.dart';
import 'package:onnxruntime/onnxruntime.dart';

class ImageClassifier {
  late OrtSession session;

  Future<void> initialize() async {
    // Initialize the ONNX Runtime environment once per app session.
    OrtEnv.instance.init();
    // Load the model from the asset bundle and create an inference session.
    final modelBytes = await rootBundle.load('assets/models/mobilenetv2.onnx');
    session = OrtSession.fromBuffer(
      modelBytes.buffer.asUint8List(),
      OrtSessionOptions(),
    );
  }

  Future<List<Prediction>> classify(String imagePath) async {
    // 1. Load and preprocess image
    final preprocessed = await preprocessImage(imagePath);

    // 2. Run inference (MobileNetV2 expects a 1x3x224x224 float32 tensor)
    final inputs = {
      'input': OrtValueTensor.createTensorWithDataList(
        Float32List.fromList(preprocessed),
        [1, 3, 224, 224],
      ),
    };
    final outputs = await session.runAsync(OrtRunOptions(), inputs);

    // 3. Apply softmax to turn raw logits into probabilities
    final logits = outputs?[0]?.value as List<List<double>>;
    final probabilities = softmax(logits[0]);

    // 4. Get Top-K with confidence threshold
    return getTopKPredictions(probabilities, k: 5, threshold: 0.1);
  }
}

Future<List<double>> preprocessImage(String path) async {
  final img = decodeImage(File(path).readAsBytesSync())!;

  // Center crop to 224x224
  final cropped = copyCrop(
    img,
    x: (img.width - 224) ~/ 2,
    y: (img.height - 224) ~/ 2,
    width: 224,
    height: 224,
  );

  // ImageNet normalization
  final mean = [0.485, 0.456, 0.406];
  final std = [0.229, 0.224, 0.225];

  List<double> normalized = [];
  for (var c = 0; c < 3; c++) {
    for (var y = 0; y < 224; y++) {
      for (var x = 0; x < 224; x++) {
        final pixel = cropped.getPixel(x, y);
        final value = (pixel[c] / 255.0 - mean[c]) / std[c];
        normalized.add(value);
      }
    }
  }
  return normalized;
}
```

| Property | Value |
|---|---|
| Architecture | MobileNetV2 |
| Input Size | 224 × 224 × 3 |
| Parameters | ~3.5M |
| Model Size | ~14MB |
| Classes | 1000 (ImageNet) |
| Top-1 Accuracy | ~71.8% |
| Top-5 Accuracy | ~90.3% |
| Inference Time | 20-50ms (device dependent) |
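The latency figure varies a lot between devices; a quick way to check it on real hardware is to time the classifier directly. A rough sketch: it measures end-to-end latency (preprocessing included, so slower than pure model inference), and the warm-up and run counts are arbitrary choices:

```dart
/// Rough on-device latency check: warm up first, then average several runs.
Future<void> benchmark(ImageClassifier classifier, String imagePath) async {
  // Warm-up runs so one-time costs (session setup, caches) don't skew the numbers.
  for (var i = 0; i < 3; i++) {
    await classifier.classify(imagePath);
  }

  const runs = 20;
  final stopwatch = Stopwatch()..start();
  for (var i = 0; i < runs; i++) {
    await classifier.classify(imagePath);
  }
  stopwatch.stop();

  print('Average end-to-end latency: '
      '${(stopwatch.elapsedMilliseconds / runs).toStringAsFixed(1)} ms');
}
```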
```dart
class ModelConfig {
  static const int inputSize = 224;
  static const int topK = 5;
  static const double confidenceThreshold = 0.1;
  static const List<double> imagenetMean = [0.485, 0.456, 0.406];
  static const List<double> imagenetStd = [0.229, 0.224, 0.225];
}
```

- 100% On-Device - No internet required
- Zero Data Transmission - Images never leave device
- GDPR Compliant - No external data processing
- Low Latency - Instant results without network delay
- Offline First - Works without connectivity
- Efficient - Optimized for mobile CPUs
- No API Costs - Zero inference fees
- Scalable - No server infrastructure needed
- Sustainable - Reduced carbon footprint
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- MobileNetV2 - Sandler et al., 2018
- ONNX Runtime - Microsoft's cross-platform inference engine
- ImageNet - Dataset and pretrained weights
- Flutter Team - Amazing cross-platform framework
For questions, feedback, or collaborations:

