This repository provides instructions for testing, on CPU, the speed and accuracy of the ResNet models optimized by CoCoPIE. The base framework is OpenVINO.

All copyright in the optimized ResNet models (`resnet50_78top1`, `resnet50_76top1`, `resnet50_75top1`) referenced in this repository belongs to CoCoPIE Inc. No distribution or commercial use is permitted without the written consent of CoCoPIE Inc.
## Testing Platform

- OS: Ubuntu 20.04.6 LTS (release 20.04, codename focal)
- CPU: 12th Gen Intel(R) Core(TM) i7-12700K
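The details above can be reproduced on the test machine with standard command-line tools (a quick environment check, not part of the repository's scripts):

```
lsb_release -a              # distribution information
lscpu | grep 'Model name'   # CPU model string
```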
## Dataset

ImageNet, which is used for the accuracy measurements (see step 5 under How to Test).
## Models

- Baselines: `origin-small` is the official resnet50 model; `origin-large` is the official resnet101 model.
- Optimized by CoCoPIE (three variants with different accuracy-speed tradeoffs): `resnet50_78top1`, `resnet50_76top1`, `resnet50_75top1`.
## Speed and Accuracy

(Batch size: 1)
| Model | Top-1 Accuracy | Median Latency (ms) | Average Latency (ms) | Min Latency (ms) | Max Latency (ms) | FPS | FLOPs (G) | Params (M) |
|---|---|---|---|---|---|---|---|---|
| origin-small (resnet50) | 76.15% | 9.57 | 10.29 | 9.22 | 28.45 | 96.26 | 4.11 | 25.55 |
| origin-large (resnet101) | 77.37% | 17.92 | 18.65 | 17.37 | 30.93 | 53.35 | 7.83 | 44.55 |
| optimized (resnet50_78top1) | 78.09% | 8.51 | 8.91 | 8.18 | 18.25 | 111.17 | 3.38 | 21.81 |
| optimized (resnet50_76top1) | 76.24% | 4.07 | 4.28 | 3.82 | 12.17 | 228.85 | 1.41 | 11.14 |
| optimized (resnet50_75top1) | 75.17% | 3.53 | 3.71 | 3.26 | 10.87 | 263.25 | 1.21 | 9.10 |
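The parameter counts (the Params column) can be cross-checked from the ONNX files themselves. As one example, here is a minimal sketch that sums an ONNX model's weight initializers; the file name is illustrative:

```python
import onnx
from onnx import numpy_helper

def count_params_millions(onnx_path: str) -> float:
    """Sum the element counts of all weight initializers in an ONNX graph."""
    model = onnx.load(onnx_path)
    total = sum(numpy_helper.to_array(init).size
                for init in model.graph.initializer)
    return total / 1e6

# Illustrative file name; substitute a model downloaded in step 1 below.
print(f"{count_params_millions('resnet50_78top1.onnx'):.2f} M")
```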
## How to Test

1. Download the ResNet models.

2. Install OpenVINO by referring to the official installation guide, "Installing OpenVINO with pip". Then install onnx and onnxruntime:

   ```
   pip install onnx onnxruntime
   ```
3. Convert the ONNX models to the OpenVINO format:

   ```
   python convert_onnx2openvino.py --onnx-path <your onnx path> --output-path <your output openvino file path>
   ```

   Replace `<your onnx path>` with the path to your ONNX model and `<your output openvino file path>` with the desired output path for the converted model. A sketch of what such a conversion script can look like follows these steps.
4. Measure the model's performance with the `benchmark_app` tool offered by OpenVINO:

   ```
   benchmark_app -m <your openvino>.xml -d CPU -hint latency
   ```

   Replace `<your openvino>` with the path to your OpenVINO model XML file. Please refer to the official OpenVINO documentation for more detailed instructions and troubleshooting.
5. Test the accuracy of the ONNX model. Download ImageNet if you haven't already, then run:

   ```
   python inference_onnx.py --onnx-path <path_to_onnx_model> --data <path_to_data>
   ```

   where `<path_to_data>` is the root path of the ImageNet dataset. A sketch of the ONNX Runtime session setup follows the note below.
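As mentioned in step 3, the conversion is done by a script shipped with the repository. As a rough idea of what it involves, here is a minimal standalone sketch using the OpenVINO 2023.x Python API; the actual `convert_onnx2openvino.py` may differ in details:

```python
import argparse
import openvino as ov

# Mirror the command-line flags used in step 3.
parser = argparse.ArgumentParser(description="Convert an ONNX model to OpenVINO IR")
parser.add_argument("--onnx-path", required=True, help="input .onnx file")
parser.add_argument("--output-path", required=True, help="output .xml file")
args = parser.parse_args()

# Read the ONNX graph and convert it to an in-memory OpenVINO model.
model = ov.convert_model(args.onnx_path)

# Write the IR: <output>.xml plus a matching .bin weights file.
# Note that ov.save_model compresses weights to FP16 by default.
ov.save_model(model, args.output_path)
```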
Note on step 5: the `inference_onnx.py` script uses ONNX Runtime (rather than OpenVINO) with CPU support, and the number of threads is set to 12.
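Picking up that note, here is a minimal sketch of the described session setup plus a top-1 accuracy loop, using ONNX Runtime on CPU with 12 intra-op threads. The model path is illustrative, and the assumption that inputs arrive already preprocessed (NCHW float32, standard ImageNet normalization) is ours, not a statement about what `inference_onnx.py` actually does:

```python
import numpy as np
import onnxruntime as ort

# CPU-only session with 12 intra-op threads, as described in the note.
opts = ort.SessionOptions()
opts.intra_op_num_threads = 12
session = ort.InferenceSession("resnet50_78top1.onnx",  # illustrative path
                               sess_options=opts,
                               providers=["CPUExecutionProvider"])

def top1_accuracy(images: np.ndarray, labels: np.ndarray) -> float:
    """Top-1 accuracy over images already preprocessed to NCHW float32."""
    input_name = session.get_inputs()[0].name
    correct = 0
    for img, label in zip(images, labels):
        logits = session.run(None, {input_name: img[None]})[0]
        correct += int(np.argmax(logits) == label)
    return correct / len(labels)
```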
CoCoPIE 10/2023