## How to Use the Hardware Capabilities to Accelerate Inference:
To ensure an optimized inference time, leverage the features of the [MPLAB® XC-32 Compilers](https://www.microchip.com/en-us/tools-resources/develop/mplab-xc-compilers) by activating the third level of compilation (`-O3`) in your MPLAB® X project. Doing this ensures extended use of the hardware capabilities of the target device.
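
As a side note, XC32 is GCC-based, so the standard `__OPTIMIZE__` predefined macro is set whenever any optimization level above `-O0` is active. A minimal sketch of a compile-time guard built on that (it only flags `-O0` builds; it cannot distinguish `-O3` from lower levels):

```c
/* Compile-time guard (sketch): GCC-based compilers such as XC32 define
 * __OPTIMIZE__ when building with any optimization level above -O0.
 * This cannot confirm -O3 specifically; it only catches -O0 builds. */
#ifndef __OPTIMIZE__
#warning "Optimization is disabled; build this project at -O3 for best inference time."
#endif
```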
As shown in the example image below:
- **Clock Frequency:** 498 MHz
- **Compiler:** XC32 v4.30

**The model used for the benchmarking operations is a [`mobilenet-v2-tensorflow2-035-128-classification-v2`](https://www.kaggle.com/models/google/mobilenet-v2) model.**
### Inference Time
The following table shows the inference time for the example model converted and run with different optimization levels.

| Optimization Level | Inference Time (ms) | Notes/Flags Used |

Results may vary depending on compiler version, memory configuration, and other factors.
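
One way to capture comparable per-inference timings on the target is sketched below. This is illustrative only, not necessarily how the figures above were measured: the `model_invoke()` symbol is a hypothetical placeholder for the converted model's entry point, and the `SYS_TIME_*` calls assume the MPLAB Harmony v3 system time service is configured in the project.

```c
#include <stdint.h>
#include "definitions.h"   /* Harmony-generated header (assumed) exposing SYS_TIME_* */

/* Hypothetical entry point of the converted model (model.c); the actual
 * symbol name depends on how the model was generated. */
extern void model_invoke(const float *input, float *logits);

/* Measures one inference and returns the elapsed time in milliseconds,
 * assuming the Harmony SYS_TIME service is available in the project. */
static uint32_t time_one_inference(const float *input, float *logits)
{
    uint32_t start = SYS_TIME_CounterGet();
    model_invoke(input, logits);
    uint32_t elapsed = SYS_TIME_CounterGet() - start;
    return SYS_TIME_CountToMS(elapsed);
}
```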
### Model Output Consistency Metrics
The following table summarizes the results of comparing the logits (raw model outputs) produced by the TensorFlow example model and the compiled `model.c` file, running on the target.
This comparison was performed to validate the integrity and robustness of the model conversion and deployment process.
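
The per-image checks behind these metrics can be sketched as follows. The helper names are illustrative only, and the actual comparison tooling used for this repository may differ (for example, it may run host-side against logits captured from the board):

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Index of the largest logit (ties resolved to the lowest index). */
static size_t argmax(const float *x, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        if (x[i] > x[best]) {
            best = i;
        }
    }
    return best;
}

/* True when class `cls` is among the k largest entries of `x`
 * (tie handling is simplistic, which is adequate for a sketch). */
static bool in_top_k(const float *x, size_t n, size_t cls, size_t k)
{
    size_t higher = 0;
    for (size_t i = 0; i < n; i++) {
        if (x[i] > x[cls]) {
            higher++;
        }
    }
    return higher < k;
}

/* True when both logit vectors share the same set of top-k classes. */
static bool same_top_k(const float *a, const float *b, size_t n, size_t k)
{
    for (size_t i = 0; i < n; i++) {
        if (in_top_k(a, n, i, k) != in_top_k(b, n, i, k)) {
            return false;
        }
    }
    return true;
}

/* Compares one pair of logit vectors: TensorFlow reference vs. target output. */
static void compare_logits(const float *ref, const float *dut, size_t n)
{
    float max_abs = 0.0f, sum_abs = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float d = fabsf(ref[i] - dut[i]);
        sum_abs += d;
        if (d > max_abs) {
            max_abs = d;
        }
    }
    printf("max|diff|=%g  mean|diff|=%g  top1_match=%d  top5_match=%d\n",
           (double)max_abs, (double)(sum_abs / (float)n),
           argmax(ref, n) == argmax(dut, n),
           same_top_k(ref, dut, n, 5));
}
```

Aggregating these per-image results over the test set yields the percentage-style metrics reported in the table below.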
**All results below were obtained with the MPLAB Harmony v3 project compiled at the `-O3` optimization level.**

| Top-5 Agreement | 100.00% | Percentage of images where the top 5 predicted classes match |

**Interpretation:**
These results demonstrate that the converted model’s outputs are virtually identical to those of the original TensorFlow example model, with only negligible differences attributable to floating-point precision. Both Top-1 and Top-5 classification results are in perfect agreement, confirming the correctness and robustness of the deployment at the `-O3` optimization level.