
Commit fd2b6ed: Fixed formatting
Parent: 239e8ec


README.md

Lines changed: 7 additions & 5 deletions
@@ -22,7 +22,7 @@ It enables you to convert TensorFlow models to C code, ready for seamless integr
 - [Usage](#usage)
 - [Arguments](#arguments)
 - [Examples](#examples)
-- [How to use the hardware capabilities to accelerate inference](#how-to-use-the-hardware-capabilities-to-accelerate-inference)
+- [How to Use the Hardware Capabilities to Accelerate Inference](#how-to-use-the-hardware-capabilities-to-accelerate-inference)
 - [Benchmarking](#benchmarking)
 - [Inference Time](#inference-time)
 - [Model Output Consistency Metrics](#model-output-consistency-metrics)
@@ -91,7 +91,7 @@ python3 -m tf2mplabh3
 ```bash
 python3 -m tf2mplabh3 -m path/to/model -v 1
 ```
-## How to use the hardware capabilities to accelerate inference:
+## How to Use the Hardware Capabilities to Accelerate Inference:
 
 In order to ensure an optimized inference time, leverage the features of the [MPLAB® XC-32 Compilers](https://www.microchip.com/en-us/tools-resources/develop/mplab-xc-compilers)
 by activating the third level of compilation in your MPLAB® X project. Doing this ensures an extended use of the hardware capabilities of the
@@ -109,8 +109,10 @@ As shown in the example image below:
 - **Clock Frequency:** 498 MHz
 - **Compiler:** XC32 v4.30
 
+**The model used for the benchmarking operations is a [`mobilenet-v2-tensorflow2-035-128-classification-v2`](https://www.kaggle.com/models/google/mobilenet-v2) model.**
+
 ### Inference time
-The following table shows the inference time for the example model (`mobilenet-v2-tensorflow2-035-128-classification-v2`) converted and run with different optimization levels.
+The following table shows the inference time for the example model converted and run with different optimization levels.
 
 | Optimization Level | Inference Time (ms) | Notes/Flags Used |
 |--------------------|---------------------|--------------------|
@@ -126,7 +128,7 @@ Results may vary depending on compiler version, memory configuration, and other
 
 ### Model Output Consistency Metrics
 
-The following table summarizes the results of comparing the logits (raw model outputs) produced by the TensorFlow reference Model and the compiled C model file, running on the target.
+The following table summarizes the results of comparing the logits (raw model outputs) produced by the TensorFlow example model and the compiled `model.c` file, running on the target.
 This comparison was performed to validate the integrity and robustness of the model conversion and deployment process.
 
 **All results below were obtained with the MPLAB Harmony v3 compiled at the `-O3` optimization level.**
@@ -142,7 +144,7 @@ This comparison was performed to validate the integrity and robustness of the mo
 | Top-5 Agreement | 100.00% | Percentage of images where the top 5 predicted classes match |
 
 **Interpretation:**
-These results demonstrate that the embedded model’s outputs are virtually identical to the host reference, with only negligible differences attributable to floating-point precision. Both Top-1 and Top-5 classification results are in perfect agreement, confirming the correctness and robustness of the deployment at the `-O3` optimization level.
+These results demonstrate that the converted model’s outputs are virtually identical to the initial example, with only negligible differences attributable to floating-point precision. Both Top-1 and Top-5 classification results are in perfect agreement, confirming the correctness and robustness of the deployment at the `-O3` optimization level.
 
 
 ## License
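
The Top-1/Top-5 agreement and difference metrics referenced in this diff can be computed with a short host-side script. The sketch below is not part of the repository; the function name, metric keys, and array shapes (one row of logits per test image, for both the TensorFlow model and the logits read back from the target) are assumptions for illustration:

```python
import numpy as np

def consistency_metrics(host_logits: np.ndarray,
                        target_logits: np.ndarray,
                        k: int = 5) -> dict:
    """Compare two (num_images, num_classes) logit arrays."""
    abs_diff = np.abs(host_logits - target_logits)
    # Top-1: the highest-scoring class matches per image.
    top1 = np.mean(host_logits.argmax(axis=1) == target_logits.argmax(axis=1))
    # Top-k: the k highest-scoring classes match as a set per image.
    host_topk = np.argsort(host_logits, axis=1)[:, -k:]
    target_topk = np.argsort(target_logits, axis=1)[:, -k:]
    topk = np.mean([set(h) == set(t) for h, t in zip(host_topk, target_topk)])
    return {
        "max_abs_diff": float(abs_diff.max()),
        "mean_abs_diff": float(abs_diff.mean()),
        "top1_agreement_pct": 100.0 * float(top1),
        f"top{k}_agreement_pct": 100.0 * float(topk),
    }

# Identical logits give 0 difference and 100% agreement, matching the
# "perfect agreement" case reported in the table above.
logits = np.random.default_rng(0).normal(size=(8, 1001)).astype(np.float32)
print(consistency_metrics(logits, logits.copy()))
```

In practice the two arrays would come from running the same image set through the TensorFlow model on the host and through the converted C model on the board; small nonzero `max_abs_diff` values are expected from floating-point differences between the two environments.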
