TensorFlow to MPLAB® Harmony v3 Model Converter

Welcome to tf2mplabh3!

This project is proudly developed and maintained by Microchip Technology Inc.
It enables you to convert TensorFlow models to C code, ready for seamless integration into your MPLAB® Harmony v3 embedded projects.


What is this?

tf2mplabh3 is a command-line tool that automates the conversion of TensorFlow models to C code, making it easy to deploy machine learning models on embedded systems using MPLAB® Harmony v3.


Table of Contents

  • Features
  • Installation
  • Quick Start
  • Usage
  • Examples
  • Benchmarking
  • License
  • Acknowledgments

Features

  • Convert TensorFlow SavedModel to C code
  • Easy CLI interface
  • Verbosity control for logging
  • Ready for integration with MPLAB Harmony v3

Installation

Clone the repository and run the installation script:

git clone --recursive https://github.com/MicrochipTech/tf2mplabh3.git
cd tf2mplabh3
sudo ./install.sh

If you already cloned without --recursive, run:

git submodule update --init --recursive

Quick Start

Activate the virtual environment and run the tool, passing the path to your TensorFlow model as shown in the example below:

source .venv/bin/activate
python3 -m tf2mplabh3 -m examples/mobilenet-v2-tensorflow2-035-128-classification-v2
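The run above produces a C model file (examples/model.c by default). A minimal sketch of calling it from application code, assuming the generated file exposes an onnx2c-style entry function — the exact name and tensor shapes depend on your model, so check the generated file; the tiny `entry` stub below is a hypothetical stand-in for the generated code:

```c
/* Sketch: invoking the generated model from application code.
 * HYPOTHETICAL signature -- onnx2c derives the real one from the model's
 * input/output tensors; check the top of the generated model.c. */

/* Stand-in for the function emitted into model.c. */
void entry(const float input[4], float output[2])
{
    /* The generated code runs the whole network here; this stub just
     * copies two values so the sketch stays self-contained. */
    output[0] = input[0];
    output[1] = input[3];
}

/* Example call pattern from a Harmony application's main loop. */
void app_run_inference(const float input[4], float *class0, float *class1)
{
    float output[2];

    entry(input, output);   /* one inference pass */
    *class0 = output[0];
    *class1 = output[1];
}
```

In a real project you would add the generated model.c to your MPLAB® X project sources and fill the input array from your sensor or image data.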

Usage

python3 -m tf2mplabh3 [options]

Arguments

| Argument | Description | Default |
| --- | --- | --- |
| -m, --model | Path to TensorFlow SavedModel directory | examples/mobilenet-v2-tensorflow2-035-128-classification-v2 |
| -onnx, --onnx_model | Path to output ONNX intermediate model file | examples/model.onnx |
| -c_file, --c_model_file | Path to output C model file | examples/model.c |
| --tag | SavedModel tag (e.g., serve) | None |
| --signature_def | Signature def key (e.g., serving_default) | None |
| --onnx2c | Path to the onnx2c executable | c_deps/onnx2c/build/onnx2c |
| -v, --verbosity | Verbosity level (0=quiet, 1=all logs) | 0 |
| --overwrite | Overwrite existing ONNX or C model files | Not used |

Examples

Convert a model with default settings:

python3 -m tf2mplabh3

Convert a specific model and increase verbosity:

python3 -m tf2mplabh3 -m path/to/model -v 1
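
A fuller invocation combining the optional flags from the Arguments table (the model path and output locations below are placeholders, not files shipped with the repository):

```shell
python3 -m tf2mplabh3 \
  -m path/to/saved_model \
  --tag serve \
  --signature_def serving_default \
  -onnx out/model.onnx \
  -c_file out/model.c \
  --overwrite \
  -v 1
```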

How to use the hardware capabilities to accelerate inference

To minimize inference time, leverage the optimization features of the MPLAB® XC32 compiler by enabling optimization level 3 (-O3) in your MPLAB® X project. This makes fuller use of the hardware capabilities of the device.

As shown in the example image below:

MPLAB Screenshot

Benchmarking

The following table shows the inference time for the example model (mobilenet-v2-tensorflow2-035-128-classification-v2) converted and run with different optimization levels.

All benchmarks were performed on:

| Optimization Level | Inference Time (ms) | Notes/Flags Used |
| --- | --- | --- |
| None | 7536.600 | No optimization |
| -O1 | 1730.550 | Basic optimization |
| -O2 | 1368.060 | More optimization |
| -O3 | 1188.790 | Optimize for speed |
| -Os | 1382.300 | Optimize for size |

Note:
Inference time was measured as the average over 100 runs.
Results may vary depending on compiler version, memory configuration, and other system activity.
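
The averaging method above can be sketched on a host machine with clock() from the C standard library — the `run_inference()` below is a hypothetical stand-in for one pass of the generated model, and on the embedded target you would read a hardware timer instead:

```c
#include <time.h>

/* HYPOTHETICAL stand-in for one inference pass of the generated model. */
static void run_inference(void)
{
    volatile double acc = 0.0;
    for (int i = 0; i < 1000; i++)
        acc += (double)i * 0.5;   /* dummy work */
}

/* Average CPU time of `runs` inference passes, in milliseconds. */
double average_inference_ms(int runs)
{
    clock_t start = clock();
    for (int i = 0; i < runs; i++)
        run_inference();
    clock_t end = clock();

    double total_ms = 1000.0 * (double)(end - start) / CLOCKS_PER_SEC;
    return total_ms / (double)runs;
}
```

Averaging over many runs (100 in the table above) smooths out timer granularity and transient system activity.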

License

Apache-2.0 License

Acknowledgments
