This document has instructions for running MobileNet V1 FP32 inference using Intel-optimized TensorFlow.
Note that the ImageNet dataset is used in these MobileNet V1 examples. The dataset is required only for measuring accuracy; the benchmarking scripts do not need it. Download and preprocess the ImageNet dataset using the instructions here. After running the conversion script, you should have a directory with the ImageNet dataset in the TF records format. Set the `DATASET_DIR` environment variable to point to this directory when running MobileNet V1.
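As a quick sanity check after the conversion, you can verify that the directory contains the expected TF record shards. This is a minimal sketch; the `validation-*` file naming shown is the conventional output of the ImageNet conversion script and may differ in your setup:

```bash
# Point DATASET_DIR at the directory produced by the conversion script
# (the path shown here is a placeholder).
export DATASET_DIR=$HOME/imagenet_tfrecords

# List a few shards; the ImageNet conversion conventionally produces files
# named like validation-00000-of-00128. Adjust the pattern if yours differ.
ls ${DATASET_DIR} | grep validation | head -5
```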
| Script name | Description |
|---|---|
| `fp32_online_inference.sh` | Runs online inference (batch_size=1). |
| `fp32_batch_inference.sh` | Runs batch inference (batch_size=100). |
| `fp32_accuracy.sh` | Measures the model accuracy (batch_size=100). |
| `multi_instance_batch_inference.sh` | A multi-instance run that uses all the cores for each socket for each instance with a batch size of 56. Uses synthetic data if no `DATASET_DIR` is set. |
| `multi_instance_online_inference.sh` | A multi-instance run that uses 4 cores per instance with a batch size of 1. Uses synthetic data if no `DATASET_DIR` is set. |
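Because the multi-instance scripts fall back to synthetic data when `DATASET_DIR` is unset, you can smoke-test your setup without downloading ImageNet at all. The following is a sketch; downloading the pretrained model and setting `PRETRAINED_MODEL` and `OUTPUT_DIR` are described below:

```bash
# DATASET_DIR is intentionally left unset, so the script generates
# synthetic input data instead of reading ImageNet TF records.
export PRETRAINED_MODEL=$(pwd)/mobilenet_v1_1.0_224_frozen.pb
export OUTPUT_DIR=$(pwd)/logs
./quickstart/image_recognition/tensorflow/mobilenet_v1/inference/cpu/fp32/multi_instance_online_inference.sh
```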
Set up your environment using the instructions below, depending on whether you are using AI Kit:
| Setup using AI Kit on Linux | Setup without AI Kit on Linux | Setup without AI Kit on Windows |
|---|---|---|
| To run using AI Kit on Linux you will need: | To run without AI Kit on Linux you will need: | To run without AI Kit on Windows you will need: |
After finishing the setup above, download the pretrained model and set the `PRETRAINED_MODEL` environment variable to the path to the frozen graph.
If you run on Windows, please use a browser to download the pretrained model using the link below.
For Linux, run:
```bash
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/mobilenet_v1_1.0_224_frozen.pb
export PRETRAINED_MODEL=$(pwd)/mobilenet_v1_1.0_224_frozen.pb
```
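As a quick check that the download completed, confirm the frozen graph is present. The size mentioned in the comment is an estimate based on MobileNet V1's roughly 4.2M FP32 parameters, not an official checksum; verify against your own download:

```bash
# The frozen graph should exist and be a non-trivial protobuf file
# (roughly 17 MB for MobileNet V1 1.0 224 in FP32 -- an estimate only).
ls -lh ${PRETRAINED_MODEL}
```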
Set environment variables for the path to your `DATASET_DIR` for ImageNet and an `OUTPUT_DIR` where log files will be written. Navigate to your model zoo directory and then run a quickstart script on either Linux or Windows.
On Linux, run:

```bash
# cd to your model zoo directory
cd models

export PRETRAINED_MODEL=<path to the frozen graph downloaded above>
export DATASET_DIR=<path to the ImageNet TF records>
export OUTPUT_DIR=<directory where log files will be written>

./quickstart/image_recognition/tensorflow/mobilenet_v1/inference/cpu/fp32/<script name>.sh
```
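For example, a complete Linux invocation of the batch inference script might look like the following; all paths are illustrative placeholders, substitute your own locations:

```bash
cd models

# Illustrative paths -- replace with your actual model, dataset,
# and log locations.
export PRETRAINED_MODEL=$HOME/mobilenet_v1_1.0_224_frozen.pb
export DATASET_DIR=$HOME/imagenet_tfrecords
export OUTPUT_DIR=$HOME/mobilenet_v1_logs
mkdir -p ${OUTPUT_DIR}

./quickstart/image_recognition/tensorflow/mobilenet_v1/inference/cpu/fp32/fp32_batch_inference.sh
```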
On Windows, using `cmd.exe`, run:
```
# cd to your model zoo directory
cd models

set PRETRAINED_MODEL=<path to the frozen graph downloaded above>
set DATASET_DIR=<path to the ImageNet TF records>
set OUTPUT_DIR=<directory where log files will be written>

bash quickstart\image_recognition\tensorflow\mobilenet_v1\inference\cpu\fp32\<script name>.sh
```
Note: You may use `cygpath` to convert the Windows paths to Unix paths before setting the environment variables. As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert the Windows path to Unix as shown:

```
cygpath D:\user\ImageNet
/d/user/ImageNet
```

Then, set the `DATASET_DIR` environment variable: `set DATASET_DIR=/d/user/ImageNet`.
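If you are working in a Cygwin or Git Bash shell (where `cygpath` is available), the conversion can also be scripted for all three variables at once. This is a sketch; the Windows paths are placeholders:

```bash
# Convert each Windows path to its Unix form with cygpath before exporting.
export PRETRAINED_MODEL=$(cygpath "D:\user\mobilenet_v1_1.0_224_frozen.pb")
export DATASET_DIR=$(cygpath "D:\user\ImageNet")
export OUTPUT_DIR=$(cygpath "D:\user\logs")
```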
- To run more advanced use cases, see the instructions here for calling the `launch_benchmark.py` script directly.
- To run the model using docker, please see the oneContainer workload container: https://software.intel.com/content/www/us/en/develop/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html.