This document has instructions for running SSD-ResNet34 BFloat16 inference using Intel-optimized TensorFlow.
The SSD-ResNet34 accuracy scripts (bfloat16_accuracy.sh and bfloat16_accuracy_1200.sh) use the COCO validation dataset in the TF records format. See the COCO dataset document for instructions on downloading and preprocessing the COCO validation dataset.
The performance benchmarking scripts (bfloat16_inference.sh and bfloat16_inference_1200.sh) use synthetic data, so no dataset is required.
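As a quick sanity check before running the accuracy scripts, you can confirm the preprocessed TF records are where the scripts expect them. This is a minimal sketch that assumes `DATASET_DIR` is already set to your preprocessed COCO validation directory, as described in the run instructions below:

```
# List the COCO validation TF records files (assumes DATASET_DIR is set
# to the preprocessed COCO validation dataset directory, as described below)
ls ${DATASET_DIR}/validation-*-of-*
```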
| Script name | Description |
| --- | --- |
| `bfloat16_accuracy_1200.sh` | Runs an accuracy test using data in the TF records format with an input size of 1200x1200. |
| `bfloat16_accuracy.sh` | Runs an accuracy test using data in the TF records format with an input size of 300x300. |
| `bfloat16_inference_1200.sh` | Runs inference with a batch size of 1 using synthetic data with an input size of 1200x1200. Prints the time spent per batch and the total samples/second. |
| `bfloat16_inference.sh` | Runs inference with a batch size of 1 using synthetic data with an input size of 300x300. Prints the time spent per batch and the total samples/second. |
Set up your environment using the instructions below, depending on whether you are using AI Kit:

| Setup using AI Kit | Setup without AI Kit |
| --- | --- |
| To run using AI Kit you will need: | To run without AI Kit you will need: |
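For the non-AI Kit path, one possible environment setup is sketched below. The virtual environment name is arbitrary, and `intel-tensorflow` is the Intel-optimized TensorFlow package on PyPI; check the Model Zoo documentation for the exact package versions your release requires:

```
# A possible setup without AI Kit (a sketch; assumes Python 3, pip, and venv are available)
python3 -m venv ssd_resnet34_env          # environment name is arbitrary
source ssd_resnet34_env/bin/activate
pip install intel-tensorflow              # Intel-optimized TensorFlow from PyPI
```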
The TensorFlow models and benchmarks repos are used by SSD-ResNet34 BFloat16 inference. Clone those repos at the git SHAs specified below and set the `TF_MODELS_DIR` environment variable to point to the directory where the models repo was cloned.
```
# Clone the TensorFlow models and benchmarks repos
git clone --single-branch https://github.com/tensorflow/models.git tf_models
git clone --single-branch https://github.com/tensorflow/benchmarks.git ssd-resnet-benchmarks

# Check out the pinned SHA in the models repo and export TF_MODELS_DIR
cd tf_models
export TF_MODELS_DIR=$(pwd)
git checkout f505cecde2d8ebf6fe15f40fb8bc350b2b1ed5dc

# Check out the pinned SHA in the benchmarks repo
cd ../ssd-resnet-benchmarks
git checkout 509b9d288937216ca7069f31cfb22aaa7db6a4a7
cd ..
```
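Optionally, you can confirm that both repos are checked out at the expected commits before continuing; this check uses only standard git commands:

```
# Optional: verify the pinned commits (should print the SHAs from the checkout steps above)
git -C ${TF_MODELS_DIR} rev-parse HEAD
git -C ssd-resnet-benchmarks rev-parse HEAD
```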
Download the SSD-ResNet34 pretrained model for either the 300x300 or 1200x1200 input size, depending on which quickstart script you are going to run. Set the `PRETRAINED_MODEL` environment variable to the path of the pretrained model that you'll be using.
```
# ssd-resnet34 300x300
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/ssd_resnet34_fp32_bs1_pretrained_model.pb
export PRETRAINED_MODEL=$(pwd)/ssd_resnet34_fp32_bs1_pretrained_model.pb

# ssd-resnet34 1200x1200
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/ssd_resnet34_fp32_1200x1200_pretrained_model.pb
export PRETRAINED_MODEL=$(pwd)/ssd_resnet34_fp32_1200x1200_pretrained_model.pb
```
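Optionally, you can verify that the frozen graph downloaded fully and parses as a TensorFlow GraphDef. This is a minimal sketch that assumes a TensorFlow environment is active (such as the `intel-tensorflow` install from the setup step) and that `PRETRAINED_MODEL` is exported:

```
# Optional: check that the frozen graph parses (assumes PRETRAINED_MODEL is exported
# and a TensorFlow environment is active)
python -c "
import os
import tensorflow as tf

gd = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(os.environ['PRETRAINED_MODEL'], 'rb') as f:
    gd.ParseFromString(f.read())
print('Parsed frozen graph with', len(gd.node), 'nodes')
"
```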
After installing the prerequisites, cloning the models and benchmarks repos, and downloading the pretrained model, set environment variables for the path to your `DATASET_DIR` (for accuracy testing only; inference benchmarking uses synthetic data) and an `OUTPUT_DIR` where log files will be written. Once the required environment variables are set, you can run a quickstart script from the Model Zoo.
```
# cd to your model zoo directory
cd models

# set environment variables
export DATASET_DIR=<directory with the validation-*-of-* files (for accuracy testing only)>
export TF_MODELS_DIR=<path to the TensorFlow models repo>
export PRETRAINED_MODEL=<path to the 300x300 or 1200x1200 pretrained model pb file>
export OUTPUT_DIR=<directory where log files will be written>

# run a quickstart script
./quickstart/object_detection/tensorflow/ssd-resnet34/inference/cpu/bfloat16/<script name>.sh
```
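For example, to run the 300x300 synthetic-data benchmark from the script table above:

```
# Example: run the 300x300 synthetic-data benchmark (no DATASET_DIR needed)
./quickstart/object_detection/tensorflow/ssd-resnet34/inference/cpu/bfloat16/bfloat16_inference.sh
```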
- To run more advanced use cases, see the instructions here for calling the `launch_benchmark.py` script directly.
- To run the model using docker, please see the oneContainer workload container: https://software.intel.com/content/www/us/en/develop/articles/containers/ssd-resnet34-bfloat16-inference-tensorflow-container.html