An example of how to load a TensorFlow Object Detection API v2 model and serve predictions in C++.
Accompanying Medium post here
The current config uses the following dependencies (based on the TensorFlow tested build configurations). Check out the build-from-source configs for more details.
- Tensorflow 2.3.0
- CUDA 10.1
- cuDNN 7.6
- Bazel 3.1.0
- Protobuf 3.9.2
- OpenCV 4.3.0 (required only for the example)
- Build the Docker image

docker build . -t boraraktim/tensorflow2_cpp

- Download the object detection model from the TF object detection model zoo. We use efficientdet_d3_coco17_tpu-32 for this example and unpack it.
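The download step above can be scripted. The exact URL is an assumption based on the TF2 model zoo's naming scheme; verify it against the model zoo page before relying on it:

```shell
# Download and unpack the EfficientDet D3 archive from the TF2 model zoo.
# The URL below is assumed from the zoo's naming convention -- confirm it first.
wget http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d3_coco17_tpu-32.tar.gz
tar -xzf efficientdet_d3_coco17_tpu-32.tar.gz
```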
- Start the container and mount the model volume (Docker requires an absolute host path for `-v`, and the mount target below matches the directory structure and the example prediction command)

docker run --gpus all -it --rm -v $(pwd)/efficientdet_d3_coco17_tpu-32/:/object_detection/models/efficientdet_d3_coco17_tpu-32/ boraraktim/tensorflow2_cpp
Directory structure:

/object_detection/models/
|-- efficientdet_d3_coco17_tpu-32/
    |-- saved_model/
        |-- assets/
        |-- saved_model.pb
        |-- ...
- Build the project using CMake
root@8122f3e1dc5b:/object_detection# mkdir build
root@8122f3e1dc5b:/object_detection# cd build && cmake ..
root@8122f3e1dc5b:/object_detection/build# make
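For reference, the repo ships its own CMakeLists.txt; if you need to adapt it, a minimal sketch of what such a file could look like follows. All paths, versions, and target names here are assumptions, not the repo's actual configuration:

```cmake
cmake_minimum_required(VERSION 3.10)
project(object_detection)

set(CMAKE_CXX_STANDARD 14)

# OpenCV is only needed for the image I/O in the example.
find_package(OpenCV 4.3 REQUIRED)

# Assumes TensorFlow was built from source with Bazel and its headers and
# shared libraries were installed under /usr/local -- adjust to your image.
include_directories(${OpenCV_INCLUDE_DIRS} /usr/local/include)
link_directories(/usr/local/lib)

add_executable(get_prediction get_prediction.cpp)
target_link_libraries(get_prediction
    ${OpenCV_LIBS}
    tensorflow_cc
    tensorflow_framework)
```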
- Run a prediction

./get_prediction <path/to/saved_model> <path/to/image.jpg> <path/to/output.jpg>

For example:
root@8122f3e1dc5b:/object_detection/build# ./get_prediction ../models/efficientdet_d3_coco17_tpu-32/saved_model/ ../test-image-anoir-chafik-2_3c4dIFYFU-unsplash.jpg ../sample_prediction.jpg
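Under the hood, get_prediction loads the SavedModel through the TensorFlow C++ API. A minimal sketch of that load-and-infer flow is below; the tensor names and input size are assumptions typical of TF2 object-detection exports, not necessarily this repo's code. Check the real names with `saved_model_cli show --dir saved_model/ --all`:

```cpp
// Minimal sketch: load a TF2 SavedModel and run one inference.
// Tensor names below are assumptions; inspect the actual ones with
// saved_model_cli before using them.
#include <iostream>
#include <string>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/framework/tensor.h"

int main(int argc, char* argv[]) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <path/to/saved_model>\n";
    return 1;
  }
  const std::string model_dir = argv[1];

  // Load the SavedModel with the "serve" tag.
  tensorflow::SavedModelBundle bundle;
  tensorflow::Status status = tensorflow::LoadSavedModel(
      tensorflow::SessionOptions(), tensorflow::RunOptions(),
      model_dir, {tensorflow::kSavedModelTagServe}, &bundle);
  if (!status.ok()) {
    std::cerr << "Failed to load model: " << status.ToString() << "\n";
    return 1;
  }

  // The OD API serving signature takes a uint8 [1, H, W, 3] image tensor.
  tensorflow::Tensor input(tensorflow::DT_UINT8,
                           tensorflow::TensorShape({1, 640, 640, 3}));
  // ... fill input.flat<tensorflow::uint8>() with pixels (e.g. via OpenCV) ...

  std::vector<tensorflow::Tensor> outputs;
  status = bundle.session->Run(
      {{"serving_default_input_tensor:0", input}},  // assumed input name
      {"StatefulPartitionedCall:1",                 // detection_boxes (assumed)
       "StatefulPartitionedCall:4"},                // detection_scores (assumed)
      {}, &outputs);
  if (!status.ok()) {
    std::cerr << "Inference failed: " << status.ToString() << "\n";
    return 1;
  }
  std::cout << "Got " << outputs.size() << " output tensors\n";
  return 0;
}
```

The returned tensors (boxes, scores, classes) can then be thresholded and drawn onto the image with OpenCV before writing output.jpg.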
Test image from Unsplash.