openvino predict component and pipeline example #180 (Merged)

**README.md** (predict component)
# Inference component with OpenVINO inference engine

This component takes the following parameters:
* path to the model in Intermediate Representation format (.xml and .bin files)
* numpy file with the input dataset; the input shape must match the input definition of the used model (see the sketch below)
* path to the folder where the inference results in numpy format should be uploaded
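
A minimal sketch of preparing a compatible input file; the dataset size and the 224x224x3 image shape are assumptions, so use whatever the model input definition expects (the component itself transposes the data from NHWC to NCHW layout before inference):

```python
import numpy as np

# 8 random images in NHWC layout; replace with real preprocessed data
imgs = np.random.rand(8, 224, 224, 3).astype(np.float32)
np.save("imgs.npy", imgs)
```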

Inference performance details are included in the component logs.
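
For example, each processed batch logs a line like the following (the timing value is illustrative, not from a real run):

```
inference duration: 12.3 ms
```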

It is a generic component which can be used to process arbitrary data with any OpenVINO model.
It can also serve as an example of how to create a more customized version.
```bash
python3 predict.py --help
usage: predict.py [-h] [--model_bin MODEL_BIN] [--model_xml MODEL_XML]
                  [--input_numpy_file INPUT_NUMPY_FILE]
                  [--output_folder OUTPUT_FOLDER]

Component executing inference operation

optional arguments:
  -h, --help            show this help message and exit
  --model_bin MODEL_BIN
                        GCS or local path to model weights file (.bin)
  --model_xml MODEL_XML
                        GCS or local path to model graph (.xml)
  --input_numpy_file INPUT_NUMPY_FILE
                        GCS or local path to input dataset numpy file
  --output_folder OUTPUT_FOLDER
                        GCS or local path to results upload folder
```

## Building the docker image

```bash
docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -t <image name> .
```
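
To run it from a pipeline, the image also needs to be pushed to a docker registry reachable by the cluster; a minimal sketch, assuming `<image name>` includes the registry prefix:

```bash
docker push <image name>
```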

## Testing the image locally

```bash
COMMAND="python3 predict.py \
--model_bin gs://<path>/model.bin \
--model_xml gs://<path>/model.xml \
--input_numpy_file gs://<path>/datasets/imgs.npy \
--output_folder gs://<path>/outputs"
docker run --rm -it -e GOOGLE_APPLICATION_CREDENTIALS=/etc/credentials/gcp_key.json \
-v ${PWD}/key.json:/etc/credentials/gcp_key.json <image name> $COMMAND
```

**Dockerfile**
# Build stage: compile the OpenVINO (dldt) inference engine and its Python bindings
FROM ubuntu:16.04 as DEV
RUN apt-get update && apt-get install -y \
    curl \
    ca-certificates \
    python3-pip \
    python-dev \
    libgfortran3 \
    vim \
    build-essential \
    cmake \
    wget \
    libssl-dev \
    git \
    libboost-regex-dev \
    gcc-multilib \
    g++-multilib \
    libgtk2.0-dev \
    pkg-config \
    unzip \
    automake \
    libtool \
    autoconf \
    libpng12-dev \
    libcairo2-dev \
    libpango1.0-dev \
    libglib2.0-dev \
    libswscale-dev \
    libavcodec-dev \
    libavformat-dev \
    libgstreamer1.0-0 \
    gstreamer1.0-plugins-base \
    libusb-1.0-0-dev \
    libopenblas-dev
# Download the dldt 2018_R3 sources; the model optimizer is not needed in this image
RUN curl -L -o 2018_R3.tar.gz https://github.com/opencv/dldt/archive/2018_R3.tar.gz && \
    tar -zxf 2018_R3.tar.gz && \
    rm 2018_R3.tar.gz && \
    rm -Rf dldt-2018_R3/model-optimizer
WORKDIR dldt-2018_R3/inference-engine
# Build the inference engine with MKL-DNN enabled
RUN mkdir build && cd build && cmake -DGEMM=MKL -DENABLE_MKL_DNN=ON -DCMAKE_BUILD_TYPE=Release ..
RUN cd build && make -j4
# Build the Python (3.5) bindings for the inference engine
RUN pip3 install cython numpy && mkdir ie_bridges/python/build && cd ie_bridges/python/build && \
    cmake -DInferenceEngine_DIR=/dldt-2018_R3/inference-engine/build -DPYTHON_EXECUTABLE=`which python3` -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.5m.so -DPYTHON_INCLUDE_DIR=/usr/include/python3.5m .. && \
    make -j4

# Runtime stage: start from a clean base and copy in only the built artifacts
FROM ubuntu:16.04 as PROD
WORKDIR /predict

RUN apt-get update && apt-get install -y \
    curl \
    ca-certificates \
    python3-pip \
    python3-dev \
    vim \
    virtualenv

COPY --from=DEV /dldt-2018_R3/inference-engine/bin/intel64/Release/lib/*.so /usr/local/lib/
COPY --from=DEV /dldt-2018_R3/inference-engine/ie_bridges/python/build/ /usr/local/lib/
COPY --from=DEV /dldt-2018_R3/inference-engine/temp/mkltiny_lnx_20180511/lib/libiomp5.so /usr/local/lib/
ENV LD_LIBRARY_PATH=/usr/local/lib
ENV PYTHONPATH=/usr/local/lib
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY predict.py .

**predict.py**
from inference_engine import IENetwork, IEPlugin
import argparse
import numpy as np
from urllib.parse import urlparse
from google.cloud import storage
import datetime
from shutil import copy
import os

def get_local_file(source_path):
    """Download a GCS object to the working directory or pass through a local path."""
    parsed_path = urlparse(source_path)
    if parsed_path.scheme == "gs":
        bucket_name = parsed_path.netloc
        file_path = parsed_path.path[1:]
        file_name = os.path.split(parsed_path.path)[1]
        try:
            gs_client = storage.Client()
            bucket = gs_client.get_bucket(bucket_name)
            blob = bucket.blob(file_path)
            blob.download_to_filename(file_name)
        except Exception as er:
            print(er)
            return ""
    elif parsed_path.scheme == "":
        # in case of a local path just pass the input argument through
        if os.path.isfile(source_path):
            file_name = source_path
        else:
            print("file " + source_path + " is not accessible")
            file_name = ""
    return file_name


def upload_file(source_file, target_folder):
    """Upload a local file to a GCS folder or copy it to a local target folder."""
    parsed_path = urlparse(target_folder)
    if parsed_path.scheme == "gs":
        bucket_name = parsed_path.netloc
        print("bucket_name", bucket_name)
        folder_path = parsed_path.path[1:]
        print("folder path", folder_path)
        try:
            gs_client = storage.Client()
            bucket = gs_client.get_bucket(bucket_name)
            print(folder_path + "/" + source_file)
            blob = bucket.blob(folder_path + "/" + source_file)
            blob.upload_from_filename(source_file)
        except Exception as er:
            print(er)
            return False
    elif parsed_path.scheme == "":
        if target_folder != ".":
            copy(source_file, target_folder)
    return True


def main():
    parser = argparse.ArgumentParser(
        description='Component executing inference operation')
    parser.add_argument('--model_bin', type=str,
                        help='GCS or local path to model weights file (.bin)')
    parser.add_argument('--model_xml', type=str,
                        help='GCS or local path to model graph (.xml)')
    parser.add_argument('--input_numpy_file', type=str,
                        help='GCS or local path to input dataset numpy file')
    parser.add_argument('--output_folder', type=str,
                        help='GCS or local path to results upload folder')
    args = parser.parse_args()

    device = "CPU"
    plugin_dir = None

    # fetch the model and dataset files, downloading from GCS when needed
    model_xml = get_local_file(args.model_xml)
    print("model xml", model_xml)
    if model_xml == "":
        exit(1)
    model_bin = get_local_file(args.model_bin)
    print("model bin", model_bin)
    if model_bin == "":
        exit(1)
    input_numpy_file = get_local_file(args.input_numpy_file)
    print("input_numpy_file", input_numpy_file)
    if input_numpy_file == "":
        exit(1)

    cpu_extension = "/usr/local/lib/libcpu_extension.so"

    plugin = IEPlugin(device=device, plugin_dirs=plugin_dir)
    if cpu_extension and 'CPU' in device:
        plugin.add_cpu_extension(cpu_extension)

    print("inference engine:", model_xml, model_bin, device)

    # Read IR
    print("Reading IR...")
    net = IENetwork.from_ir(model=model_xml, weights=model_bin)

    input_blob = next(iter(net.inputs))
    output_blob = next(iter(net.outputs))
    print(output_blob)

    print("Loading IR to the plugin...")
    exec_net = plugin.load(network=net, num_requests=1)
    n, c, h, w = net.inputs[input_blob]
    imgs = np.load(input_numpy_file, mmap_mode='r', allow_pickle=False)
    # convert the dataset from NHWC to the NCHW layout expected by the model
    imgs = imgs.transpose((0, 3, 1, 2))
    print("loaded data", imgs.shape)

    # batch size is taken from the first dimension of the model input shape
    batch_size = net.inputs[input_blob][0]

    combined_results = {}  # dictionary storing results for all model outputs

    # iterate over the dataset in model-sized batches; a trailing partial batch is skipped
    for x in range(0, imgs.shape[0] - batch_size + 1, batch_size):
        img = imgs[x:(x + batch_size)]
        start_time = datetime.datetime.now()
        results = exec_net.infer(inputs={input_blob: img})
        end_time = datetime.datetime.now()
        duration = (end_time - start_time).total_seconds() * 1000
        print("inference duration:", duration, "ms")
        for output in results.keys():
            if output in combined_results:
                combined_results[output] = np.append(combined_results[output],
                                                     results[output], 0)
            else:
                combined_results[output] = results[output]

    # save the accumulated results for every model output and upload them
    for output in combined_results.keys():
        filename = output.replace("/", "_") + ".npy"
        np.save(filename, combined_results[output])
        status = upload_file(filename, args.output_folder)
        print("upload status", status)


if __name__ == "__main__":
    main()

**requirements.txt**
numpy
google-cloud-storage

**README.md** (predict pipeline)
# OpenVINO predict pipeline

This is an example of a simple one-step pipeline implementation that includes the OpenVINO predict component.

It executes a predict operation for a dataset in numpy format with a provided model in Intermediate Representation format.

Models in this format can be generated from trained models from various frameworks like TensorFlow, Caffe, MXNet and Kaldi; see the conversion sketch below.
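
Conversion is done with the OpenVINO Model Optimizer, which is outside the scope of this component; a minimal sketch for a TensorFlow frozen graph (flags vary by framework and model, and additional options such as input shapes may be required):

```bash
python3 mo.py --input_model frozen_inference_graph.pb --output_dir ./ir_model
```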

The dataset in the numpy file needs to match the shape of the provided model input.

The pipeline executes the predict operation and saves the results for each model output to a numpy file named
after the output tensor; see the sketch of reading such a file below.
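
A minimal sketch of inspecting the uploaded results; the file name `softmax.npy` is hypothetical and depends on the model's output tensor names:

```python
import numpy as np

# load one result file; its name is derived from the output tensor name
results = np.load("softmax.npy")
print(results.shape)
```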

*Note:* Executing this pipeline requires building the docker image according to the guidelines in the
[OpenVINO predict component doc](../../../components/openvino/predict).
The image name pushed to the docker registry should be configured in the pipeline script `numpy_predict`.

## Examples of the parameters

* `model_bin` - `gs://<path>/model.bin`
* `model_xml` - `gs://<path>/model.xml`
* `input_numpy_file` - `gs://<path>/datasets/imgs.npy`
* `output_folder` - `gs://<path>/outputs`

**numpy_predict.py**
import kfp.dsl as dsl


@dsl.pipeline(
    name='Prediction pipeline',
    description='Execute prediction operation for the dataset from numpy file'
)
def openvino_predict(
        model_xml: dsl.PipelineParam,
        model_bin: dsl.PipelineParam,
        input_numpy_file: dsl.PipelineParam,
        output_folder: dsl.PipelineParam):
    """A one-step pipeline."""
    dsl.ContainerOp(
        name='openvino-predict',
        image='<image name>',
        command=['python3', 'predict.py'],
        arguments=[
            '--model_bin', model_bin,
            '--model_xml', model_xml,
            '--input_numpy_file', input_numpy_file,
            '--output_folder', output_folder],
        file_outputs={})


if __name__ == '__main__':
    import kfp.compiler as compiler
    compiler.Compiler().compile(openvino_predict, __file__ + '.tar.gz')
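
To compile the pipeline package, the script can be run directly; a sketch, assuming the file is saved as `numpy_predict.py`:

```bash
# produces numpy_predict.py.tar.gz, which can then be uploaded in the Kubeflow Pipelines UI
python3 numpy_predict.py
```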