
Python Model API package: add main documentation #3268

Merged
Commits (35)
41398d2
add documentation to Model API
anzhella-pankratova Feb 18, 2022
94d3976
add readme.md
anzhella-pankratova Feb 18, 2022
3ec58fc
fix spelling
anzhella-pankratova Feb 21, 2022
6ff38da
add Model API section to object_detection_demo readme
anzhella-pankratova Feb 21, 2022
0b5e278
remove extra whitespace
anzhella-pankratova Feb 21, 2022
a0ac903
add bullet points
anzhella-pankratova Feb 21, 2022
3951fb2
Apply suggestions
anzhella-pankratova Feb 21, 2022
a78852d
modify the usage example
anzhella-pankratova Feb 21, 2022
6a8144a
Modify documentation
anzhella-pankratova Feb 22, 2022
31bf4d7
add extra module
anzhella-pankratova Feb 24, 2022
f543d57
don't check relative links for Model API package
anzhella-pankratova Feb 24, 2022
a4adf83
update check-documentation.py
anzhella-pankratova Feb 24, 2022
ee53da8
prepare-documentation for Python Model API
anzhella-pankratova Feb 24, 2022
ede0f45
suggestions
anzhella-pankratova Feb 25, 2022
19872b1
move the list of supported demos to demos/README.md
anzhella-pankratova Feb 25, 2022
384c1d9
remove list of demos, remove statement in documentation
anzhella-pankratova Feb 28, 2022
e1217b4
OMZ models instead of architectures, OV supported Python instead cert…
vladimir-dudnik Mar 1, 2022
52ed496
pull recent changes
anzhella-pankratova Mar 2, 2022
06d0e99
remove python in documentation, update package structure section
anzhella-pankratova Mar 2, 2022
5 changes: 5 additions & 0 deletions ci/check-documentation.py
@@ -78,6 +78,11 @@ def complain(message):
omz_github_url = 'https://github.com/openvinotoolkit/open_model_zoo/'

for md_path in sorted(all_md_files):

# skip the checks for Python Model API package
if 'model_api' in str(md_path):
continue

referenced_md_files = set()

md_path_rel = md_path.relative_to(OMZ_ROOT)
5 changes: 5 additions & 0 deletions ci/prepare-documentation.py
@@ -382,6 +382,11 @@ def main():
title='OMZ Model API OVMS adapter')
ovms_adapter_element.attrib[XML_ID_ATTRIBUTE] = 'omz_model_api_ovms_adapter'

model_api_element = add_page(output_root, navindex_element, id='omz_python_model_api',
path='demos/common/python/openvino/model_zoo/model_api/README.md',
title='OMZ Python Model API')
model_api_element.attrib[XML_ID_ATTRIBUTE] = 'omz_python_model_api'

for md_path in all_md_paths:
if md_path not in documentation_md_paths:
raise RuntimeError(f'{all_md_paths[md_path]}: '
30 changes: 1 addition & 29 deletions demos/README.md
@@ -195,35 +195,7 @@ cmake -A x64 <open_model_zoo>/demos

### <a name="model_api_installation"></a>Python\* model API installation

Python Model API with model wrappers and pipelines can be installed as a part of OpenVINO&trade; toolkit or from source.
Installation from source is as follows:

1. Install Python (version 3.6 or higher) and [setuptools](https://pypi.org/project/setuptools/).

2. Build the wheel with the following command:

```sh
python <omz_dir>/demos/common/python/setup.py bdist_wheel
```
The built wheel should appear in the `dist` folder.
Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`

3. Install the package in a clean environment with the `--force-reinstall` option:
```sh
python -m pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
```
Alternatively, instead of building the wheel you can use the following command inside `<omz_dir>/demos/common/python/` directory to build and install the package:
```sh
python -m pip install .
```

When the model API package is installed, you can import it as follows:
```sh
python -c "from openvino.model_zoo import model_api"
```

> **NOTE**: On Linux and macOS, you may need to type `python3` instead of `python`. You may also need to [install pip](https://pip.pypa.io/en/stable/installation/).
> For example, on Ubuntu execute the following command to get pip installed: `sudo apt install python3-pip`.
To run the Python demo applications, you need to install the Python* Model API package. Refer to the [Python* Model API documentation](common/python/openvino/model_zoo/model_api/README.md#installing-python-model-api-package) to learn about its installation.

### <a name="build_python_extensions"></a>Build the Native Python\* Extension Modules

150 changes: 150 additions & 0 deletions demos/common/python/openvino/model_zoo/model_api/README.md
@@ -0,0 +1,150 @@
# Python* Model API package

The Model API package is a set of wrapper classes for particular tasks and model architectures that simplifies data preprocessing and postprocessing as well as routine procedures (model loading, asynchronous execution, etc.).
An application feeds the model class with input data; the model then returns postprocessed output data in a user-friendly format.

## Package structure

The Python* Model API consists of three libraries:

* _adapters_ implements a common interface that allows Model API wrappers to be used with different executors (OpenVINO™ runtime, OVMS). See the [Model API Adapters](#model-api-adapters) section.
* _models_ implements the wrappers for particular model architectures. See the [Model API Wrappers](#model-api-wrappers) section.
* _pipelines_ implements pipelines that manage synchronous and asynchronous model execution. See the [Model API Pipelines](#model-api-pipelines) section.
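
As a quick orientation, the three libraries are exposed as subpackages. A sketch of how they are typically imported follows; `SSD`, `OpenvinoAdapter` and `OVMSAdapter` appear in the examples later in this document, while importing `AsyncPipeline` from `pipelines` at package level is an assumption of this sketch:

```python
# Sketch: the three model_api subpackages and representative classes.
from openvino.model_zoo.model_api.adapters import OpenvinoAdapter, OVMSAdapter  # executors
from openvino.model_zoo.model_api.models import SSD                             # wrappers
from openvino.model_zoo.model_api.pipelines import AsyncPipeline                # pipelines
```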

### Prerequisites

The package requires:
- Python (version 3.6 or higher)
- OpenVINO™ toolkit

If you build the Python* Model API package from source, you need to install the OpenVINO™ toolkit. There are two options:

Use the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).

Alternatively, you can install the OpenVINO Python\* package via the command:
```sh
pip install openvino
```

## Installing Python* Model API package

Use the following command to install the Python* Model API from source:
```sh
pip install <omz_dir>/demos/common/python
```

Alternatively, you can build the package as a wheel and install it. Follow the steps below:
1. Build the wheel.

```sh
python <omz_dir>/demos/common/python/setup.py bdist_wheel
```
The wheel should appear in the `dist` folder.
Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`

2. Install the package in a clean environment with the `--force-reinstall` option:
```sh
pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
```

To verify that the package is installed, run:
```sh
python -c "from openvino.model_zoo import model_api"
```

## Model API Wrappers

The Python* Model API package provides model wrappers that implement standardized preprocessing and postprocessing for each task type and encapsulate model-specific logic, allowing different models to be used in a unified manner inside an application.

The wrapper interface is simple and flexible, so custom wrappers covering new architectures and use cases can be created on top of it, as the sketch below illustrates.
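
For illustration, here is a minimal sketch of a custom wrapper. It assumes the `Model` base class exposes overridable `preprocess` and `postprocess` methods and that `preprocess` returns a `(dict_inputs, meta)` pair; that contract is an assumption of this sketch, not something this document specifies:

```python
# Hypothetical custom wrapper sketch; base-class contract is assumed.
from openvino.model_zoo.model_api.models import Model

class MyModel(Model):
    def preprocess(self, inputs):
        # map the application-level input to named network inputs;
        # `meta` carries data that postprocess() will need later
        dict_inputs = {"data": inputs}  # "data" is a placeholder input name
        meta = {"original_shape": inputs.shape}
        return dict_inputs, meta

    def postprocess(self, outputs, meta):
        # convert raw network outputs into a user-friendly result
        return outputs
```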

The following tasks can be solved with the provided wrappers:

| Task type | Model API wrappers |
|----------------------------|--------------------|
| Background Matting | <ul><li>`VideoBackgroundMatting`</li><li>`ImageMattingWithBackground`</li></ul> |
| Classification | <ul><li>`Classification`</li></ul> |
| Deblurring | <ul><li>`Deblurring`</li></ul> |
| Human Pose Estimation | <ul><li>`HpeAssociativeEmbedding`</li><li>`OpenPose`</li></ul> |
| Instance Segmentation | <ul><li>`MaskRCNNModel`</li><li>`YolactModel`</li></ul> |
| Monocular Depth Estimation | <ul><li> `MonoDepthModel`</li></ul> |
| Named Entity Recognition | <ul><li>`BertNamedEntityRecognition`</li></ul> |
| Object Detection | <ul><li>`CenterNet`</li><li>`DETR`</li><li>`CTPN`</li><li>`FaceBoxes`</li><li>`RetinaFace`</li><li>`RetinaFacePyTorch`</li><li>`SSD`</li><li>`UltraLightweightFaceDetection`</li><li>`YOLO`</li><li>`YoloV3ONNX`</li><li>`YoloV4`</li><li>`YOLOF`</li><li>`YOLOX`</li></ul> |
| Question Answering | <ul><li>`BertQuestionAnswering`</li></ul> |
| Salient Object Detection | <ul><li>`SalientObjectDetectionModel`</li></ul> |
| Semantic Segmentation | <ul><li>`SegmentationModel`</li></ul> |

## Model API Adapters

Model API wrappers are executor-agnostic: they do not implement model loading or inference themselves. Instead, a wrapper can be used with any executor for which an adapter class implements the common interface methods.

Currently, `OpenvinoAdapter` and `OVMSAdapter` are supported.

### OpenVINO Adapter

`OpenvinoAdapter` hides the OpenVINO™ toolkit API, allowing Model API wrappers to run models represented in the Intermediate Representation (IR) format.
It accepts a path to either an `.xml` or an `.onnx` model file.

### OpenVINO Model Server Adapter

`OVMSAdapter` hides the OpenVINO Model Server Python client API, allowing Model API wrappers to run models served by OVMS.

Refer to __[`OVMSAdapter`](adapters/ovms_adapter.md)__ to learn about running demos with OVMS.

To use the OpenVINO Model Server adapter, install the package with the `ovms` extra module:
```sh
pip install <omz_dir>/demos/common/python[ovms]
```
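
A hedged sketch of how the OVMS adapter could replace `OpenvinoAdapter` in the SSD example further below; the target string is a placeholder, so consult the `OVMSAdapter` document linked above for the exact target format:

```python
# Sketch: using OVMSAdapter instead of OpenvinoAdapter. The target
# "localhost:9000/models/ssd" is a placeholder — an OVMS instance must
# actually serve the model at that address under that name.
from openvino.model_zoo.model_api.adapters import OVMSAdapter
from openvino.model_zoo.model_api.models import SSD

model_adapter = OVMSAdapter("localhost:9000/models/ssd")
ssd_model = SSD(model_adapter, preload=True)
```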

## Model API Pipelines

Model API pipelines are high-level wrappers that manage input data submission and access to model results.
They submit data for inference, check whether a result is ready, and provide access to completed results.

Currently, `AsyncPipeline` is available; it handles the asynchronous execution of a single model.
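
As an illustration, here is a minimal sketch of asynchronous execution. The `submit_data`/`await_all`/`get_result` method names follow their usage in the OMZ Python demos and are an assumption of this sketch; `ssd_model` is the wrapper created in the synchronous example below:

```python
# Minimal sketch, assuming the AsyncPipeline interface used by OMZ demos:
# submit_data() schedules an inference request under the given id,
# get_result() returns the postprocessed result once that request completes.
import cv2
from openvino.model_zoo.model_api.pipelines import AsyncPipeline

frame = cv2.imread("sample.png")
pipeline = AsyncPipeline(ssd_model)           # ssd_model: see the SSD example below
pipeline.submit_data(frame, 0, {"frame": frame})
pipeline.await_all()                          # wait for submitted requests to finish
detections, frame_meta = pipeline.get_result(0)
```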

## Ready-to-use Model API solutions

To apply Model API wrappers in a custom application, study the example below, which demonstrates a common Python* Model API usage scenario.

In this example, the SSD architecture is used to predict bounding boxes on the input image `sample.png`. The model is executed via `OpenvinoAdapter`, so we pass the path to the model's `.xml` file.

Once the SSD model wrapper instance is created, predictions can be obtained in a single call: `ssd_model(input_data)`. The wrapper runs preprocessing, synchronous inference on the OpenVINO™ toolkit side, and postprocessing.

```python
import cv2
# import model wrapper class
from openvino.model_zoo.model_api.models import SSD
# import inference adapter and helper for runtime setup
from openvino.model_zoo.model_api.adapters import OpenvinoAdapter, create_core


# read input image using opencv
input_data = cv2.imread("sample.png")

# define the path to mobilenet-ssd model in IR format
model_path = "public/mobilenet-ssd/FP32/mobilenet-ssd.xml"

# create adapter for OpenVINO™ runtime, pass the model path
model_adapter = OpenvinoAdapter(create_core(), model_path, device="CPU")

# create model API wrapper for SSD architecture
# preload=True loads the model on CPU inside the adapter
ssd_model = SSD(model_adapter, preload=True)

# apply input preprocessing, sync inference, model output postprocessing
results = ssd_model(input_data)
```

---
**Collaborator:**

We won't ever verify the snippet works. To me, that's a strong reason to delete it. You should refer to a demo instead.

**Contributor Author:**

If the majority prefers to delete it, I will do it.

**Collaborator:**

This is not about voting.

1. This is about explaining why your solution is correct.
2. Even if you insist on conducting a poll, your voice won't count. My voice won't count either. Only the voice of the person who is responsible for OMZ matters, which is @vladimir-dudnik's.

**Contributor Author:**

My point is that the demos contain only complex cases with asynchronous model execution, which is not the only usage scenario. We should provide an example of a simple synchronous model call somewhere.

I didn't get the point about verifying that the snippet works. Many packages have documentation with API examples. The package is going to be updated with new releases, and with each new release the documentation, including the snippet, will be updated as well.

**Collaborator:**

You can write a dedicated sample and refer to it. You will need to cover your lib with tests anyway; the sample could be a part of the tests.

> I didn't get the point about verifying that the snippet works. Many packages have documentation with API examples. The package is going to be updated with new releases, and with each new release the documentation, including the snippet, will be updated as well.

Such packages have people whose work is to continuously check that the examples stay consistent. We don't have such people. Packages that don't do that usually end up with broken examples.

**Collaborator:**

Here is the solution: keep this section, but remove it after tests are added.

**Contributor Author:**

Okay.

---


To study more complex scenarios, refer to the [Open Model Zoo Python* demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), where asynchronous inference is applied.

The list of Open Model Zoo demos with Model API support:
- [BERT Named Entity Recognition Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/bert_named_entity_recognition_demo/python)
- [BERT Question Answering Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/bert_question_answering_demo/python)
- [BERT Question Answering Embedding Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/bert_question_answering_embedding_demo/python)
- [Classification Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/classification_demo/python)
- [Image Deblurring Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/deblurring_demo/python)
- [Human Pose Estimation Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/human_pose_estimation_demo/python)
- [Instance Segmentation Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/instance_segmentation_demo/python)
- [MonoDepth Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/monodepth_demo/python)
- [Object Detection Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/object_detection_demo/python)
- [Image Segmentation Python* Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/segmentation_demo/python)
4 changes: 4 additions & 0 deletions demos/common/python/setup.py
@@ -30,6 +30,9 @@
with open(SETUP_DIR / 'requirements.txt') as f:
required = f.read().splitlines()

with open(SETUP_DIR / 'requirements_ovms.txt') as f:
ovms_required = f.read().splitlines()

packages = find_packages(str(SETUP_DIR))
package_dir = {'openvino': str(SETUP_DIR / 'openvino')}

@@ -47,4 +50,5 @@
packages=packages,
package_dir=package_dir,
install_requires=required,
extras_require={'ovms': ovms_required}
)
6 changes: 6 additions & 0 deletions demos/object_detection_demo/python/README.md
@@ -39,6 +39,12 @@ Async API operates with a notion of the "Infer Request" that encapsulates the in

> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).

## Model API

The demo utilizes model wrappers, adapters and pipelines from [Python* Model API](../../common/python/openvino/model_zoo/model_api/README.md).

The generalized wrapper interface, together with its unified result representation, allows a single demo to support multiple different object detection model topologies.

## Preparing to Run

For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).