1 change: 1 addition & 0 deletions demos/age_gender_recognition/python/README.md
@@ -19,6 +19,7 @@ curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_z
:::{dropdown} **Deploying with Docker**
Start the OVMS container with the image pulled in the previous step and mount the `model` directory:
```bash
chmod -R 755 model
docker run --rm -d -u $(id -u):$(id -g) -v $(pwd)/model:/models/age_gender -p 9000:9000 -p 8000:8000 openvino/model_server:latest --model_path /models/age_gender --model_name age_gender --port 9000 --rest_port 8000
```
:::
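
Once the container is running, the deployment can be sanity-checked from the host. A minimal sketch, assuming the REST port mapping above (`-p 8000:8000`); the endpoint follows the TFS-compatible model status API and the response fields may differ between server versions:

```bash
# Expect the served version of age_gender to report an AVAILABLE state
curl http://localhost:8000/v1/models/age_gender
```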
3 changes: 2 additions & 1 deletion demos/benchmark/cpp/README.md
@@ -19,7 +19,8 @@ The application can be used with any model or pipeline served in OVMS, by reques
### Prepare the model
Start OVMS with resnet50-binary model:
```bash
curl -L --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin -o resnet50-binary/1/model.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml -o resnet50-binary/1/model.xml
curl -L --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin -o resnet50-binary/1/model.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml -o resnet50-binary/1/model.xml
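# make the downloaded model directory world-readable so the server container (running as a non-root user) can load it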
chmod -R 755 resnet50-binary
```

### Prepare the server
2 changes: 1 addition & 1 deletion demos/bert_question_answering/python/Dockerfile
@@ -15,7 +15,7 @@
#


FROM ubuntu:20.04
FROM ubuntu:22.04
RUN apt update && apt install -y python3-pip && apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /bert
COPY bert_question_answering.py tokens_bert.py html_reader.py requirements.txt ./
5 changes: 3 additions & 2 deletions demos/face_detection/python/README.md
@@ -71,14 +71,15 @@ optional arguments:
Start the OVMS service locally:

```console
curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/1/face-detection-retail-0004.bin
curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/face-detection-retail-0004/FP32/face-detection-retail-0004.xml -o model/1/face-detection-retail-0004.xml
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/1/face-detection-retail-0004.bin
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/face-detection-retail-0004/FP32/face-detection-retail-0004.xml -o model/1/face-detection-retail-0004.xml
```

## Deploying OVMS

:::{dropdown} **Deploying with Docker**
```bash
chmod -R 755 model
docker run --rm -d -u $(id -u):$(id -g) -v `pwd`/model:/models -p 9000:9000 openvino/model_server:latest --model_path /models --model_name face-detection --port 9000 --shape auto
```
:::
3 changes: 2 additions & 1 deletion demos/horizontal_text_detection/python/README.md
@@ -7,7 +7,7 @@ The client can work efficiently also over slow internet connection with long lat
### Download horizontal text detection model from OpenVINO Model Zoo

```bash
curl -L --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/horizontal-text-detection-0001/FP32/horizontal-text-detection-0001.bin -o horizontal-text-detection-0001/1/horizontal-text-detection-0001.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/horizontal-text-detection-0001/FP32/horizontal-text-detection-0001.xml -o horizontal-text-detection-0001/1/horizontal-text-detection-0001.xml
curl -L --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/horizontal-text-detection-0001/FP32/horizontal-text-detection-0001.bin -o horizontal-text-detection-0001/1/horizontal-text-detection-0001.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/horizontal-text-detection-0001/FP32/horizontal-text-detection-0001.xml -o horizontal-text-detection-0001/1/horizontal-text-detection-0001.xml
```

```bash
@@ -20,6 +20,7 @@ horizontal-text-detection-0001

### Start the OVMS container:
```bash
chmod -R 755 horizontal-text-detection-0001
docker run -d -u $(id -u):$(id -g) -v $(pwd)/horizontal-text-detection-0001:/model -p 9000:9000 openvino/model_server:latest \
--model_path /model --model_name text --port 9000 --layout NHWC:NCHW
```
3 changes: 2 additions & 1 deletion demos/image_classification/cpp/README.md
@@ -18,7 +18,7 @@ make

Start OVMS with resnet50-binary model:
```bash
curl -L --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin -o resnet50-binary/1/model.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml -o resnet50-binary/1/model.xml
curl -L --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin -o resnet50-binary/1/model.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml -o resnet50-binary/1/model.xml
```

# Client requesting prediction synchronously
@@ -30,6 +30,7 @@ The client also tests server responses for accuracy.

## Prepare the server
```bash
chmod -R 755 resnet50-binary
docker run -d -u $(id -u):$(id -g) -v $(pwd)/resnet50-binary:/model -p 9001:9001 openvino/model_server:latest \
--model_path /model --model_name resnet --port 9001 --layout NHWC:NCHW
```
19 changes: 14 additions & 5 deletions demos/mediapipe/holistic_tracking/README.md
@@ -50,17 +50,25 @@ The models setup should look like this
│   └── 1
│   └── hand_recrop.tflite
├── holistic_tracking.pbtxt
├── iris_landmark
│   └── 1
│   └── iris_landmark.tflite
├── mediapipe
│   └── modules
│   └── hand_landmark
│   └── handedness.txt
├── mediapipe_holistic_tracking.py
├── palm_detection_full
│   └── 1
│   └── palm_detection_full.tflite
├── pose_detection
│   └── 1
│   └── pose_detection.tflite
└── pose_landmark_full
    └── 1
    └── pose_landmark_full.tflite
├── pose_landmark_full
│   └── 1
│   └── pose_landmark_full.tflite
├── README.md
└── requirements.txt
```
## Server Deployment
:::{dropdown} **Deploying with Docker**
@@ -71,6 +79,7 @@ docker pull openvino/model_server:latest

```
```bash
chmod -R 755 .
docker run -d -v $PWD/mediapipe:/mediapipe -v $PWD:/models -p 9000:9000 openvino/model_server:latest --config_path /models/config_holistic.json --port 9000
```
:::
1 change: 1 addition & 0 deletions demos/mediapipe/image_classification/README.md
@@ -60,6 +60,7 @@ curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_z
## Server Deployment
:::{dropdown} **Deploying with Docker**
```bash
chmod -R 755 resnetMediapipe
docker run -d -v $PWD:/mediapipe -p 9000:9000 openvino/model_server:latest --config_path /mediapipe/config.json --port 9000
```
:::
1 change: 1 addition & 0 deletions demos/mediapipe/object_detection/README.md
@@ -28,6 +28,7 @@ python mediapipe_object_detection.py --download_models
## Server Deployment
:::{dropdown} **Deploying with Docker**
```bash
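# point graph.pbtxt at the label map's path inside the container (the current directory is mounted at /demo below)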
sed -i 's;ssdlite_object_detection_labelmap.txt;/demo/ssdlite_object_detection_labelmap.txt;g' graph.pbtxt
docker run -d -v $PWD:/demo -p 9000:9000 openvino/model_server:latest --config_path /demo/config.json --port 9000
```
:::
4 changes: 2 additions & 2 deletions demos/optical_character_recognition/python/README.md
@@ -93,8 +93,8 @@ Converted east-resnet50 model will have the following interface:
### Text-recognition model
Download the [text-recognition](https://github.com/openvinotoolkit/open_model_zoo/tree/2022.1.0/models/intel/text-recognition-0014) model and store it in the `${PWD}/text-recognition/1` folder.
```bash
curl -L --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/text-recognition-0014/FP32/text-recognition-0014.bin -o text-recognition/1/model.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/text-recognition-0014/FP32/text-recognition-0014.xml -o text-recognition/1/model.xml
chmod -R 755 text-recognition/
curl -L --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/text-recognition-0014/FP32/text-recognition-0014.bin -o text-recognition/1/model.bin https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/text-recognition-0014/FP32/text-recognition-0014.xml -o text-recognition/1/model.xml
chmod -R 755 text-recognition
```

The text-recognition model will have the following interface:
5 changes: 3 additions & 2 deletions demos/person_vehicle_bike_detection/python/README.md
@@ -10,14 +10,15 @@ The purpose of this demo is to show how to send data from multiple sources (came

## Prepare model files
```console
curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.bin -o model/1/person-vehicle-bike-detection-crossroad-0078.bin
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.bin -o model/1/person-vehicle-bike-detection-crossroad-0078.bin

curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.xml -o model/1/person-vehicle-bike-detection-crossroad-0078.xml
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.xml -o model/1/person-vehicle-bike-detection-crossroad-0078.xml
```

## Server Deployment
:::{dropdown} **Deploying with Docker**
```bash
chmod -R 755 model
docker run -d -v `pwd`/model:/models -p 9000:9000 openvino/model_server:latest --model_path /models --model_name person-vehicle-detection --port 9000 --shape auto
```
:::
1 change: 1 addition & 0 deletions demos/python_demos/Dockerfile.ubuntu
@@ -20,6 +20,7 @@ ENV LD_LIBRARY_PATH=/ovms/lib
ENV PYTHONPATH=/ovms/lib/python
RUN apt update && apt install -y python3-pip git
COPY requirements.txt .
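# PEP 668: let pip install packages into the system Python of the base image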
ENV PIP_BREAK_SYSTEM_PACKAGES=1
RUN pip3 install -r requirements.txt
USER ovms
ENTRYPOINT [ "/ovms/bin/ovms" ]
13 changes: 7 additions & 6 deletions demos/single_face_analysis_pipeline/python/README.md
@@ -32,11 +32,10 @@ You can prepare the workspace that contains all the above by just running
You can prepare the workspace that contains all the above by running

```console
curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.xml -o workspace/age-gender-recognition-retail-0013/1/age-gender-recognition-retail-0013.xml
curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.bin -o workspace/age-gender-recognition-retail-0013/1/age-gender-recognition-retail-0013.bin
curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml -o workspace/emotions-recognition-retail-0003/1/emotions-recognition-retail-0003.xml
curl --create-dir https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.bin -o workspace/emotions-recognition-retail-0003/1/emotions-recognition-retail-0003.bin
cp config.json workspace/.
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.xml -o workspace/age-gender-recognition-retail-0013/1/age-gender-recognition-retail-0013.xml
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.bin -o workspace/age-gender-recognition-retail-0013/1/age-gender-recognition-retail-0013.bin
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml -o workspace/emotions-recognition-retail-0003/1/emotions-recognition-retail-0003.xml
curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.bin -o workspace/emotions-recognition-retail-0003/1/emotions-recognition-retail-0003.bin
```

### Final directory structure
@@ -48,7 +47,6 @@ workspace/
│   └── 1
│   ├── age-gender-recognition-retail-0013.bin
│   └── age-gender-recognition-retail-0013.xml
├── config.json
└── emotions-recognition-retail-0003
└── 1
├── emotions-recognition-retail-0003.bin
@@ -58,6 +56,8 @@ workspace/
## Server Deployment
:::{dropdown} **Deploying with Docker**
```bash
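# place the pipeline config inside the mounted workspace so the server finds it at /workspace/config.json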
cp config.json workspace/.
chmod -R 755 workspace
docker run -p 9000:9000 -d -v ${PWD}/workspace:/workspace openvino/model_server --config_path /workspace/config.json --port 9000
```
:::
@@ -70,6 +70,7 @@ Assuming you have unpacked model server package, make sure to:
as mentioned in [deployment guide](../../../docs/deploying_server_baremetal.md), in every new shell that will start OpenVINO Model Server.
```bat
cd demos\single_face_analysis_pipeline\python
copy config.json workspace
ovms --config_path workspace/config.json --port 9001
```
:::
4 changes: 2 additions & 2 deletions demos/using_onnx_model/python/Makefile
@@ -20,13 +20,13 @@ default: client_preprocessing

client_preprocessing:
# Download ONNX ResNet50 model
curl --fail -L --create-dir https://github.com/onnx/models/raw/main/validated/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx -o workspace/resnet50-onnx/1/resnet50-caffe2-v1-9.onnx
curl --fail -L --create-dirs https://github.com/onnx/models/raw/main/validated/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx -o workspace/resnet50-onnx/1/resnet50-caffe2-v1-9.onnx

BASE_OS?=ubuntu

server_preprocessing:
# Download ONNX ResNet50 model
curl --fail -L --create-dir https://github.com/onnx/models/raw/main/validated/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx -o workspace/resnet50-onnx/1/resnet50-caffe2-v1-9.onnx
curl --fail -L --create-dirs https://github.com/onnx/models/raw/main/validated/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx -o workspace/resnet50-onnx/1/resnet50-caffe2-v1-9.onnx
# Build custom node
cd ../../../src/custom_nodes && \
make BASE_OS=${BASE_OS} NODES=image_transformation && \
5 changes: 3 additions & 2 deletions docs/ovms_quickstart.md
@@ -50,8 +50,8 @@ docker pull openvino/model_server:latest
Store components of the model in the `model/1` directory. Here are example commands pulling an object detection model from Kaggle:

```console
curl --create-dir https://www.kaggle.com/api/v1/models/tensorflow/faster-rcnn-resnet-v1/tensorFlow2/faster-rcnn-resnet50-v1-640x640/1/download -o model/1/1.tar.gz
tar xzf 1.tar.gz -C model/1
curl -L --create-dirs https://www.kaggle.com/api/v1/models/tensorflow/faster-rcnn-resnet-v1/tensorFlow2/faster-rcnn-resnet50-v1-640x640/1/download -o model/1/1.tar.gz
tar xzf model/1/1.tar.gz -C model/1
```

OpenVINO Model Server expects a particular folder structure for models. In this case, the `model` directory has the following content:
@@ -73,6 +73,7 @@ For more information about the directory structure and how to deploy multiple mo
### Step 4: Start the Model Server
:::{dropdown} **Deploying with Docker**
```bash
chmod -R 755 model
docker run -d -u $(id -u) --rm -v ${PWD}/model:/model -p 9000:9000 openvino/model_server:latest --model_name faster_rcnn --model_path /model --port 9000
```
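
Only the gRPC port is published here, so the quickest sanity check is the server log. A rough sketch, assuming the container started above is the most recent one; the exact log wording may vary between releases:

```bash
# The faster_rcnn model should be reported as loaded and AVAILABLE in the server logs
docker logs $(docker ps -lq) 2>&1 | grep -i faster_rcnn
```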

2 changes: 1 addition & 1 deletion third_party/opencv/install_opencv.sh
@@ -42,7 +42,7 @@ fi
#===================================================================================================
# OpenCV installation

if [ "$os" == "ubuntu24.04" ] || [ "$os" == "ubuntu22.04" ] ; then
if [ "$os" == "ubuntu24.04" ] || [ "$os" == "ubuntu22.04" ] || [ "$os" == "ubuntu20.04" ] ; then
export DEBIAN_FRONTEND=noninteractive
apt update && apt install -y build-essential git cmake \
&& rm -rf /var/lib/apt/lists/*