Commit b6c59ad

Merge branch 'branch-22.09' of https://github.com/nv-morpheus/Morpheus into starter-dfp-docstring

efajardo-nv committed Sep 27, 2022
2 parents: 1edfffd + 85c05fd
Showing 25 changed files with 671 additions and 72 deletions.
2 changes: 1 addition & 1 deletion README.md

@@ -104,7 +104,7 @@ Use the following command to launch a Docker container for Triton loading all of
 ```bash
 docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
 -v $PWD/models:/models \
-nvcr.io/nvidia/tritonserver:22.06-py3 \
+nvcr.io/nvidia/tritonserver:22.08-py3 \
 tritonserver --model-repository=/models/triton-model-repo \
 --exit-on-error=false \
 --log-info=true \
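A usage note on the bumped tag: the 22.08 image can be pulled ahead of time (the same command the example READMEs below use), so the `docker run` above does not block on the download.

```bash
# Pre-pull the updated Triton image referenced by the new command.
docker pull nvcr.io/nvidia/tritonserver:22.08-py3
```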
3 changes: 1 addition & 2 deletions cmake/dependencies.cmake

@@ -19,7 +19,6 @@ list(APPEND CMAKE_MESSAGE_CONTEXT "dep")
 set(CPM_SOURCE_CACHE "${CMAKE_SOURCE_DIR}/.cache/cpm")
 # Prevent cpm_init from trying to tell us where to put cpm.cmake
 include(get_cpm)
-rapids_cpm_init(OVERRIDE "${CMAKE_CURRENT_SOURCE_DIR}/cmake/deps/rapids_cpm_package_overrides.json")
 
 # Cant use rapids_cpm_init() for now since the `rapids_cpm_download()` creates a
 # new scope when importing CPM. Manually do the other commands and import CPM on

@@ -84,7 +83,7 @@ endif()
 
 # libcudacxx -- get an explicit lubcudacxx build, matx tries to pull a tag that doesn't exist.
 # =========
-set(LIBCUDACXX_VERSION "1.6.0" CACHE STRING "Version of libcudacxx to use")
+set(LIBCUDACXX_VERSION "1.8.0" CACHE STRING "Version of libcudacxx to use")
 include(deps/Configure_libcudacxx)
 
 # matx
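An aside on the version bump (not part of the diff): because `LIBCUDACXX_VERSION` is declared as a `CACHE STRING`, it can still be overridden at configure time without editing the file. A minimal sketch, assuming an out-of-source `build` directory:

```bash
# Hypothetical configure-time override; the build directory name is illustrative.
cmake -S . -B build -DLIBCUDACXX_VERSION=1.8.0
```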
11 changes: 0 additions & 11 deletions cmake/deps/rapids_cpm_package_overrides.json

This file was deleted.

3 changes: 3 additions & 0 deletions docker/Dockerfile

@@ -145,6 +145,9 @@ RUN --mount=type=bind,from=conda_bld_morpheus,source=/opt/conda/conda-bld,target
 CONDA_ALWAYS_YES=true /opt/conda/bin/mamba install -n morpheus -c local -c rapidsai -c nvidia -c nvidia/label/dev -c conda-forge morpheus &&\
 # Install runtime dependencies that are pip-only
 /opt/conda/bin/mamba env update -n morpheus --file docker/conda/environments/cuda${CUDA_VER}_runtime.yml &&\
+# Install jupyter support (e.g., DFP)
+# TODO: this might not be the right spot to get these
+/opt/conda/bin/mamba install -n morpheus ipywidgets jupyterlab nb_conda_kernels &&\
 # Clean and activate
 conda clean -afy
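A usage sketch for the packages added above (illustrative, not from the commit): once the image is built, JupyterLab could be launched inside the container along these lines; the image tag and host port are assumptions.

```bash
# Hypothetical: start JupyterLab from a locally built Morpheus image.
# The "morpheus:latest" tag and port 8888 are illustrative assumptions.
docker run --rm -ti --gpus=all -p 8888:8888 morpheus:latest \
  jupyter-lab --ip=0.0.0.0 --no-browser --allow-root
```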
(File name not captured in this view)

@@ -149,7 +149,7 @@ Note: This step assumes you have both [Docker](https://docs.docker.com/engine/in
 From the root of the Morpheus project we will launch a Triton Docker container with the `models` directory mounted into the container:
 
 ```shell
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 -v $PWD/models:/models nvcr.io/nvidia/tritonserver:22.02-py3 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --log-info=true
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 -v $PWD/models:/models nvcr.io/nvidia/tritonserver:22.08-py3 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --log-info=true
 ```
 
 Once we have Triton running, we can verify that it is healthy using [curl](https://curl.se/). The `/v2/health/live` endpoint should return a 200 status code:
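As context for the health check referenced in the hunk above (a sketch, assuming the default port mapping shown): the liveness probe can be exercised with curl, and the status line should show 200 once the server is up.

```bash
# Probe Triton's liveness endpoint; -i prints the HTTP status line,
# which should read 200 once the server is live.
curl -i localhost:8000/v2/health/live
```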
4 changes: 2 additions & 2 deletions examples/abp_nvsmi_detection/README.md

@@ -65,12 +65,12 @@ This example utilizes the Triton Inference Server to perform inference.
 
 Pull the Docker image for Triton:
 ```bash
-docker pull nvcr.io/nvidia/tritonserver:22.02-py3
+docker pull nvcr.io/nvidia/tritonserver:22.08-py3
 ```
 
 From the Morpheus repo root directory, run the following to launch Triton and load the `abp-nvsmi-xgb` XGBoost model:
 ```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 -v $PWD/models:/models nvcr.io/nvidia/tritonserver:22.02-py3 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-nvsmi-xgb
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 -v $PWD/models:/models nvcr.io/nvidia/tritonserver:22.08-py3 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-nvsmi-xgb
 ```
 
 This will launch Triton and only load the `abp-nvsmi-xgb` model. This model has been configured with a max batch size of 32768, and to use dynamic batching for increased performance.
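Since `--model-control-mode=explicit` loads only the named model, a quick way to confirm it (a sketch, assuming the default HTTP port): Triton's per-model readiness endpoint returns 200 once the model is loaded.

```bash
# Returns HTTP 200 once the explicitly loaded model is ready.
curl -i localhost:8000/v2/models/abp-nvsmi-xgb/ready
```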
4 changes: 2 additions & 2 deletions examples/abp_pcap_detection/README.md

@@ -23,7 +23,7 @@ To run this example, an instance of Triton Inference Server and a sample dataset
 
 ### Triton Inference Server
 ```bash
-docker pull nvcr.io/nvidia/tritonserver:22.02-py3
+docker pull nvcr.io/nvidia/tritonserver:22.08-py3
 ```
 
 ##### Deploy Triton Inference Server

@@ -35,7 +35,7 @@ Bind the provided `abp-pcap-xgb` directory to the docker container model repo at
 cd <MORPHEUS_ROOT>/examples/abp_pcap_detection
 
 # Launch the container
-docker run --rm --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 -v $PWD/abp-pcap-xgb:/models/abp-pcap-xgb --name tritonserver nvcr.io/nvidia/tritonserver:22.02-py3 tritonserver --model-repository=/models --exit-on-error=false --model-control-mode=poll --repository-poll-secs=30
+docker run --rm --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 -v $PWD/abp-pcap-xgb:/models/abp-pcap-xgb --name tritonserver nvcr.io/nvidia/tritonserver:22.08-py3 tritonserver --model-repository=/models --exit-on-error=false --model-control-mode=poll --repository-poll-secs=30
 ```
 
 ##### Verify Model Deployment
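One note on the flags above: with `--model-control-mode=poll` and `--repository-poll-secs=30`, Triton rescans the mounted repository every 30 seconds, so updated model files dropped into the bound directory are picked up without a restart. A hypothetical verification once the server is running (the repository index is a Triton HTTP extension; default port assumed):

```bash
# List the models Triton currently sees in its repository.
curl -s -X POST localhost:8000/v2/repository/index
```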
(The remaining 18 changed files are not shown.)