Update README and docs post-20.12 release (triton-inference-server#2498)
dzier authored Feb 9, 2021
1 parent 87d6fdf commit 4a93002
Showing 5 changed files with 15 additions and 15 deletions.
README.md (7 additions, 7 deletions)

@@ -32,9 +32,9 @@
 
 **LATEST RELEASE: You are currently on the master branch which tracks
 under-development progress towards the next release. The latest
-release of the Triton Inference Server is 2.5.0 and is available on
+release of the Triton Inference Server is 2.6.0 and is available on
 branch
-[r20.11](https://github.com/triton-inference-server/server/tree/r20.11).**
+[r20.12](https://github.com/triton-inference-server/server/tree/r20.12).**
 
 Triton Inference Server provides a cloud and edge inferencing solution
 optimized for both CPUs and GPUs. Triton supports an HTTP/REST and
@@ -44,11 +44,11 @@ available as a shared library with a C API that allows the full
 functionality of Triton to be included directly in an
 application.
 
-The current release of the Triton Inference Server is 2.5.0 and
-corresponds to the 20.11 release of the tritonserver container on
+The current release of the Triton Inference Server is 2.6.0 and
+corresponds to the 20.12 release of the tritonserver container on
 [NVIDIA GPU Cloud (NGC)](https://ngc.nvidia.com). The branch for this
 release is
-[r20.11](https://github.com/triton-inference-server/server/tree/r20.11).
+[r20.12](https://github.com/triton-inference-server/server/tree/r20.12).
 
 ## Features
 
@@ -103,8 +103,8 @@ release is
 
 **The master branch documentation tracks the upcoming,
 under-development release and so may not be accurate for the current
-release of Triton. See the [r20.11
-documentation](https://github.com/triton-inference-server/server/tree/r20.11#documentation)
+release of Triton. See the [r20.12
+documentation](https://github.com/triton-inference-server/server/tree/r20.12#documentation)
 for the current release.**
 
 [Triton Architecture](docs/architecture.md) gives a high-level
deploy/single_server/values.yaml (1 addition, 1 deletion)

@@ -27,7 +27,7 @@
 replicaCount: 1
 
 image:
-  imageName: nvcr.io/nvidia/tritonserver:20.11-py3
+  imageName: nvcr.io/nvidia/tritonserver:20.12-py3
   pullPolicy: IfNotPresent
   modelRepositoryPath: gs://triton-inference-server-repository/model_repository
   numGpus: 1
docs/build.md (2 additions, 2 deletions)

@@ -61,8 +61,8 @@ $ ./build.py --build-dir=/tmp/citritonbuild --enable-logging --enable-stats --en
 If you are building on master/main branch then \<container tag\>
 should be set to "main". If you are building on a release branch you
 should set \<container tag\> to match the branch name. For example, if
-you are building on the r20.11 branch you should set \<container tag\>
-to be "r20.11". You can use a different \<container tag\> for a
+you are building on the r20.12 branch you should set \<container tag\>
+to be "r20.12". You can use a different \<container tag\> for a
 component to instead use the corresponding branch/tag in the
 build. For example, if you have a branch called "mybranch" in the
 [identity_backend](https://github.com/triton-inference-server/identity_backend)
docs/client_libraries.md (2 additions, 2 deletions)

@@ -156,7 +156,7 @@ under-development version). The branch you use for the client build
 should match the version of Triton you are using.
 
 ```bash
-$ git checkout r20.11
+$ git checkout r20.12
 ```
 
 Then, issue the following command to build the C++ client library and
@@ -195,7 +195,7 @@ want to build (or the master branch if you want to build the
 under-development version).
 
 ```bash
-$ git checkout r20.11
+$ git checkout r20.12
 ```
 
 #### Ubuntu 20.04
docs/custom_operations.md (3 additions, 3 deletions)

@@ -64,7 +64,7 @@ simple way to ensure you are using the correct version of TensorRT is
 to use the [NGC TensorRT
 container](https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt)
 corresponding to the Triton container. For example, if you are using
-the 20.11 version of Triton, use the 20.11 version of the TensorRT
+the 20.12 version of Triton, use the 20.12 version of the TensorRT
 container.
 
 ## TensorFlow
@@ -108,7 +108,7 @@ simple way to ensure you are using the correct version of TensorFlow
 is to use the [NGC TensorFlow
 container](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow)
 corresponding to the Triton container. For example, if you are using
-the 20.11 version of Triton, use the 20.11 version of the TensorFlow
+the 20.12 version of Triton, use the 20.12 version of the TensorFlow
 container.
 
 ## PyTorch
@@ -152,7 +152,7 @@ simple way to ensure you are using the correct version of PyTorch is
 to use the [NGC PyTorch
 container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)
 corresponding to the Triton container. For example, if you are using
-the 20.11 version of Triton, use the 20.11 version of the PyTorch
+the 20.12 version of Triton, use the 20.12 version of the PyTorch
 container.
 
 ## ONNX
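Every hunk in this commit follows one convention: each NGC container in the stack carries the same YY.MM tag as the Triton release. A minimal shell sketch of that pairing (the 20.12 value and the tritonserver `-py3` suffix come from the diff above; exact per-framework tag suffixes on NGC may differ and would need to be checked in the registry):

```shell
#!/bin/sh
# Print the NGC image references corresponding to one Triton release.
# RELEASE comes from this commit; the framework list mirrors the
# docs/custom_operations.md sections touched above.
RELEASE=20.12
echo "server:    nvcr.io/nvidia/tritonserver:${RELEASE}-py3"
for framework in tensorrt tensorflow pytorch; do
  # Framework containers use the same YY.MM release tag as Triton.
  echo "framework: nvcr.io/nvidia/${framework} (use the ${RELEASE} tag)"
done
```

Bumping a release then amounts to changing `RELEASE` in one place, which is essentially what this commit does by hand across five files.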
