Update README and docs post-20.10 release (#2172)
dzier committed Oct 27, 2020
1 parent 62c9220 · commit 05d1f57
Showing 5 changed files with 12 additions and 12 deletions.
README.md (10 changes: 5 additions & 5 deletions)
````diff
@@ -32,9 +32,9 @@
 **LATEST RELEASE: You are currently on the master branch which tracks
 under-development progress towards the next release. The latest
-release of the Triton Inference Server is 2.3.0 and is available on
+release of the Triton Inference Server is 2.4.0 and is available on
 branch
-[r20.09](https://github.com/triton-inference-server/server/tree/r20.09).**
+[r20.10](https://github.com/triton-inference-server/server/tree/r20.10).**
 
 Triton Inference Server provides a cloud and edge inferencing solution
 optimized for both CPUs and GPUs. Triton supports an HTTP/REST and
````
````diff
@@ -44,11 +44,11 @@ available as a shared library with a C API that allows the full
 functionality of Triton to be included directly in an
 application.
 
-The current release of the Triton Inference Server is 2.3.0 and
-corresponds to the 20.09 release of the tensorrtserver container on
+The current release of the Triton Inference Server is 2.4.0 and
+corresponds to the 20.10 release of the tensorrtserver container on
 [NVIDIA GPU Cloud (NGC)](https://ngc.nvidia.com). The branch for this
 release is
-[r20.09](https://github.com/triton-inference-server/server/tree/r20.09).
+[r20.10](https://github.com/triton-inference-server/server/tree/r20.10).
 
 ## Features
````
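
For reference, a minimal sketch of pulling and serving this release from NGC (the model repository path is a placeholder; the image tag matches the one updated in values.yaml below):

```bash
# Pull the 20.10 release of the Triton container from NGC
$ docker pull nvcr.io/nvidia/tritonserver:20.10-py3

# Serve a model repository (replace /path/to/model_repository with your own)
$ docker run --gpus=1 --rm -p8000:8000 -p8001:8001 -p8002:8002 \
    -v /path/to/model_repository:/models \
    nvcr.io/nvidia/tritonserver:20.10-py3 \
    tritonserver --model-repository=/models
```
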
deploy/single_server/values.yaml (2 changes: 1 addition & 1 deletion)
````diff
@@ -27,7 +27,7 @@
 replicaCount: 1
 
 image:
-  imageName: nvcr.io/nvidia/tritonserver:20.08-py3
+  imageName: nvcr.io/nvidia/tritonserver:20.10-py3
   pullPolicy: IfNotPresent
   modelRepositoryPath: gs://triton-inference-server-repository/model_repository
   numGpus: 1
````
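
As a deployment sketch for the updated chart (assumes Helm 3 and a GPU-enabled cluster; the release name example-triton is illustrative):

```bash
# Install the single-server chart with the updated 20.10 image
$ cd deploy/single_server
$ helm install example-triton .
```
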
docs/build.md (2 changes: 1 addition & 1 deletion)
````diff
@@ -74,7 +74,7 @@ $ ./build.py --version=0.0.0 --container-version=20.10dev ...
 If you are building on master/main branch then <container tag> should
 be set to "main". If you are building on a release branch you should
 set the <container tag> to match. For example, if you are building on
-the r20.09 branch you should set <container tag> to be "r20.09". You
+the r20.10 branch you should set <container tag> to be "r20.10". You
 can use a different <container tag> for a component to instead use the
 corresponding branch/tag in the build. For example, if you have a
 branch called "mybranch" in the identity_backend repo that you want to
````
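
An illustrative sketch of that workflow on the r20.10 branch; the version values and the component override syntax are assumptions modeled on the usage line in the hunk header, so check ./build.py --help for the exact flags:

```bash
# Build with container tag and version matching the r20.10 branch,
# pulling the identity backend from a custom branch instead
$ ./build.py --version=2.4.0 --container-version=20.10 \
    --backend=identity:mybranch
```
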
docs/client_libraries.md (4 changes: 2 additions & 2 deletions)
````diff
@@ -150,7 +150,7 @@ under-development version). The branch you use for the client build
 should match the version of Triton you are using.
 
 ```bash
-$ git checkout r20.09
+$ git checkout r20.10
 ```
 
 Then, issue the following command to build the C++ client library and
````
````diff
@@ -187,7 +187,7 @@ want to build (or the master branch if you want to build the
 under-development version).
 
 ```bash
-$ git checkout r20.09
+$ git checkout r20.10
 ```
 
 #### Ubuntu 18.04
````
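
Putting the branch guidance together, a minimal sketch of preparing a source tree that matches this release (repository URL as in the README above):

```bash
# Clone the server repo and switch to the branch matching your Triton release
$ git clone https://github.com/triton-inference-server/server.git
$ cd server
$ git checkout r20.10
```
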
docs/custom_operations.md (6 changes: 3 additions & 3 deletions)
````diff
@@ -64,7 +64,7 @@ simple way to ensure you are using the correct version of TensorRT is
 to use the [NGC TensorRT
 container](https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt)
 corresponding to the Triton container. For example, if you are using
-the 20.09 version of Triton, use the 20.09 version of the TensorRT
+the 20.10 version of Triton, use the 20.10 version of the TensorRT
 container.
 
 ## TensorFlow
````
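
For example, a matching TensorRT image could be pulled as follows (tag assumed to follow the usual NGC <release>-py3 pattern):

```bash
$ docker pull nvcr.io/nvidia/tensorrt:20.10-py3
```
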
````diff
@@ -108,7 +108,7 @@ simple way to ensure you are using the correct version of TensorFlow
 is to use the [NGC TensorFlow
 container](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow)
 corresponding to the Triton container. For example, if you are using
-the 20.09 version of Triton, use the 20.09 version of the TensorFlow
+the 20.10 version of Triton, use the 20.10 version of the TensorFlow
 container.
 
 ## PyTorch
````
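
Likewise for TensorFlow; NGC TensorFlow images also carry a TF-version suffix, so the exact tag below is an assumption worth checking against the catalog:

```bash
$ docker pull nvcr.io/nvidia/tensorflow:20.10-tf1-py3
```
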
````diff
@@ -158,7 +158,7 @@ simple way to ensure you are using the correct version of PyTorch is
 to use the [NGC PyTorch
 container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)
 corresponding to the Triton container. For example, if you are using
-the 20.09 version of Triton, use the 20.09 version of the PyTorch
+the 20.10 version of Triton, use the 20.10 version of the PyTorch
 container.
 
 ## ONNX
````
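
And for PyTorch, under the same assumed tag pattern:

```bash
$ docker pull nvcr.io/nvidia/pytorch:20.10-py3
```
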
