README.md (+6 −11)

@@ -31,12 +31,7 @@ In the case of building on top of a custom base container, you first must determ
version of the PyTorch C++ ABI. If your source of PyTorch is pytorch.org, likely this is the pre-cxx11-abi, in which case you must modify `//docker/dist-build.sh` to not build the
You can then build the container using the build command in the [docker README](docker/README.md#instructions)

If you would like to build outside a docker container, please follow the section [Compiling Torch-TensorRT](#compiling-torch-tensorrt)
@@ -121,10 +116,10 @@ torch.jit.save(trt_ts_module, "trt_torchscript_module.ts") # save the TRT embedd
These are the dependencies used to verify the testcases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass.

- Bazel 5.2.0
-- Libtorch 2.1.0.dev20230314 (built with CUDA 11.7)
-- CUDA 11.7
-- cuDNN 8.5.0
-- TensorRT 8.5.1.7
+- Libtorch 2.1.0.dev20230419 (built with CUDA 11.8)
+- CUDA 11.8
+- cuDNN 8.8.0
+- TensorRT 8.6.0
## Prebuilt Binaries and Wheel files

@@ -252,7 +247,7 @@ A tarball with the include files and library can then be found in bazel-bin
### Running Torch-TensorRT on a JIT Graph

> Make sure to add LibTorch to your LD_LIBRARY_PATH <br>
docker/README.md (+3 −3)

@@ -4,7 +4,7 @@
* The `Dockerfile` currently uses <a href="https://github.com/bazelbuild/bazelisk">Bazelisk</a> to select the Bazel version, and uses the exact library versions of Torch and CUDA listed in <a href="https://github.com/pytorch/TensorRT#dependencies">dependencies</a>.
* The desired versions of CUDNN and TensorRT must be specified as build-args, with major, minor, and patch versions, as in: `--build-arg TENSORRT_VERSION=a.b.c --build-arg CUDNN_VERSION=x.y.z`
-* [**Optional**] The desired base image can be changed by explicitly setting a base image, as in `--build-arg BASE_IMG=nvidia/cuda:11.7.1-devel-ubuntu22.04`
+* [**Optional**] The desired base image can be changed by explicitly setting a base image, as in `--build-arg BASE_IMG=nvidia/cuda:11.8.0-devel-ubuntu22.04`
* [**Optional**] Additionally, the desired Python version can be changed by explicitly setting a version, as in `--build-arg PYTHON_VERSION=3.10`.

* This `Dockerfile` installs `pre-cxx11-abi` versions of PyTorch and builds Torch-TRT using `pre-cxx11-abi` libtorch as well.
@@ -17,14 +17,14 @@ Note: By default the container uses the `pre-cxx11-abi` version of Torch + Torch

### Instructions

-- The example below uses CUDNN 8.5.0 and TensorRT 8.5.1
+- The example below uses CUDNN 8.8.0 and TensorRT 8.6.0
- See <a href="https://github.com/pytorch/TensorRT#dependencies">dependencies</a> for a list of current default dependencies.
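Putting the build-args above together, a minimal sketch of the full build invocation. The `torch_tensorrt:latest` image tag and the `docker/Dockerfile` path are assumptions, not taken from this diff, and the command is only echoed here so the pinned versions can be reviewed before running it:

```shell
# Versions from the updated dependency list above.
TENSORRT_VERSION=8.6.0
CUDNN_VERSION=8.8.0
# BASE_IMG override is optional; this matches the example in the README.
BASE_IMG=nvidia/cuda:11.8.0-devel-ubuntu22.04

# Assemble the command; the image tag and Dockerfile path are assumptions.
BUILD_CMD="docker build \
  --build-arg TENSORRT_VERSION=${TENSORRT_VERSION} \
  --build-arg CUDNN_VERSION=${CUDNN_VERSION} \
  --build-arg BASE_IMG=${BASE_IMG} \
  -f docker/Dockerfile -t torch_tensorrt:latest ."

# Print rather than execute, so the pinned versions can be checked first.
echo "$BUILD_CMD"
```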
tools/cpp_benchmark/README.md (+1 −1)

@@ -6,7 +6,7 @@ This is a quick benchmarking application for Torch-TensorRT. It lets you run sup

Run with bazel:

-> Note: Make sure libtorch and TensorRT are in your LD_LIBRARY_PATH before running, if you need a location you can `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:[WORKSPACE ROOT]/bazel-Torch-TensorRT/external/libtorch/lib:[WORKSPACE ROOT]/bazel-Torch-TensorRT/external/tensorrt/lib`
+> Note: Make sure libtorch and TensorRT are in your LD_LIBRARY_PATH before running, if you need a location you can `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:[WORKSPACE ROOT]/bazel-TensorRT/external/libtorch/lib:[WORKSPACE ROOT]/bazel-TensorRT/external/tensorrt/lib`
```sh
bazel run //tools/cpp_benchmark --cxxopt="-DNDEBUG" --cxxopt="-DJIT" --cxxopt="-DTRT" -- [PATH TO JIT MODULE FILE] [INPUT SIZE (explicit batch)]
```
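The export from the note above can be written out concretely; a minimal sketch, assuming the shell's current directory is the workspace root and that `bazel-TensorRT` is the convenience symlink Bazel creates there (both assumptions, substituted for the `[WORKSPACE ROOT]` placeholder):

```shell
# Assume we are in the workspace root; substitute for [WORKSPACE ROOT].
WORKSPACE_ROOT="$(pwd)"

# Append the libtorch and TensorRT library directories pulled in by the
# Bazel build (via the bazel-TensorRT convenience symlink) to the loader path.
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${WORKSPACE_ROOT}/bazel-TensorRT/external/libtorch/lib:${WORKSPACE_ROOT}/bazel-TensorRT/external/tensorrt/lib"

echo "$LD_LIBRARY_PATH"
```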