
Upgrade to CUDA10.2 for TensorRT #3084

Merged (10 commits) on Feb 25, 2020
add shape inference instruction for TensorRT
stevenlix committed Feb 25, 2020
commit fc921b10ee34c4ff1a82c8c690bd867f190c44fc
5 changes: 3 additions & 2 deletions docs/execution_providers/TensorRT-ExecutionProvider.md
@@ -19,10 +19,11 @@ status = session_object.Load(model_file_name);
```
The C API details are [here](../C_API.md#c-api).

#### Shape Inference for TensorRT subgraphs
If some operators in the model are not supported by TensorRT, ONNX Runtime will partition the graph and send only the supported subgraphs to the TensorRT execution provider. Because TensorRT requires that all inputs of these subgraphs have their shapes specified, ONNX Runtime will throw an error if any input shape information is missing. In this case, first run shape inference on the model using the script [here](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/providers/nuphar/scripts/symbolic_shape_infer.py).
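The shape-inference step above can be run from the command line. A minimal sketch, assuming the script is downloaded locally and that `model.onnx` / `model_with_shapes.onnx` are placeholder paths for your own model files; the `--input`/`--output` flags follow the script's argparse interface at the time of writing:

```shell
# Fetch the symbolic shape inference script (same path as the link above).
wget https://raw.githubusercontent.com/microsoft/onnxruntime/master/onnxruntime/core/providers/nuphar/scripts/symbolic_shape_infer.py

# Run shape inference on the model; the output model carries the inferred
# shapes that the TensorRT execution provider needs. --auto_merge attempts
# to merge conflicting symbolic dimensions (optional).
python symbolic_shape_infer.py --input model.onnx --output model_with_shapes.onnx --auto_merge
```

Then point your InferenceSession at the shape-inferred output model instead of the original one.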

#### Sample
This example shows how to run the Faster R-CNN model on the TensorRT execution provider.

First, download the Faster R-CNN ONNX model from the ONNX Model Zoo [here](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn).
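Once the model is downloaded (and shape-inferred as described above), it can be loaded from Python. A minimal sketch, assuming a GPU build of ONNX Runtime with TensorRT enabled; the file name `faster_rcnn.onnx` and the 800x800 dummy image size are placeholders, not requirements of the model:

```python
import numpy as np
import onnxruntime as ort

# Provider order sets priority: try TensorRT first, then fall back to
# CUDA and finally CPU for any unsupported subgraphs.
sess = ort.InferenceSession(
    "faster_rcnn.onnx",  # placeholder path to the shape-inferred model
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)

# The zoo's Faster R-CNN expects a float32 CHW image tensor; the size
# below is illustrative only.
image = np.random.rand(3, 800, 800).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: image})
print([o.shape for o in outputs])
```

If TensorRT cannot handle part of the graph, the remaining nodes run on the next provider in the list, so the same script works as a fallback check.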
