
An issue with image inference using TensorRT and ONNX #401

@QuinVIVER

Description


Environment:
python=3.10
torch=2.8.0
tensorRT=10.1.0
cuda=12.4.1

The ONNX and TensorRT models were converted in the same environment as the test script.
Using the ONNX script provided in deployment.md gives me this error:

[ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Add) bound to different types (tensor(float16) and tensor(float) in node ()
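The mismatch looks like a float16/float32 mix inside the exported graph rather than in the feed itself. For reference, this is a minimal sketch of how I check which dtype the graph inputs expect before deciding whether to re-export fully in fp32 or fp16 (the model path "model.onnx", the input name "image", and the shape are placeholders, not the deployment.md values):

```python
import numpy as np
import onnx
import onnxruntime as ort

# Inspect the declared dtype of every graph input.
model = onnx.load("model.onnx")  # hypothetical path
for inp in model.graph.input:
    elem_type = inp.type.tensor_type.elem_type
    print(inp.name, onnx.TensorProto.DataType.Name(elem_type))

# Feed an input whose dtype matches the graph. If the Add type error still
# appears with a matching feed, the graph itself mixes fp16 and fp32 tensors
# and needs to be re-exported in a single precision.
sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
image = np.random.rand(1, 3, 224, 224).astype(np.float16)  # hypothetical shape/dtype
outputs = sess.run(None, {"image": image})
```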

Using the TensorRT script provided in deployment.md gives the error below:

KeyError: 'unnorm_image_features'
Neither address or allocator is set for output tensor unnorm_image_features. Call setOutputTensorAddress, setTensorAddress or setOutputAllocator before enqueue/execute.
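The KeyError suggests the script binds only a fixed set of output names, so unnorm_image_features never gets a device address before enqueue. Below is a minimal sketch of what I tried on my side: binding every I/O tensor with the TensorRT 10 Python API, assuming a static-shape engine (the engine path "model.engine" and the input name "image" are placeholders, not the deployment.md values):

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f:  # hypothetical path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate and bind every I/O tensor, not just the known outputs; an unbound
# output is what triggers the "Neither address or allocator is set" error.
host_buffers, device_buffers = {}, {}
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    shape = context.get_tensor_shape(name)
    dtype = trt.nptype(engine.get_tensor_dtype(name))
    host_buffers[name] = cuda.pagelocked_empty(trt.volume(shape), dtype)
    device_buffers[name] = cuda.mem_alloc(host_buffers[name].nbytes)
    context.set_tensor_address(name, int(device_buffers[name]))

# Copy the input up, run, and copy all outputs back.
image = np.random.rand(*context.get_tensor_shape("image")).astype(host_buffers["image"].dtype)
np.copyto(host_buffers["image"], image.ravel())
cuda.memcpy_htod_async(device_buffers["image"], host_buffers["image"], stream)
context.execute_async_v3(stream_handle=stream.handle)
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    if engine.get_tensor_mode(name) == trt.TensorIOMode.OUTPUT:
        cuda.memcpy_dtoh_async(host_buffers[name], device_buffers[name], stream)
stream.synchronize()
```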

Is this related to a problem in the TensorRT model conversion? Existing issues don't give a clue.
