
Bug: The getMaxBatchSize() function will always return 1. #9

@bedoom

Description


Device: NVIDIA AGX Xavier

Environment:
JetPack 5.1.3
TensorRT 8.5
CUDA 11.4
OpenCV 4.5

At runtime I get a warning saying that getMaxBatchSize() should not be used; the details are as follows:

[2025-03-24 12:53:00][warn][trt_infer.cpp:25]:NVInfer: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[2025-03-24 12:53:02][info][trt_infer.cpp:397]: Infer 0xffff5400fda0 detail
[2025-03-24 12:53:02][info][trt_infer.cpp:398]: Base device: [ID 0]<Xavier>[arch 7.2][GMEM 22.30 GB/30.26 GB]
[2025-03-24 12:53:02][warn][trt_infer.cpp:25]:NVInfer: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2025-03-24 12:53:02][info][trt_infer.cpp:399]: Max Batch Size: 1
[2025-03-24 12:53:02][info][trt_infer.cpp:400]: Inputs: 1
[2025-03-24 12:53:02][info][trt_infer.cpp:404]:         0.images : shape {1 x 3 x 640 x 640}
[2025-03-24 12:53:02][info][trt_infer.cpp:406]: Outputs: 1
[2025-03-24 12:53:02][info][trt_infer.cpp:410]:         0.output : shape {1 x 8400 x 84}
[2025-03-24 12:53:02][warn][trt_infer.cpp:25]:NVInfer: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.

I inspected the ONNX file with https://netron.app/ and it does support dynamic batching, and I used the provided .sh script to export the .trt engine. Is there a way to fix this?
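
For context, the warning itself is expected: getMaxBatchSize() is only meaningful for implicit-batch engines, and the log shows this engine was built with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH, so it always reports 1. Below is a minimal sketch (my own illustration, not code from this repo) of how the maximum batch can instead be read from the engine's optimization profile, assuming the TensorRT 8.x C++ API, binding index 0 for the images input, and a single optimization profile:

// Minimal sketch: query the max batch of an explicit-batch engine from its
// optimization profile instead of getMaxBatchSize().
// Assumptions (not from this repo): binding 0 is the "images" input and the
// engine has exactly one optimization profile.
#include <NvInferRuntime.h>

int queryMaxBatch(const nvinfer1::ICudaEngine& engine)
{
    const int inputBinding = 0;   // assumed: binding 0 is the "images" input
    const int profileIndex = 0;   // assumed: single optimization profile
    nvinfer1::Dims maxDims = engine.getProfileDimensions(
        inputBinding, profileIndex, nvinfer1::OptProfileSelector::kMAX);
    return maxDims.d[0];          // batch dimension, e.g. N in {N x 3 x 640 x 640}
}

Note that if the engine was actually built with a fixed batch of 1 (the logged input shape {1 x 3 x 640 x 640} suggests this), the profile maximum will also be 1; a larger batch would require rebuilding the engine with a dynamic batch dimension.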
