
Conversation

@binliunls
Contributor

Fixes #6124.

Description

When running inference with TorchScript-wrapped TensorRT models, the evaluator raises an error. This is caused by the `with engine.mode()` code accessing the `training` attribute of `engine.network` without checking that it exists. In this PR, an attribute check has been added to cover this case.
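For illustration, a minimal sketch of that kind of guard (the helper name `safe_training_flag` is hypothetical, not the code in this PR):

```python
import torch

def safe_training_flag(net: torch.nn.Module) -> bool:
    """Return ``net.training`` when present, else False.

    Hypothetical sketch: TorchScript-wrapped TensorRT engines may not
    expose the ``training`` attribute of a plain nn.Module, so the
    access is guarded with getattr instead of read directly.
    """
    return getattr(net, "training", False)
```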

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Signed-off-by: binliu <binliu@nvidia.com>
@binliunls binliunls requested review from Nic-Ma and wyli March 12, 2023 12:35
@wyli
Contributor

wyli commented Mar 12, 2023

I think the root cause is here: `n.training` is used without checking:

training = [n for n in nets if n.training]

Perhaps we should fix that utility function? Also this:

eval_list = [n for n in nets if not n.training]
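For context, a sketch of how that utility could guard the attribute, assuming the context-manager structure of `eval_mode` in `monai.networks.utils` (simplified, not the merged code):

```python
from contextlib import contextmanager

import torch
import torch.nn as nn

@contextmanager
def eval_mode(*nets: nn.Module):
    """Temporarily switch networks to eval mode, restoring state on exit.

    Sketch only: the comprehensions use getattr/hasattr so modules without
    a `training` attribute (e.g. some TorchScript/TensorRT wrappers) are
    skipped instead of raising AttributeError.
    """
    # Remember which nets were in training mode; getattr guards the access.
    training = [n for n in nets if getattr(n, "training", False)]
    try:
        with torch.no_grad():
            # Guard .eval() the same way in case a wrapper lacks it.
            yield [n.eval() if hasattr(n, "eval") else n for n in nets]
    finally:
        # Restore training mode only for the nets that had it set.
        for n in training:
            n.train()
```

The `eval_list` comprehension quoted above would get the analogous fix, e.g. filtering on `not getattr(n, "training", False)`.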

@wyli
Contributor

wyli commented Mar 14, 2023

/build

@wyli wyli enabled auto-merge (squash) March 14, 2023 14:03
@wyli wyli merged commit 0a904fb into Project-MONAI:dev Mar 14, 2023
@binliunls binliunls deleted the 6124-add-trtmodel-check branch March 30, 2023 15:53


Development

Successfully merging this pull request may close these issues.

Got an error when running bundle inference with TensorRT torchscript
