Dynamic Batch Support for TRT #6955

Merged: 34 commits, Dec 1, 2020
Resolve PR comments
Ubuntu committed Nov 30, 2020
commit dc6aaef908c9e83496d1d2b2ee1f60561196fd56
6 changes: 6 additions & 0 deletions python/tvm/relay/op/contrib/tensorrt.py
@@ -206,6 +206,9 @@ def _func_wrapper(expr):
]
for arg in args
]
# Batched multiply operations don't work in implicit batch mode. The following shapes
# have been excluded because they occur in the PT MaskRCNN model. The long-term solution
# is to switch to explicit batch mode once the performance regressions are resolved.
if all(
[list(map(int, shape)) in [[300, 64, 7, 7], [300, 1, 1, 1]] for shape in shapes]
):
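The exclusion check in the hunk above rejects an operation only when every argument's shape matches one of the known problematic MaskRCNN shapes. A minimal standalone sketch of that predicate, with `EXCLUDED_SHAPES` and `is_excluded_batched_multiply` as hypothetical names not present in the PR:

```python
# Known problematic shapes from the PT MaskRCNN model (from the diff above).
EXCLUDED_SHAPES = [[300, 64, 7, 7], [300, 1, 1, 1]]

def is_excluded_batched_multiply(shapes):
    # Reject only when every argument's shape matches one of the excluded
    # shapes; a single non-matching argument lets the op through.
    return all(list(map(int, shape)) in EXCLUDED_SHAPES for shape in shapes)

print(is_excluded_batched_multiply([(300, 64, 7, 7), (300, 1, 1, 1)]))  # True
print(is_excluded_batched_multiply([(1, 64, 7, 7)]))                    # False
```

Note that `all(...)` over the per-argument membership test means a multiply with one excluded-shape argument and one ordinary argument is still offloaded.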
@@ -881,12 +884,15 @@ def is_valid_subgraph(params, body):
input_batch_sizes = []
for var in params:
# In implicit batch mode, all inputs must have the same batch size.
# TODO: (codeislife99) : Fix different dynamic batch size inputs

if isinstance(var.checked_type, relay.TupleType):
for tupe_type in var.checked_type.fields:
# Scalar inputs not allowed
if len(tupe_type.shape) == 0:
logger.info("tensorrt: scalar inputs not supported")
return False

if not isinstance(tupe_type.shape[0], tvm.tir.expr.Any):
input_batch_sizes.append(int(tupe_type.shape[0]))
else:
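The `is_valid_subgraph` hunk enforces an implicit-batch-mode rule: every subgraph input must share one static batch size, scalars are rejected, and dynamic (`Any`) batch dimensions are skipped. A self-contained sketch of that rule, assuming plain tuples of ints stand in for Relay shapes and `batch_sizes_consistent` is a hypothetical helper, not TVM API:

```python
# Sketch of the implicit-batch validity rule from the diff above, applied to
# plain shape tuples (ints for static dims, None standing in for tvm.tir.Any).
def batch_sizes_consistent(input_shapes):
    batch_sizes = []
    for shape in input_shapes:
        # Scalar inputs are not allowed.
        if len(shape) == 0:
            return False
        # Skip dynamic batch dims (modeled here as None); collect static ones.
        if shape[0] is not None:
            batch_sizes.append(int(shape[0]))
    # All collected static batch sizes must agree.
    return len(set(batch_sizes)) <= 1

print(batch_sizes_consistent([(1, 3, 224, 224), (1, 10)]))  # True
print(batch_sizes_consistent([(1, 3), (2, 3)]))             # False
print(batch_sizes_consistent([()]))                          # False
```

This mirrors the structure of the real check: the TODO in the diff notes that inputs with differing dynamic batch sizes are not yet handled, which is why dynamic dims are simply skipped here.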