Add dynamic support to roll/scaler_tensor #3023
Conversation
There are some changes that do not conform to Python style guidelines:
```diff
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/permutation.py	2024-07-24 23:03:23.876817+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/permutation.py	2024-07-24 23:06:40.622496+00:00
@@ -33,11 +33,12 @@
     layer = ctx.net.add_shuffle(input)
     layer.second_transpose = tuple(permutation)
     set_layer_name(layer, target, name, source_ir)
     return layer.get_output(0)
-# for the Tensorrt Slice layer:
+
+# for the Tensorrt Slice layer:
 # we need calculate the start offset that the slice layer uses to create the output slice.
 # in this static shape scenario, the start returned is the sequence of int(constant)
 def calc_start_by_static_shape(
     input: TRTTensor,
     shifts: Sequence[int],
@@ -58,11 +59,12 @@
     start = [0] * len(input.shape)
     for d, s in shift_dict.items():
         start[d] = get_positive_dim(-s, input.shape[d])
     return start
-# for the Tensorrt Slice layer:
+
+# for the Tensorrt Slice layer:
 # we need calculate the start offset that the slice layer uses to create the output slice.
 # in this dynamic shape scenario, the start returned is the tensor
 def calc_start_by_dynamic_shape(
     ctx: ConversionContext,
     target: Target,
```
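For context, the start offset described in these comments is a per-dimension modular shift: rolling by `s` along dimension `d` is equivalent to a wrap-mode slice starting at `(-s) mod shape[d]`. A minimal standalone sketch of the static-shape case (`calc_start_sketch` and this `get_positive_dim` stand-in are illustrative, not the converter's actual code):

```python
from typing import Dict, Sequence, Tuple


def get_positive_dim(dim: int, dim_size: int) -> int:
    # Stand-in for the converter utility of the same name:
    # normalize a possibly negative index into [0, dim_size).
    return dim % dim_size


def calc_start_sketch(
    shape: Sequence[int], shift_dict: Dict[int, int]
) -> Tuple[int, ...]:
    # torch.roll by s along dim d equals a wrap-mode slice whose
    # start in that dimension is (-s) mod shape[d].
    start = [0] * len(shape)
    for d, s in shift_dict.items():
        start[d] = get_positive_dim(-s, shape[d])
    return tuple(start)


# Rolling a length-5 axis forward by 2 starts the slice at index 3:
# roll([0, 1, 2, 3, 4], 2) == [3, 4, 0, 1, 2]
assert calc_start_sketch([5], {0: 2}) == (3,)
```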
LGTM, pending CI
There are some changes that do not conform to Python style guidelines:
```diff
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/permutation.py	2024-07-25 00:02:47.043030+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/permutation.py	2024-07-25 00:04:40.361103+00:00
@@ -33,11 +33,12 @@
     layer = ctx.net.add_shuffle(input)
     layer.second_transpose = tuple(permutation)
     set_layer_name(layer, target, name, source_ir)
     return layer.get_output(0)
-# for the Tensorrt Slice layer:
+
+# for the Tensorrt Slice layer:
 # we need calculate the start offset that the slice layer uses to create the output slice.
 # in this static shape scenario, the start returned is the sequence of int(constant)
 def calc_start_by_static_shape(
     input: TRTTensor,
     shifts: Sequence[int],
@@ -58,11 +59,12 @@
     start = [0] * len(input.shape)
     for d, s in shift_dict.items():
         start[d] = get_positive_dim(-s, input.shape[d])
     return start
-# for the Tensorrt Slice layer:
+
+# for the Tensorrt Slice layer:
 # we need calculate the start offset that the slice layer uses to create the output slice.
 # in this dynamic shape scenario, the start returned is the tensor
 def calc_start_by_dynamic_shape(
     ctx: ConversionContext,
     target: Target,
```
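In the dynamic-shape scenario these comments describe, `shape[d]` is not known at build time, so the start offset must be produced as a tensor computed from the runtime shape rather than as a Python int. The converter builds the equivalent graph with TensorRT shape and elementwise layers; the PyTorch sketch below only illustrates the arithmetic (all names here are illustrative assumptions):

```python
from typing import Dict

import torch


def calc_start_dynamic_sketch(
    runtime_shape: torch.Tensor, rank: int, shift_dict: Dict[int, int]
) -> torch.Tensor:
    # runtime_shape: 1-D integer tensor holding the input's shape at
    # runtime, analogous to the output of TensorRT's shape layer.
    start = torch.zeros(rank, dtype=runtime_shape.dtype)
    for d, s in shift_dict.items():
        # Same (-s) mod shape[d] arithmetic, but evaluated on tensors
        # so it stays valid when shape[d] is unknown at build time.
        start[d] = torch.remainder(
            torch.tensor(-s, dtype=runtime_shape.dtype), runtime_shape[d]
        )
    return start


x = torch.arange(10).reshape(2, 5)
shape = torch.tensor(x.shape)  # stands in for the runtime shape tensor
print(calc_start_dynamic_sketch(shape, x.dim(), {1: 2}))  # tensor([0, 3])
```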
Description
Add dynamic support to roll/scaler_tensor
Fixes # (issue)
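For reference, a hedged sketch of the kind of usage this change enables: compiling `torch.roll` over an input whose first dimension is dynamic. The module name, shapes, and compile settings are assumptions for illustration, not taken from this PR:

```python
import torch
import torch_tensorrt


class RollModule(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Roll along dim 0, which is dynamic in the input spec below.
        return torch.roll(x, shifts=(2, -1), dims=(0, 1))


# Dynamic-shape input spec: the first dimension varies at runtime.
inputs = [
    torch_tensorrt.Input(
        min_shape=(1, 16),
        opt_shape=(4, 16),
        max_shape=(8, 16),
        dtype=torch.float32,
    )
]
trt_mod = torch_tensorrt.compile(
    RollModule().eval().cuda(), ir="dynamo", inputs=inputs
)
print(trt_mod(torch.randn(3, 16).cuda()))
```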
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: