feat: support for grouped inputs #1201


Merged: 23 commits, Aug 9, 2022
Commits
7393fa8
feat: support for grouped inputs
narendasan Jul 23, 2022
b26d768
tests: fix test model paths
narendasan Jul 24, 2022
b2a5183
tests: Fix tests
narendasan Jul 24, 2022
8385253
chore: Update generateRandomTensors uses
narendasan Jul 26, 2022
d479c98
fix: fix the fallback related issue after merging collection
bowang007 Jul 27, 2022
b7178ff
feat: Better input signature logging
narendasan Jul 27, 2022
2b22767
Merge branch 'fix_collection_partitioning' into squashed_collections
narendasan Jul 27, 2022
418d1e5
refactor: still fallback when a trt segment has tuple/list input/output
bowang007 Jul 28, 2022
ea7562c
Merge branch 'squashed_collections' into fix_collection_partitioning
bowang007 Jul 28, 2022
c9d4788
refactor: still fallback when a trt segment has tuple/list input/output
bowang007 Jul 28, 2022
9403f88
Merge branch 'fix_collection_partitioning' of https://github.com/NVID…
narendasan Jul 28, 2022
5cff257
chore: Apply liniting
narendasan Jul 28, 2022
f866dba
fix: fix the bug that ListConstruct is in TRT subgraph when it's enti…
bowang007 Aug 2, 2022
9bce034
Merge pull request #1220 from pytorch/fix_collection_partitioning
narendasan Aug 2, 2022
6d0b1d3
fix: fix the error that collection input segmented into trt subgraph
bowang007 Aug 3, 2022
253b3c7
Merge pull request #1225 from pytorch/fix_collection_partitioning
narendasan Aug 3, 2022
8b891fb
feat(//core/conversion/converters/evaluators): New evaluators for
narendasan Aug 3, 2022
f519935
feat(collections): Enable grouped inputs via partial compilation
narendasan Aug 4, 2022
5fadfd4
Merge branch 'master' into squashed_collections
narendasan Aug 4, 2022
bce8464
feat(element_wise): Auto cast to higher precision for mismatched types
narendasan Aug 6, 2022
891440d
refactor: Disable input_signature in torchscript backend due to lack of
narendasan Aug 6, 2022
09e032c
Merge remote-tracking branch 'origin/master' into squashed_collections
narendasan Aug 8, 2022
223dfd1
chore: remove commented out code
narendasan Aug 8, 2022
refactor: Disable input_signature in torchscript backend due to lack of
generic interface

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
narendasan committed Aug 6, 2022
commit 891440da148b3cee64e0828e3e3a7f6cfe2cb0db
5 changes: 5 additions & 0 deletions py/torch_tensorrt/csrc/register_tensorrt_classes.cpp
@@ -26,6 +26,11 @@ void RegisterTRTCompileSpec() {
static auto TORCHTRT_UNUSED TRTInputSignatureTSRegistration =
torch::class_<torch_tensorrt::pyapi::InputSignature>("tensorrt", "_InputSignature")
.def(torch::init<>())
.def("_set_signature_ivalue_torchbind",
[](const c10::intrusive_ptr<torch_tensorrt::pyapi::InputSignature>& self,
torch::jit::IValue ival) {
self->signature_ivalue = ival;
})
.def("__str__", &torch_tensorrt::pyapi::InputSignature::to_str);

ADD_FIELD_GET_SET_REGISTRATION(
21 changes: 4 additions & 17 deletions py/torch_tensorrt/ts/_compile_spec.py
@@ -327,20 +327,6 @@ def TensorRTCompileSpec(inputs=[],
torch.randn((1, 3, 224, 244)) # Use an example tensor and let torch_tensorrt infer settings
]

input_signature Union(List, Tuple, torch_tensorrt.Input, torch.Tensor): A formatted collection of input specifications for the module. Input Sizes can be specified as torch sizes, tuples or lists. dtypes can be specified using
torch datatypes or torch_tensorrt datatypes and you can use either torch devices or the torch_tensorrt device type enum to select device type. **This API should be considered beta-level stable and may change in the future** ::

input_signature=([
torch_tensorrt.Input((1, 3, 224, 224)), # Static NCHW input shape for input #1
torch_tensorrt.Input(
min_shape=(1, 224, 224, 3),
opt_shape=(1, 512, 512, 3),
max_shape=(1, 1024, 1024, 3),
dtype=torch.int32
format=torch.channel_last
), # Dynamic input shape for input #2
], torch.randn((1, 3, 224, 244))) # Use an example tensor and let torch_tensorrt infer settings for input #3

device (Union(torch_tensorrt.Device, torch.device, dict)): Target device for TensorRT engines to run on ::

device=torch_tensorrt.Device("dla:1", allow_gpu_fallback=True)
@@ -362,7 +348,7 @@ def TensorRTCompileSpec(inputs=[],

compile_spec = {
"inputs": inputs,
"input_signature": input_signature,
#"input_signature": input_signature,
"device": device,
"disable_tf32":
disable_tf32, # Force FP32 layers to use traditional as FP32 format vs the default behavior of rounding the inputs to 10-bit mantissas before multiplying, but accumulates the sum using 23-bit mantissas
@@ -384,12 +370,13 @@

backend_spec = torch.classes.tensorrt.CompileSpec()

if input_signature is not None:
raise ValueError("Input signature parsing is not currently supported in the TorchScript backend integration")

for i in parsed_spec.inputs:
clone = _internal_input_to_torch_class_input(i)
backend_spec._append_input(clone)

backend_spec._set_input_signature(parsed_spec.input_signature)

d = torch.classes.tensorrt._Device()
d._set_device_type(int(parsed_spec.device.device_type))
d._set_gpu_id(parsed_spec.device.gpu_id)
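The net effect of this commit's change to `_compile_spec.py` is that the TorchScript backend now fails fast on any `input_signature` argument instead of forwarding it. A reduced, standalone sketch of that guard — `tensorrt_compile_spec` below is a stand-in for the real `TensorRTCompileSpec`, and the returned dict stands in for the real backend spec object:

```python
# Reduced sketch of the new guard: a non-None input_signature is
# rejected before any backend spec is built. Stand-in function, not
# the real torch_tensorrt.ts.TensorRTCompileSpec.
def tensorrt_compile_spec(inputs=None, input_signature=None):
    if input_signature is not None:
        raise ValueError(
            "Input signature parsing is not currently supported "
            "in the TorchScript backend integration"
        )
    return {"inputs": inputs or []}


spec = tensorrt_compile_spec(inputs=["example_tensor"])
try:
    tensorrt_compile_spec(input_signature=(["x"],))
except ValueError as e:
    print("rejected:", e)
```

Failing fast here matches the commit message: the TorchScript backend lacks a generic interface for signatures, so rejecting the argument is preferable to silently ignoring it.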