
feat: Implement Input class support for FX backend. #1763


Closed
peri044 wants to merge 4 commits

Conversation

peri044 (Collaborator) commented Mar 23, 2023

Description

  1. This PR adds support for providing torch_tensorrt.Input(shape=x, dtype=x) to the FX backend.

An example workflow is similar to the TS backend:

inputs = [torch_tensorrt.Input(shape=(1, 3, 3, 4), dtype=torch.float32)]
trt_mod = torch_tensorrt.compile(
    mod,
    ir="fx",
    inputs=inputs,
    min_acc_module_size=1,
)
  2. Internally, FX performs (a) standard lowering passes and (b) TRT lowering. The standard lowering passes trace through the model, which requires PyTorch tensors for execution.

For the standard lowering passes:
a) User input of type torch_tensorrt.Input is converted to example PyTorch tensors (using example_tensors()).
b) User input of type torch.Tensor passes through tracing directly, so no conversion is needed.

For TRT lowering:
a) self._trt_input is used to convert torch_tensorrt.Input into an InputTensorSpec.
b) Dynamic shapes are now supported using the same front-end interface as TorchScript.

A minimal sketch of both conversions is shown below.
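The following is a minimal, self-contained sketch of the two conversions just described. The helper names (example_tensor_from_input, spec_from_input) and the SpecSketch stand-in are illustrative assumptions, not the actual torch_tensorrt classes; the real logic lives in example_tensors() and the Input -> InputTensorSpec path added by this PR.

import torch
from dataclasses import dataclass, field
from typing import List, Tuple


def example_tensor_from_input(inp) -> torch.Tensor:
    # Standard lowering passes: materialize a PyTorch tensor for tracing.
    # Static inputs carry a single shape; dynamic inputs carry a min/opt/max dict,
    # from which the opt shape is a reasonable choice for tracing.
    shape = inp.shape["opt_shape"] if isinstance(inp.shape, dict) else inp.shape
    dtype = inp.dtype if isinstance(inp.dtype, torch.dtype) else torch.float32
    return torch.randn(*shape).to(dtype)


@dataclass
class SpecSketch:
    # Stand-in for the fields an InputTensorSpec-like object would need.
    shape: Tuple[int, ...]
    dtype: torch.dtype = torch.float32
    shape_ranges: List[Tuple[Tuple[int, ...], ...]] = field(default_factory=list)


def spec_from_input(inp) -> SpecSketch:
    # TRT lowering: translate the user-facing Input into a spec object,
    # preserving the min/opt/max range when the shape is dynamic.
    if isinstance(inp.shape, dict):
        return SpecSketch(
            shape=tuple(inp.shape["opt_shape"]),
            shape_ranges=[(
                tuple(inp.shape["min_shape"]),
                tuple(inp.shape["opt_shape"]),
                tuple(inp.shape["max_shape"]),
            )],
        )
    return SpecSketch(shape=tuple(inp.shape))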

  3. The TSInput class is derived from torch_tensorrt.Input because the latter previously contained C++ calls (_to_internal). The FX backend does not need these for an --fx-only installation.

  4. max_batch_size has been removed; it is already deprecated in TRT and is not available in the TS backend.

  5. FX tracing and lowering previously assumed that input tensors reside on the same device as the model parameters. When example tensors are extracted from torch_tensorrt.Input, the default device used is cuda:0. This assumption may not hold for some non-standard use cases; more refactoring is required to ensure the right device is specified and passed through (see the sketch below).
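A rough illustration of the device concern: let the caller choose the device instead of hard-coding cuda:0. The device parameter below is an assumption for illustration, not an existing torch_tensorrt API.

import torch


def example_tensor_on(inp, device: str = "cuda:0") -> torch.Tensor:
    # Same example-tensor extraction as in the sketch above, but the target
    # device is explicit so tracing can match wherever the model parameters live.
    shape = inp.shape["opt_shape"] if isinstance(inp.shape, dict) else inp.shape
    return torch.randn(*shape, device=device)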

Type of change

  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified

@peri044 peri044 added the WIP Work is in progress, pull request should not be merged yet label Mar 23, 2023
@peri044 peri044 requested a review from narendasan March 23, 2023 00:49
github-actions bot left a comment:

There are some changes that do not conform to Python style guidelines:

--- py/torch_tensorrt/ts/ts_input.py	2023-03-23 20:35:35.440710 +0000
+++ py/torch_tensorrt/ts/ts_input.py	2023-03-23 20:35:54.340256 +0000
@@ -5,10 +5,11 @@

from torch_tensorrt import _C
from torch_tensorrt import _enums
from torch_tensorrt import _Input
from torch_tensorrt._Input import Input
+

class TSInput(Input):
    """
    Defines an input to a module in terms of expected shape, data type and tensor format.

@peri044 peri044 removed the WIP Work is in progress, pull request should not be merged yet label Mar 29, 2023
Commits (each signed off by Dheeraj Peri <peri.dheeraj@gmail.com>):

  • chore: Replace InputTensorSpec with Input
  • feat: Allow torchtrt.Input support for FX backend
  • refactor: Implement conversions from Input -> Pyt tensors, add Input utilities etc.
  • chore: Use InputTensorSpec internally
  • chore: Linter fixes
  • chore: add ts_input.py file
  • chore: Linter fixes
  • chore: minor fixes
  • chore: revert FX changes
  • chore: Address Torchscript test case failures
  • chore: remove device placement of input tensors
  • chore: Linter fixes
  • chore: refactor code
  • chore: Remove max_batch_size and replace generate_input_specs calls
  • chore: linter fixes
@peri044 peri044 requested a review from frank-wei March 29, 2023 21:35
@@ -98,13 +99,17 @@ def benchmark(

model = model.cuda().eval()
inputs = [x.cuda() for x in inputs]

# inputs = [torch_tensorrt.Input(shape=(128, 3, 224, 224), dtype=torch.float32)]
Collaborator commented:

What are these comments for?

@@ -41,6 +40,7 @@ class _ShapeMode(Enum):
DOMAIN_OFFSET = 2.0
low_tensor_domain_incl = 0.0
high_tensor_domain_excl = low_tensor_domain_incl + DOMAIN_OFFSET
torch_dtype = None
Collaborator commented:

Should we derive torch_dtype from self.dtype?

@@ -173,59 +176,6 @@ def __str__(self) -> str:
else:
raise RuntimeError("Unknown input shape mode")

def _to_internal(self) -> _C.Input:
Collaborator commented:

Why was this taken out?

use_experimental_rt: bool = False,
) -> TRTModule:
interp = TRTInterpreter(
mod, InputTensorSpec.from_tensors(inputs), explicit_batch_dimension=True
mod, _Input.Input.from_tensors(inputs), explicit_batch_dimension=True
Collaborator commented:

Can this just be Input.from_tensors?

Collaborator Author replied:

yeah

from torch_tensorrt._Input import Input


class TSInput(Input):
Collaborator commented:

This should be a hidden class; we don't want people using this. Handle any conversion to TSInput internally.

Collaborator Author replied:

Yeah, that's the intended usage. In the current design, for TorchScript, users still use the same interface, torch_tensorrt.Input(), and we internally convert it into TSInput in the ts/_compile_spec.py file. Anything different that you have in mind?

Collaborator replied:

No, this would be fine.
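For context, a rough sketch (with stand-in classes, not the real torch_tensorrt code) of the conversion being discussed: users keep passing torch_tensorrt.Input, and the TS path wraps it into TSInput internally, e.g. in ts/_compile_spec.py.

class Input:
    # Stand-in for the user-facing torch_tensorrt.Input.
    def __init__(self, shape, dtype=None):
        self.shape, self.dtype = shape, dtype


class TSInput(Input):
    # TS-only subclass that owns the C++ conversion.
    def _to_internal(self):
        # The real code would build a torch_tensorrt._C.Input here; a plain dict
        # stands in to keep the sketch dependency-free.
        return {"shape": self.shape, "dtype": self.dtype}


def to_ts_input(i: Input) -> TSInput:
    # Internal conversion performed by the TS compile-spec code, hidden from users.
    return TSInput(i.shape, i.dtype)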

@frank-wei (Contributor) left a review comment:

Overall, I cannot approve this PR, for the following reasons:

  1. fx2trt is used internally, and its core APIs are called by different services. So far, we have not built the import mechanism to protect the internal services. (ImportIt is used in other PyTorch projects and runs internal CI that covers not only unit tests but also other production tests for any PR.) So we cannot take the risk of a potentially big change, since it would break our internal services.
  2. For the same reason, any change to the FX path needs to consider backward compatibility. Also consider reducing the PR size for easier review.
     cc @yinghai @wushirong

@@ -153,7 +153,6 @@ def validate_conversion(self):

def run(
self,
max_batch_size=64,
Contributor commented:

I am afraid we cannot make this change. We have to maintain backward compatibility for the API; otherwise, it will break our internal products.

@@ -4,58 +4,59 @@

from .types import Shape, ShapeRange
from .utils import get_dynamic_dims


def generate_input_specs(inputs, lower_setting, additional_inputs=None):
Contributor commented:

Same here. These APIs are used in many internal products.

@narendasan (Collaborator) commented:
@frank-wei, can we expose the old API somewhere else for backwards compatibility but move to the unified one for users? One of the big problems right now is that it is difficult to move from TorchScript to FX, since many of the settings are named or used differently.

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>
@peri044 (Collaborator Author) commented Apr 5, 2023:

Closing this in favour of #1807

@peri044 peri044 closed this Apr 5, 2023
@frank-wei (Contributor) replied, quoting @narendasan:
@frank-wei, can we expose the old API somewhere else for backwards compatibility but move to the unified one for users? One of the big problems right now is that it is difficult to move from TorchScript to FX, since many of the settings are named or used differently.

Specifically, I am thinking the unified version could happen on aten2trt?

@@ -116,6 +117,43 @@ def from_tensors(cls, tensors: Sequence[torch.Tensor]) -> List["InputTensorSpec"
assert isinstance(tensors, (list, tuple))
return [cls.from_tensor(t) for t in tensors]

@classmethod
@frank-wei (Contributor) commented Apr 12, 2023:

I assume we use the input_obj as the general interface.

@@ -262,7 +268,13 @@ def _default_replace_mutable_op_pass(self) -> PassManager:
def build_trt_lower_pipeline(
self, input: Input, additional_input: Optional[Input] = None
) -> PassManager:
self._input = input

Contributor commented:

Can we start a new function instead of changing this build_trt_lower_pipeline behavior?
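One possible shape of this suggestion, sketched with assumed names (build_trt_lower_pipeline_from_input, the lowerer argument, and the example-tensor materialization are all illustrative): leave build_trt_lower_pipeline untouched and add a thin entry point that accepts Input objects.

import torch


def build_trt_lower_pipeline_from_input(lowerer, input_objs, additional_input=None):
    # Hypothetical new entry point: materialize example tensors from the Input
    # objects up front, remember the original specs for TRT lowering, and then
    # delegate to the existing pipeline so its behavior stays unchanged.
    example_tensors = [
        torch.randn(*(i.shape["opt_shape"] if isinstance(i.shape, dict) else i.shape))
        for i in input_objs
    ]
    lowerer._trt_input = input_objs
    return lowerer.build_trt_lower_pipeline(example_tensors, additional_input)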

@@ -21,6 +22,30 @@
FINAL_CHECK_RTOL_MULTIPLIER: float = 10


def extract_example_tensors_from_input(
Contributor commented:

Can we pass real tensors as inputs to the lowering workflow? Why do we need the input_obj as input?
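To the question above, a hedged sketch of how extract_example_tensors_from_input could accept either real tensors or Input objects: real tensors pass through for tracing, while Input specs are materialized into tensors. The signature and device default here are assumptions, not necessarily what the PR implements.

from typing import Any, List, Sequence

import torch


def extract_example_tensors_from_input(
    inputs: Sequence[Any], device: str = "cuda:0"
) -> List[torch.Tensor]:
    example_tensors = []
    for inp in inputs:
        if isinstance(inp, torch.Tensor):
            # Real tensors already work for tracing; pass them through untouched.
            example_tensors.append(inp)
        else:
            # Assume a torch_tensorrt.Input-like object: build a tensor of the
            # right shape (opt shape for dynamic inputs) on the requested device.
            shape = inp.shape["opt_shape"] if isinstance(inp.shape, dict) else inp.shape
            example_tensors.append(torch.randn(*shape, device=device))
    return example_tensors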
