❓ [Question] How do you export a Triton kernel with a model to a serialized engine that can be run in C++? #3469

Open
@cmgreen210

Description

❓ Question

How do you export a Triton kernel with a model to a serialized engine that can be run in C++?

What you have already tried

Read through the Python examples.
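For context, the serialization half of the question can be sketched with Torch-TensorRT's TorchScript frontend, which produces a module that `torch.jit.save` can write out and `torch::jit::load` can read back in C++ (with the libtorchtrt runtime linked). This is a hedged sketch, not from the issue: `MyModel`, the input shape, and the output path are placeholder assumptions, and it does not cover the Triton-kernel part, which would need the kernel exposed to TensorRT (e.g. as a plugin) and is the open question here.

```python
def export_trt_torchscript(path="trt_model.ts"):
    """Compile a model with Torch-TensorRT's TorchScript frontend and
    save a .ts file loadable from C++ via torch::jit::load.

    Returns the output path on success, or None when torch/torch_tensorrt
    (or a CUDA device) is unavailable, so the sketch degrades gracefully.
    """
    try:
        import torch
        import torch_tensorrt
    except ImportError:
        # Environment without the libraries: nothing to do.
        return None
    if not torch.cuda.is_available():
        return None

    class MyModel(torch.nn.Module):  # placeholder for the user's model
        def forward(self, x):
            return torch.relu(x)

    model = MyModel().eval().cuda()
    # ir="ts" selects the TorchScript frontend; the result is a
    # ScriptModule that standard TorchScript serialization handles.
    trt_ts = torch_tensorrt.compile(
        model,
        ir="ts",
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    )
    torch.jit.save(trt_ts, path)
    return path


if __name__ == "__main__":
    print(export_trt_torchscript())
```

On the C++ side the saved file would be loaded with `torch::jit::load("trt_model.ts")` while linking against the Torch-TensorRT runtime library, so no Python is needed at inference time.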

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • PyTorch Version (e.g., 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

Metadata

Labels

question (Further information is requested)
