✨[Feature] Output buffer optimization in runtime module #3275

Closed
@keehyuna

Description

Is your feature request related to a problem? Please describe.

Output buffer optimization in runtime module

Describe the solution you'd like

  • Assuming that the input shape does not change frequently, the output buffer for a run is created during the previous forward()
  • Hide allocation latency by creating the tensor for the next output buffer ahead of time
  • CUDA execution and CPU work (preparing the next output buffer) can potentially overlap
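The idea above could be sketched roughly as follows. This is a hypothetical illustration, not Torch-TensorRT's actual runtime code: `PrefetchBufferModule`, `inner`, and `_next_buffer` are made-up names, and the example runs on CPU (with an asynchronous CUDA backend, the CPU-side allocation of the next buffer would overlap with device execution).

```python
import torch

class PrefetchBufferModule(torch.nn.Module):
    """Hypothetical sketch: allocate the tensor for the *next* output
    buffer during the current forward() call."""

    def __init__(self, inner: torch.nn.Module):
        super().__init__()
        self.inner = inner
        self._next_buffer = None  # output buffer prepared by the previous call

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        result = self.inner(x)
        if self._next_buffer is not None and self._next_buffer.shape == result.shape:
            # Reuse the buffer allocated during the previous forward().
            out = self._next_buffer
            out.copy_(result)
        else:
            # First call, or the input/output shape changed: fall back
            # to the freshly computed tensor.
            out = result
        # Prepare a new buffer for the next call. Allocating a fresh tensor
        # per call also avoids clobbering outputs still held by the caller.
        self._next_buffer = torch.empty_like(result)
        return out

m = PrefetchBufferModule(torch.nn.ReLU())
a = m(torch.tensor([-1.0, 2.0]))  # first call: no prepared buffer yet
b = m(torch.tensor([3.0, -4.0]))  # second call: uses the prefetched buffer
```

Because each forward() hands out a distinct tensor and only prefetches the buffer for the following call, `a` remains valid after the second invocation.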

Describe alternatives you've considered

If the runtime module maintains persistent output buffers across multiple inference runs, it can reuse previously allocated memory for output tensors, potentially improving performance by reducing memory allocation overhead. However, this cannot handle live tensors from a previous invocation: a second invocation of the model would overwrite the output buffer of the previous run.
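The aliasing hazard that rules out this alternative can be shown with a minimal sketch (hypothetical names; `PersistentBufferModule` is not a real API):

```python
import torch

class PersistentBufferModule(torch.nn.Module):
    """Hypothetical sketch of the rejected alternative: one persistent
    output buffer reused across every invocation."""

    def __init__(self, inner: torch.nn.Module):
        super().__init__()
        self.inner = inner
        self._buffer = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        result = self.inner(x)
        if self._buffer is None or self._buffer.shape != result.shape:
            self._buffer = torch.empty_like(result)
        self._buffer.copy_(result)
        return self._buffer  # every call returns the *same* tensor object

m = PersistentBufferModule(torch.nn.ReLU())
first = m(torch.tensor([-1.0, 2.0]))   # holds [0., 2.] for now
second = m(torch.tensor([3.0, -4.0]))  # overwrites the shared buffer
# `first` aliases `second`: the live tensor from run 1 has been clobbered.
```

This is exactly the failure mode described above: the caller may still hold `first`, but its contents were silently replaced by the second run.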

Additional context
