Async Inference #7464

Open
@verdie-g

Description

Is your feature request related to a problem? Please describe.

I'd like to use NVIDIA Triton for my inference calls. Since it's a remote gRPC server, I would need async APIs. I'm not super familiar with ML.NET yet, but it seems like it currently does not support that.

Describe the solution you'd like

Async APIs to do an inference (e.g. PredictionEngine.PredictAsync).

Would that make sense?
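To make the request concrete, here is a rough sketch of what such an API surface might look like. This is purely hypothetical: `PredictAsync`, `IAsyncPredictionEngine`, and the Triton-backed implementation below are not part of ML.NET; they only illustrate the shape being requested.

```csharp
using System.Threading;
using System.Threading.Tasks;

// Hypothetical interface — ML.NET's PredictionEngine is synchronous today.
// An async counterpart would let a remote scorer (e.g. an NVIDIA Triton
// gRPC endpoint) be awaited instead of blocking a thread per prediction.
public interface IAsyncPredictionEngine<TSrc, TDst>
    where TSrc : class
    where TDst : class, new()
{
    Task<TDst> PredictAsync(TSrc input, CancellationToken cancellationToken = default);
}
```

A caller would then write `var result = await engine.PredictAsync(input);`, which composes naturally with ASP.NET Core request handling and avoids tying up thread-pool threads while waiting on the network round-trip to Triton.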

Metadata

    Labels

    enhancement (New feature or request), untriaged (New issue has not been triaged)
