Description
Is your feature request related to a problem? Please describe.
I'd like to use NVIDIA Triton Inference Server for my inferences. Since it's a remote gRPC server, I would need to make the calls asynchronously. I'm not super familiar with ML.NET yet, but it seems like ML.NET does not currently support async inference.
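For context, this is roughly what scoring looks like today with the synchronous API (the model path and `ModelInput`/`ModelOutput` types below are placeholders, just to illustrate the point): `Predict` blocks the caller, so a scoring path that has to reach out to a remote gRPC endpoint like Triton has no way to await the network call.

```csharp
using Microsoft.ML;

public class ModelInput  { public float[] Features { get; set; } }
public class ModelOutput { public float[] Score { get; set; } }

class Demo
{
    static void Main()
    {
        var mlContext = new MLContext();

        // Placeholder model file, just to show the current call shape.
        ITransformer model = mlContext.Model.Load("model.zip", out _);

        // CreatePredictionEngine / Predict are synchronous today,
        // so any remote (gRPC) scoring backend can only block here.
        var engine = mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(model);
        ModelOutput result = engine.Predict(new ModelInput { Features = new float[] { 1f, 2f, 3f } });
    }
}
```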
Describe the solution you'd like
Async APIs for running inference (e.g. PredictionEngine.PredictAsync).
Would that make sense?
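To illustrate, here is a rough sketch of what such a surface might look like. This is a hypothetical shape only, not an existing ML.NET API; the interface name and signature are just my assumption of what would let a Triton-backed engine await its gRPC call:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Hypothetical async counterpart to PredictionEngine<TSrc, TDst>.Predict.
// A prediction engine backed by a remote Triton gRPC endpoint could then
// await the network round trip instead of blocking a thread.
public interface IAsyncPredictionEngine<TSrc, TDst>
    where TSrc : class
    where TDst : class, new()
{
    Task<TDst> PredictAsync(TSrc input, CancellationToken cancellationToken = default);
}

// Intended usage:
// ModelOutput output = await engine.PredictAsync(input, cancellationToken);
```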