Hi there,

You have a tutorial up showing how to use the yolov8 library with ClearML (https://clear.ml/docs/latest/docs/integrations/yolov8/), and you also state that these models are easy to use. I have a few questions:

1. I am using the yolov8-seg model. I have exported it to ONNX and would like to deploy it on the Triton Inference Server. I think this is the intended way, am I right?
2. When I deploy it, I can only pass one output dimension for the model with the clearml-serving CLI. The model has two outputs, though, and if I put two outputs into the CLI command, the first one gets overwritten. Is this a bug, or am I doing something wrong? (See the command sketch after this list.)
3. If that is not possible, I have seen that I can use a custom model/Preprocess together with the ultralytics library and run the inference myself outside of Triton. Is it possible there to keep the model persistent, e.g. as a class variable of the Preprocess class, or does the model get reloaded on every request? (See the class sketch after this list.)
4. If Triton is the preferred way because yolov8 supports direct Triton inference, does that work with clearml-serving, or does the wrapper built around it get in the way? (See the last snippet below for what I mean.)
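For reference, question 2 refers to a command roughly like the one below. The endpoint name, model id, and tensor names/shapes come from my yolov8-seg ONNX export and are placeholders, so they may not match yours exactly:

```bash
clearml-serving --id <service_id> model add \
    --engine triton \
    --endpoint "yolov8_seg" \
    --model-id <model_id> \
    --input-size 1 3 640 640 --input-name "images" --input-type float32 \
    --output-size 1 116 8400 --output-name "output0" --output-type float32 \
    --output-size 1 32 160 160 --output-name "output1" --output-type float32
```

When I repeat the `--output-*` flags like this, the second set seems to overwrite the first, so only one output ends up in the Triton config.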
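For question 3, this is a minimal sketch of the `preprocess.py` I have in mind, modeled on the custom-engine examples in the clearml-serving repo (method signatures taken from there; whether `load()` really runs only once per instance is exactly what I am asking):

```python
from ultralytics import YOLO


class Preprocess(object):
    """Custom-engine preprocess class, following the clearml-serving examples."""

    def __init__(self):
        self.model = None

    def load(self, local_file_name):
        # If load() is only called once when the endpoint comes up,
        # keeping the model on self would avoid reloading it per request.
        self.model = YOLO(local_file_name)

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # Pull the image out of the request body; the "image" key is just
        # my assumption for how the request payload would look.
        return body["image"]

    def process(self, data, state, collect_custom_statistics_fn=None):
        # Run segmentation inference outside of Triton via ultralytics.
        return self.model(data)

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        return data
```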
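And for question 4, by "direct triton inference" I mean the pattern from the Ultralytics Triton Inference Server guide, where the URL and endpoint name below are placeholders for my setup:

```python
from ultralytics import YOLO

# Point ultralytics directly at a model served by Triton (HTTP endpoint).
model = YOLO("http://localhost:8000/yolov8_seg", task="segment")
results = model("image.jpg")  # inference runs on the Triton server
```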
Thanks in advance. Maybe you can provide an example on how to do it :)