
Deploy YOLOv8 Segmentation model #77

Open
Flippchen opened this issue Jun 22, 2024 · 0 comments

Hi there,

You have a tutorial showing how to use the YOLOv8 library with ClearML (https://clear.ml/docs/latest/docs/integrations/yolov8/), and it states that these models are easy to use. I have a few questions:

  1. I am using the yolov8-seg model. I have exported it to ONNX and would like to deploy it on the Triton Inference Server. Is this the intended way?
  2. When deploying with the clearml-serving CLI, I can only specify one output dimension for the model, but the model has two outputs. If I pass two outputs to the CLI command, the first one is overwritten. Is this a bug, or am I doing something wrong?
  3. If that is not possible, I have seen that I can use a custom model/Preprocess class and run inference myself with the ultralytics library outside of Triton. Is it possible to keep the model persistent there, e.g. as an attribute of the Preprocess class, or does the model get reloaded on each request?
  4. If Triton is the preferred way because YOLOv8 supports direct Triton inference, does that work with clearml-serving, or does the wrapper built around it prevent that?
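For context on question 2, a default 640x640 yolov8-seg ONNX export exposes two outputs: `output0` (detections) and `output1` (mask prototypes). A minimal Triton `config.pbtxt` sketch for such a model might look like the fragment below; the names and dims are assumptions based on a default export, so verify them against your own ONNX file (e.g. with Netron) before using this:

```
name: "yolov8_seg_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 0
input [
  { name: "images", data_type: TYPE_FP32, dims: [ 1, 3, 640, 640 ] }
]
output [
  { name: "output0", data_type: TYPE_FP32, dims: [ 1, 116, 8400 ] },
  { name: "output1", data_type: TYPE_FP32, dims: [ 1, 32, 160, 160 ] }
]
```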

Thanks in advance. Maybe you can provide an example of how to do it :)
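Regarding question 3, here is a minimal sketch of the custom `Preprocess` pattern, under the assumption that the serving worker instantiates the class once per endpoint and calls `load()` a single time at startup, so anything stored on `self` persists across requests. The method names follow the clearml-serving custom-engine examples, and the dict is a stand-in for a real `YOLO(...)` model:

```python
# Hedged sketch of a clearml-serving custom Preprocess class.
# Assumption (verify against your clearml-serving version): the class
# is instantiated once per endpoint and load() is called a single time
# with the downloaded model file, so state kept on `self` persists.
class Preprocess:
    def __init__(self):
        # Created once per endpoint, not once per request.
        self._model = None

    def load(self, local_file_name):
        # Load the model here (e.g. YOLO(local_file_name)) rather than
        # inside process(), so it is NOT reloaded on every request.
        self._model = {"weights": local_file_name}  # stand-in for the model
        return self._model

    def process(self, data, state=None, collect_custom_statistics_fn=None):
        # Every request reuses the model loaded in load().
        return {"loaded_from": self._model["weights"], "input": data}
```

If loading in `load()` works this way, the model behaves exactly like a class attribute kept for the lifetime of the endpoint, rather than being re-created per request.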
