
Support VLM model and GPT4V API #2058

Closed
xunfeng1980 opened this issue Dec 12, 2023 · 3 comments

Comments

@xunfeng1980

VLM models: Qwen-VL, LLaVA, etc.
VLM API: the GPT-4V API (https://platform.openai.com/docs/guides/vision)
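For context, a GPT-4V-style request (per the OpenAI vision guide linked above) interleaves `text` and `image_url` parts inside a single message's `content` list. A minimal sketch of such a payload, with a placeholder model name and image URL:

```python
# Sketch of an OpenAI-style vision chat request payload, following the
# format in https://platform.openai.com/docs/guides/vision.
# The model name and image URL below are placeholders, not real resources.
payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                # Text and image parts are interleaved in one content list.
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
    "max_tokens": 300,
}
```

Supporting this shape server-side is what lets existing OpenAI clients talk to a local VLM unchanged.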

@l4b4r4b4b4

I strongly second this. However, LLaVA 1.6 models are very peculiar :D

@Reichenbachian

It's fine if it's peculiar. We can just return an error (BadRequest) for anything the model can't serve — for instance, a request that doesn't have the image first, or one with multiple images — while still keeping the same format. We shouldn't design the endpoint around the model; the model should fit into the endpoint.
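The idea above — accept the standard format but reject model-incompatible requests with a BadRequest — can be sketched as a small validation step. This is a hypothetical illustration, not vLLM's actual API; the function name and error text are made up:

```python
# Hypothetical server-side validation sketch: keep the OpenAI-style
# content format, but reject (as HTTP 400 BadRequest) requests a
# single-image, image-first model cannot serve.
def validate_content(content: list[dict]) -> None:
    # Positions of image parts within the message's content list.
    image_positions = [
        i for i, part in enumerate(content) if part.get("type") == "image_url"
    ]
    if len(image_positions) > 1:
        raise ValueError("BadRequest: model supports at most one image")
    if image_positions and image_positions[0] != 0:
        raise ValueError("BadRequest: image must be the first content part")
```

Text-only requests pass through untouched, so the same endpoint serves both plain chat and vision traffic.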

@ywang96
Member

ywang96 commented Jun 7, 2024

Closing this as we merged #5237

@ywang96 ywang96 closed this as completed Jun 7, 2024