
[Bug]: Docker image for 0.5.4 does not include package timm==0.9.10 to run MiniCPMV #8107

Closed
1 task done
bjornjee opened this issue Sep 3, 2024 · 4 comments · Fixed by #8792
Labels
bug Something isn't working

Comments

@bjornjee

bjornjee commented Sep 3, 2024

Your current environment

nil

🐛 Describe the bug

Running the 0.5.4 Docker image with the following entrypoint:

python3 -m vllm.entrypoints.openai.api_server --port 8088 --load-format auto --gpu-memory-utilization 0.90 --enforce-eager --disable-log-requests --model HwwwH/MiniCPM-V-2 --served-model-name HwwwH/MiniCPM-V-2 --trust-remote-code --max-model-len 4096 --tensor-parallel-size 1

The server errors out with the following message (code ref):

install timm==0.9.10

I am willing to create an MR for the fix.

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
bjornjee added the bug (Something isn't working) label Sep 3, 2024
@jeejeelee
Collaborator

timm needs to be installed locally, rather than as a common module.
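
For example (a minimal sketch; run it in whatever Python environment serves vLLM, e.g. inside the running container or a derived image):

    pip install timm==0.9.10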

@bjornjee
Author

bjornjee commented Sep 3, 2024

Curious if there is a reason why it is not baked into the Dockerfile. For users who run the Docker image directly, it would be better if the image had all required dependencies.

@jeejeelee
Collaborator

> Curious if there is a reason why it is not baked into the Dockerfile. For users who run the Docker image directly, it would be better if the image had all required dependencies.

This module is already included in https://github.com/vllm-project/vllm/blob/main/requirements-test.txt.
As for the Dockerfile, if I understand correctly, it should not need to be manually installed again; see: https://github.com/vllm-project/vllm/blob/main/Dockerfile#L124

@DarkLight1337
Member

> Curious if there is a reason why it is not baked into the Dockerfile. For users who run the Docker image directly, it would be better if the image had all required dependencies.

Different models may have their own dependencies, with some defined on the HuggingFace repo and thus outside of our control. It would be quite inefficient to install the dependencies for every model when you will only use a couple of them.

In any case, it should be easy to create a Dockerfile that adds an extra step on top of the existing one to install the dependency.
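
A minimal sketch of such a derived image (the base image name and tag are assumptions; substitute the 0.5.4 image you actually deploy):

    # Dockerfile: extend the published vLLM image with the model-specific dependency.
    # Base image name/tag is an assumption; use the image you actually run.
    FROM vllm/vllm-openai:v0.5.4

    # MiniCPM-V-2 requires this exact timm version at runtime.
    RUN pip install timm==0.9.10

Build it, then run with the same entrypoint as before:

    docker build -t vllm-minicpmv:0.5.4 .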
