
IP Bind Error on v0.5.0.post1 #5867

Closed

pchunduru10 opened this issue Jun 26, 2024 · 15 comments · Fixed by #5873

Comments

@pchunduru10

Posting in the same thread, as I am encountering the same error on version 0.5.0.post1.

I am launching the OpenAI API server using the following command and consistently seeing the IP bind error. It works on version 0.4.2, which is my current image. I am looking to upgrade to a newer vLLM version to access features/fixes that were failing on the earlier version.

CMD="python -u -m vllm.entrypoints.openai.api_server \
        -- host 0.0.0.0 \
         --port $VLLM_PORT \
        --model $launch_model \
        --tensor-parallel-size $NUM_GPUS \
        --download-dir /data"

Error:

ERROR:    [Errno 98] error while attempting to bind on address ('0.0.0.0', 9000): address already in use

Appreciate any help with this.

Originally posted by @pchunduru10 in #1141 (comment)

@youkaichao
Member

That port is already in use. You need to change the value of VLLM_PORT.
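
If the address were genuinely held by another process, a quick check with standard Linux tooling (not vLLM-specific; 9000 is the port from the error above) would show the owner:

# List listening TCP sockets and the owning process on port 9000
ss -ltnp | grep ':9000'
# or, if lsof is available:
lsof -iTCP:9000 -sTCP:LISTEN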

@pchunduru10
Author

I have tried a bunch of different ports; none of them have worked so far.

@youkaichao
Member

You can talk to your admin to see if there are any firewall rules.

@pchunduru10
Author

I am able to access them on 0.4.2, so I am wondering what has changed with the 0.5.0 and post1 versions, since nothing on my side has changed with respect to the firewall.

@youkaichao
Member

Oh, VLLM_PORT is actually not the port the server will listen on; it is a separate environment variable that determines the port vLLM uses for internal communication. See https://docs.vllm.ai/en/latest/serving/env_vars.html for details.

You need to change the env name.
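
One way to confirm the collision, as a minimal sketch: dump any variables with vLLM's reserved prefix in the environment where the server is launched.

# Any VLLM_* variables here are read by vLLM itself, so a VLLM_PORT
# set for your own scripting gets picked up as the internal port too
env | grep '^VLLM_'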

@pchunduru10
Author

pchunduru10 commented Jun 26, 2024

Are these not the CLI args for host and port set in the Python CMD: python -u -m vllm.entrypoints.openai.api_server CLI-ARGS?

@youkaichao
Member

--port is a CLI arg.
VLLM_PORT is a reserved env var used by vLLM.

@pchunduru10
Author

I am setting a local env var called VLLM_PORT in my Docker container, so it's essentially just passing the value to the vLLM instance/API server, something like this:
python -u -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 9000

@youkaichao
Member

The local env var called VLLM_PORT conflicts with vLLM. Please use another name.
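
For reference, a minimal sketch of that fix, assuming the variable exists only to pass the listen port to the CLI (API_PORT is an arbitrary replacement name, not anything vLLM reserves):

# Rename the container-local variable so it no longer shadows
# vLLM's reserved VLLM_PORT environment variable
export API_PORT=9000
python -u -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port "$API_PORT" \
    --model "$launch_model" \
    --tensor-parallel-size "$NUM_GPUS" \
    --download-dir /data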

@pchunduru10
Author

Yes, it worked! So it was the conflict with the defined env name. Thanks much!

@robertgshaw2-neuralmagic
Collaborator

@youkaichao do you think we should update our internal ENV variable names to be a bit less likely to conflict with user environment variables?

  • e.g. something like VLLM_SYS_{EXISTING_NAME}

@youkaichao
Member

That's too much of a maintenance burden. I prefer to warn users that they should use another prefix to avoid conflicts.

@frittentheke
Contributor

frittentheke commented Jun 27, 2024

In case someone else runs into this on Kubernetes:

Don't name the Service targeting your vLLM pods vllm, as this will cause variables like VLLM_PORT to be set by the kubelet (see https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables), and this variable is used by vLLM itself.
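
As a sketch (the pod name here is hypothetical), you can check whether the kubelet injected such variables, and disable the injection per pod if renaming the Service is not an option:

# A Service named "vllm" produces docker-link style variables in the
# pod's environment, e.g. VLLM_PORT=tcp://10.96.0.10:8000
kubectl exec vllm-0 -- env | grep '^VLLM_'
# Alternatively, set enableServiceLinks: false in the pod spec so the
# kubelet stops injecting these Service variables altogether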

@youkaichao
Member

@frittentheke thanks for the report, updated the doc at #5916

@jfcherng

> Oh, VLLM_PORT is actually not the port the server will listen on; it is a separate environment variable that determines the port vLLM uses for internal communication. See https://docs.vllm.ai/en/latest/serving/env_vars.html for details.
>
> You need to change the env name.

Thank you @youkaichao. This saved my day.
