
IP bind error #1141

Closed
qyhdt opened this issue Sep 22, 2023 · 4 comments
Comments

qyhdt commented Sep 22, 2023

Hi, vLLM team,
I hit an error when starting a service according to your documentation; it gives me the error below.
Can anyone help take a look?

Documentation: (screenshot)
Error: (screenshot)

Author

qyhdt commented Sep 22, 2023

It needs two extra arguments: `--host xxxx --port 8000`. Also, if you want the service to be accessible from other machines, the host needs to be set to 0.0.0.0.
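The effect of the host setting can be illustrated with plain Python sockets rather than vLLM itself (a minimal sketch; the variable names are illustrative): binding to 127.0.0.1 accepts only loopback connections, while 0.0.0.0 listens on all interfaces.

```python
import socket

# Bind to loopback only: reachable from this machine but not from others.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))  # port 0 lets the OS pick a free port

# Bind to all interfaces: reachable from other machines (firewall permitting).
everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everywhere.bind(("0.0.0.0", 0))

loop_host = loopback.getsockname()[0]
all_host = everywhere.getsockname()[0]
print(loop_host, all_host)  # 127.0.0.1 0.0.0.0

loopback.close()
everywhere.close()
```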

@viktor-ferenczi
Contributor

You have some other process running on port 8000, or your previous instance is still keeping that port blocked for a few more seconds (a TCP/IP stack (mis)feature).

How to find the existing process running on port 8000:

netstat -nlp | grep 8000

Then just kill it and try starting vLLM again.
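The failure mode described above can be reproduced with plain Python sockets, independent of vLLM (a minimal sketch; names are illustrative): once one socket holds a port, a second bind to the same port fails with `EADDRINUSE`, which is the same OS-level error the server hits on startup.

```python
import errno
import socket

# Grab an arbitrary free port, the way a running server would hold it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]

# A second bind to the same port fails with "Address already in use".
clash = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    clash.bind(("127.0.0.1", port))
    in_use = False
except OSError as exc:
    in_use = exc.errno == errno.EADDRINUSE
finally:
    clash.close()
    server.close()

print(in_use)  # True: the port is held by the first socket
```

The "blocked for a few more seconds" behaviour mentioned above is the TCP TIME_WAIT state after a socket closes; long-running servers typically set `SO_REUSEADDR` to avoid waiting it out.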

@hmellor
Collaborator

hmellor commented Mar 8, 2024

Closing this issue as stale, since there has been no discussion in the past 3 months.

If you are still experiencing the issue you describe, feel free to re-open this issue.

@hmellor hmellor closed this as completed Mar 8, 2024
@pchunduru10

Using the same thread, as I am encountering the same error on version 0.5.0.post1.

I am launching the openai.api_server using the following command and consistently seeing the IP bind error. It works on version 0.4.2, which is my current image. I am looking to upgrade to a newer vLLM version to access features/fixes that were failing on the earlier version.

CMD="python -u -m vllm.entrypoints.openai.api_server \
        --host 0.0.0.0 \
        --port $VLLM_PORT \
        --model $launch_model \
        --tensor-parallel-size $NUM_GPUS \
        --download-dir /data"

Appreciate any help with this.
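One way to narrow a bind error down is a pre-flight check that the port is actually free before launching the server. Sketched here with plain Python sockets; `port_is_free` is a hypothetical helper, not part of vLLM:

```python
import socket

def port_is_free(host: str, port: int) -> bool:
    """Return True if (host, port) can be bound, i.e. no other process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Example: probe the intended address before starting the server.
print(port_is_free("0.0.0.0", 8000))
```

If this prints False, another process (or a lingering previous instance) already holds the port, and the `netstat` recipe above will show which one.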
