Cluster workers not sharing ports after reopening a listener #6693
addaleax added the cluster (Issues and PRs related to the cluster subsystem) and net (Issues and PRs related to the net subsystem) labels on May 11, 2016
santigimeno added a commit to santigimeno/node that referenced this issue on May 27, 2016:

It allows reopening a server after it has been closed.
Fixes: nodejs#6693
PR-URL: nodejs#6981
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ron Korving <ron@ronkorving.nl>
Reviewed-By: James M Snell <jasnell@gmail.com>
Fixed by 0c29436
Fishrock123 pushed a commit to Fishrock123/node that referenced this issue on May 30, 2016:

It allows reopening a server after it has been closed.
Fixes: nodejs#6693
PR-URL: nodejs#6981
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ron Korving <ron@ronkorving.nl>
Reviewed-By: James M Snell <jasnell@gmail.com>
Based on the responses to this question, I'm trying to figure out why having multiple workers call `server.listen()` on the same port/address doesn't cause any issues, but having an old worker call `server.close()` followed by a `server.listen()` on the same port will repeatedly give the error `EADDRINUSE`.

It does not seem to be a case of the listener not closing correctly, as a `close` event is emitted, which is when I attempt to set up the new listener. While this worker is getting `EADDRINUSE`, newly spawned workers are able to call `server.listen()` with no issues.

Here is a simple test that will demonstrate the problem (a sketch of such a test appears below). As workers are forked every 100ms, they will establish a listener on port 16000. When worker 10 is forked, it will establish a timeout to tear down its listener after 1s. Once a `close` event is emitted, it will attempt to call `server.listen()` on port 16000 again and get the `EADDRINUSE` error. For consistency, this test explicitly provides the same address during binding to avoid any potential issues with core modules dealing with a `null` address.

This particular implementation will cause worker 10 to take up all cycles once it hits the error during binding, thereby keeping the master process from forking new workers. If a delay is added before calling `server.listen()`, worker 10 will still continue to hit `EADDRINUSE` while the master continually forks new workers that are capable of establishing listeners.

Initial output from this test case:
(This issue has all the same information as this StackOverflow post that I posted a couple days back.)
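The original test code is not included above, so the following is only a reconstruction from the description: a `net` server in each worker, a fork every 100ms in the master, explicit host binding, and a close/re-listen cycle in worker 10. Logging, the retry-on-error behavior, and other details are assumptions.

```js
'use strict';
// Sketch of the test described above; exact structure and logging are guesses.
const cluster = require('cluster');
const net = require('net');

const PORT = 16000;
const HOST = '127.0.0.1'; // the address is provided explicitly, as described

if (cluster.isMaster) {
  // Fork a new worker every 100ms.
  setInterval(() => cluster.fork(), 100);
} else {
  const server = net.createServer();

  server.on('error', (err) => {
    // In the scenario described, worker 10 lands here with EADDRINUSE and
    // keeps retrying, which is what ends up taking all cycles.
    console.error(`worker ${cluster.worker.id}: ${err.code}`);
    server.listen(PORT, HOST);
  });

  server.listen(PORT, HOST, () => {
    console.log(`worker ${cluster.worker.id} listening`);
  });

  if (cluster.worker.id === 10) {
    // Tear down the listener after 1s, then re-listen once 'close' fires.
    setTimeout(() => {
      server.once('close', () => {
        console.log(`worker ${cluster.worker.id} closed, listening again`);
        server.listen(PORT, HOST);
      });
      server.close();
    }, 1000);
  }
}
```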
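As noted in the referenced commit (nodejs#6981, 0c29436), the fix allows reopening a server after it has been closed, so a close-then-listen sequence in a worker should no longer keep hitting `EADDRINUSE`. A minimal sketch of that sequence; the port, host, and logging here are only illustrative:

```js
'use strict';
// Sketch of the close-then-listen sequence the referenced fix allows.
const cluster = require('cluster');
const net = require('net');

if (cluster.isMaster) {
  const worker = cluster.fork();
  worker.on('exit', () => process.exit(0));
} else {
  const server = net.createServer();

  server.listen(16000, '127.0.0.1', () => {
    console.log('worker listening');

    server.close(() => {
      // With the fix, this second listen() succeeds in the worker instead
      // of repeatedly emitting EADDRINUSE.
      server.listen(16000, '127.0.0.1', () => {
        console.log('worker listening again after close');
        process.exit(0);
      });
    });
  });
}
```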