bpo-31804: multiprocessing calls flush on sys.stdout at exit even if … #5492
Conversation
…it is None (pythonw)

If you start Python with pythonw.exe on Windows, sys.stdout and sys.stderr are set to None. If you also use multiprocessing, then when the child process finishes, BaseProcess._bootstrap calls sys.stdout.flush() and sys.stderr.flush() in its finally clause. This causes the process return code to be nonzero (it is 1).
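The guard the PR adds can be sketched outside of multiprocessing (a minimal sketch; `safe_flush` is an illustrative name, not the PR's actual code):

```python
import io

def safe_flush(stream):
    # Under pythonw.exe, sys.stdout and sys.stderr are None, so an
    # unconditional stream.flush() at process exit raises AttributeError.
    # Guard for None and for streams that were already closed.
    if stream is not None and not stream.closed:
        stream.flush()

safe_flush(None)         # pythonw case: nothing to do, no AttributeError
buf = io.StringIO()
buf.write('hello')
safe_flush(buf)          # ordinary stream: flushed normally
buf.close()
safe_flush(buf)          # closed stream: skipped
```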
Hello, and thanks for your contribution! I'm a bot set up to make sure that the project can legally accept your contribution by verifying you have signed the PSF contributor agreement (CLA). Unfortunately we couldn't find an account corresponding to your GitHub username on bugs.python.org (b.p.o) to verify you have signed the CLA (this might simply be due to a missing "GitHub Name" entry in your b.p.o account settings). This is necessary for legal reasons before we can look at your contribution. Please follow the steps outlined in the CPython devguide to rectify this issue. Thanks again for your contribution, and we look forward to looking at it!
Thanks for posting this. Before going forward, you'll need to do two things:
I'll also post some comments on your PR.
Lib/multiprocessing/process.py
Outdated
```diff
@@ -314,8 +314,10 @@ def _bootstrap(self):
     finally:
         threading._shutdown()
         util.info('process exiting with exitcode %d' % exitcode)
-        sys.stdout.flush()
-        sys.stderr.flush()
+        if sys.stdout is not None and not sys.stdout.closed:
```
A minor issue, but I'd like to stick to the idiom already used in the previous PR: https://github.com/python/cpython/pull/4073/files#diff-d6a41af14d72f372f37ced0cc15c1f58L17
Of course. It's the "easier to ask forgiveness than permission" (EAFP) idiom.
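The EAFP variant referenced here can be sketched as follows (illustrative only; `flush_eafp` is not a real CPython name, and which exceptions a closed stream raises depends on the stream type):

```python
import io

def flush_eafp(stream):
    # EAFP: attempt the flush and swallow the failures an unusable stream
    # produces (AttributeError when the stream is None; ValueError is what
    # some file types raise once closed).
    try:
        stream.flush()
    except (AttributeError, ValueError):
        pass

flush_eafp(None)             # stream is None under pythonw.exe: swallowed
f = io.StringIO()
f.close()
flush_eafp(f)                # closed stream: any such error is swallowed
```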
Lib/test/_test_multiprocessing.py
Outdated
```diff
@@ -653,6 +653,47 @@ def test_forkserver_sigkill(self):
         self.check_forkserver_death(signal.SIGKILL)


+class TestStdOutAndErr(unittest.TestCase):
```
Perhaps you can move these tests closer to the test added by the other PR:
https://github.com/python/cpython/pull/4073/files#diff-b046ab474480855fb4a01de88cfc82bbR585
I also wonder if those tests can be unified. They seem to be doing very similar things.
My first guess was to update that test case too, but someone has disabled it by adding a _ prefix to the class name. The test module itself is also prefixed with _, so these tests are not run automatically, and I couldn't make that test case work.
Also, that test seemed to exercise inter-process Event handling, so I decided to stick to the strict and often impractical "only test one thing at a time" principle.
I think the _test_multiprocessing.py module should be refurbished, but I don't think I am the right person for that.
Ha. You misunderstood how _test_multiprocessing works :-) The test classes there are used as base classes for the actual tests in test_multiprocessing_fork, test_multiprocessing_forkserver and test_multiprocessing_spawn. So you just have to run one of those three test modules to get the tests to work! (And that's why those test classes have an underscore prefix.)
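The pattern described above can be sketched like this (class names are illustrative, not the actual ones in Lib/test):

```python
import unittest

class _StreamTestsBase:
    # Shared test logic, as in Lib/test/_test_multiprocessing.py. It does not
    # subclass TestCase, so unittest never collects it directly; the leading
    # underscore signals that it is only a base class.
    start_method = None

    def test_start_method_configured(self):
        self.assertIn(self.start_method, {'fork', 'forkserver', 'spawn'})

# The concrete test modules (test_multiprocessing_fork and friends) combine
# the base with TestCase and a concrete start method:
class ForkStreamTests(_StreamTestsBase, unittest.TestCase):
    start_method = 'fork'

class SpawnStreamTests(_StreamTestsBase, unittest.TestCase):
    start_method = 'spawn'
```

Running either concrete module then exercises the same shared logic under a different start method.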
Whoa! :) Now it all makes sense! I will make yet another commit (3rd) and unify the test cases.
Lib/test/_test_multiprocessing.py
Outdated
```python
    def test_no_stdio(self):
        """
        bpo-31804: If you start Python by pythonw then sys.stdout and
```
I think there's no need for the docstring (or comment) to be that long. Just put the reference to the issue number + a 2- or 3-line summary.
I trimmed the docstring.
A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers that would be appreciated. Once you have made the requested changes, please leave a comment on this pull request containing the phrase
…it is None (pythonw.exe)
I have made the requested changes; please review again
Thanks for making the requested changes! @pitrou: please review the changes made to this pull request.
…dd close method to _file_like
I unified the unit tests. I recognized that the previous test did not test the spawn method correctly, because it set the

```python
import sys
import tempfile
import multiprocessing

def print_stderr():
    print('stderr:', sys.stderr.fileno(), sys.stderr)
    print('__stderr__:', sys.__stderr__.fileno(), sys.__stderr__)

if __name__ == '__main__':
    sys.stderr = tempfile.TemporaryFile('w')
    p = multiprocessing.Process(target=print_stderr)
    print('Parent:')
    print_stderr()
    print('Child:')
    p.start()
    p.join()
```

According to the documentation of multiprocessing:
I'm not so sure what "unnecessary" means here, but I have read that inheriting file descriptors can pose a security risk. Can this be the reason why custom outputs are not inherited? Or is this another bug?
Anyhow, I replaced the similar test with mine, because mine changes stdout and stderr in the child process, so it does not depend on this behaviour; I then extended my test with the Event check. Also, a plain StringIO as a substitute for stdout and stderr is not good enough, because its flush method doesn't raise an exception if the stream was closed before.
I have made the requested changes; please review again
Thanks for making the requested changes! @pitrou: please review the changes made to this pull request.
@poxthegreat thanks for the updates! Actually we need to test both cases: the one where stdout/stderr are mutated before launching the process, and the one where they are mutated inside the child process. I'll produce a new PR with the required changes.
I'm closing this PR in favor of PR #6079. |
https://bugs.python.org/issue31804