distutils.spawn should use subprocess (hang in parallel builds on QNX) #83944
The attached code causes the child processes to hang on QNX. The hang is caused by the logging module trying to acquire the module lock while in an atfork() handler. In a system where semaphore state is kept in user mode and is thus inherited from the parent on fork() the semaphore may appear to have a value of 0, and thus will never be posted to in the single-threaded child. |
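The failure mode described above can be sketched in a few lines of pure Python (a POSIX-only illustration, not the attached reproducer): when another thread holds a lock at the moment of fork(), the child inherits the locked state but not the thread that would release it.

```python
import os
import threading
import time

lock = threading.Lock()
ready = threading.Event()

def hold_lock():
    with lock:
        ready.set()        # tell the main thread the lock is held
        time.sleep(0.5)    # keep holding it across the fork()

t = threading.Thread(target=hold_lock)
t.start()
ready.wait()               # another thread now holds the lock

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # In the child, the lock-holding thread does not exist, so nobody
    # will ever release the lock: a plain lock.acquire() would hang
    # forever, just like the at-fork handler in the report above.
    got = lock.acquire(timeout=0.1)
    os.write(w, b"1" if got else b"0")
    os._exit(0)

os.close(w)
child_got_lock = os.read(r, 1) == b"1"
os.waitpid(pid, 0)
t.join()
print("child could acquire the inherited lock:", child_got_lock)  # False
```

The child uses a timed acquire only so the demo terminates; the hang in the report is exactly this acquire without a timeout.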
Note the original problem stems from using "setup.py -j <NJOBS>". The attached reproducer is a bit of a red herring, because indeed doing fork() + exec() in pure Python is too fragile. That is why the standard subprocess does it in C. The problem here is that distutils.spawn does exactly that in pure Python. So the solution to the OP's actual problem should be to recode distutils.spawn to use subprocess. Original mailing-list discussion at: |
I'm not convinced that a multi-threaded fork()+exec() from C would be any better, unless the Python code goes to great lengths to avoid any non-async-signal-safe operations between the fork() and the exec(). |
@antoine: I can sort of see why you consider the small example a red herring, since it mixes threads and fork. Nevertheless, a simpler version (not using an executor) would probably work just fine? It's the edge cases where the mixture becomes explosive. So I still think there's something that could be improved in ThreadPoolExecutor.

@elad: Your script contains a bug (there is no definition of 'exec_fn'). But anything that returns or raises from the child process does indeed hang. Since I can repro this on macOS, I do think QNX is a red herring here. |
"Your script contains a bug (there is no definition of 'exec_fn')" Told you I wasn't much of a Python developer ;-) Your last comment wasn't clear - do you see this problem on macOS? Is it only after you fixed the example? For me it hangs with the bug, as the problem occurs before the call to exec_fn(). |
When I run this on macOS, either with the bug or with the buggy line commented out, I get the same hang. When I interrupt it, I get:

```
Parent 78918
Parent 78919
Child 78918
Parent 78920
Child 78919
Parent 78921
Child 78920
Child 78921
C-c C-cTraceback (most recent call last):
  File "tt.py", line 18, in <module>
    fut.result()
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 434, in result
    self._condition.wait(timeout)
  File "/usr/local/lib/python3.8/threading.py", line 302, in wait
    waiter.acquire()
KeyboardInterrupt
Exception in worker
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 78, in _worker
    work_item = work_queue.get(block=True)
KeyboardInterrupt
Exception in worker
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 78, in _worker
    work_item = work_queue.get(block=True)
KeyboardInterrupt
Exception in worker
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 78, in _worker
    work_item = work_queue.get(block=True)
KeyboardInterrupt
Exception in worker
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 78, in _worker
    work_item = work_queue.get(block=True)
KeyboardInterrupt
Done 78918 78917
Done 78921 78917
Done 78919 78917
Done 78920 78917
```

(I added some prints to the script. Here's the script I ran:)

```python
import os
from concurrent.futures import ThreadPoolExecutor

def new_process(arg):
    pid = os.fork()
    if pid == 0:
        print("Child", os.getpid())
        ## os.execl("/usr/bin/true", "/usr/bin/true")
    else:
        print("Parent", pid)
        pid, status = os.waitpid(pid, 0)
        print("Done", pid, os.getpid())

executor = ThreadPoolExecutor(max_workers=4)
futures = [executor.submit(new_process, None)
           for i in range(0, 4)]
for fut in futures:
    fut.result()
```

When I uncomment the os.execl() line, the program runs and completes. |
"When I uncomment the os.execl() line, the program runs and completes." In that case I'm not sure it is the same issue. The child processes in your case executed their part of the new_process function, which then returned. Nevertheless from the trace it looks like none of the children exited. So you are not hitting the logging lock issue, but may be hitting a similar problem in another module. Is the logging module always enabled, or does it depend on the environment? |
The key is that os.execl() never returns. My understanding is that the hang happens when the child code returns (or raises) in the forked child process, because the semaphore state is cloned by the fork. There is no logging here. I just added some print() calls. (See the script I posted.) |
OK, but that's not the problem I see. The parent calls fork(), creating a child that then runs the atfork() handlers *before* returning from the os.fork() call (which is the expected behaviour). At least one of those atfork() handlers is the one registered by the logging module (Lib/logging/__init__.py), and that one attempts to acquire a lock. So in my case the child never gets to the point of calling exec(), which is why I didn't notice the bug in the script. |
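The at-fork machinery involved here is observable from pure Python via os.register_at_fork() (available since Python 3.7); this small POSIX-only sketch shows the hook ordering that the logging module relies on:

```python
import os

events = []

# logging installs its lock-reacquiring hooks through this same API:
os.register_at_fork(before=lambda: events.append("before"),
                    after_in_parent=lambda: events.append("parent"),
                    after_in_child=lambda: events.append("child"))

pid = os.fork()
if pid == 0:
    # the child ran the "before" and "child" hooks around its fork()
    os._exit(0)
os.waitpid(pid, 0)
print(events)  # the parent process sees ['before', 'parent']
```

If the "before" hook leaves a lock in an inconsistent state, the "child" hook that tries to take it is exactly where the hang described above occurs.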
And are you sure that your "toy" example hangs in the at-fork handler? In that case I can't help you further; I don't have the resources or time to try out QNX. Your best bet is indeed to change distutils.spawn to not use fork+exec -- do you think you can come up with a PR for that? |
As sure as I can be given my limited experience debugging Python... Luckily I do know my way around the QNX kernel ;-) The stack trace for the child shows it stuck on a semaphore with a count value of 0. A print in the logging module shows that the child gets to the _acquireLock() call on line 245 of logging/__init__.py, but a print after that line is never displayed. I am basing my analysis on the combination of the two observations. I'll try to come up with a change to distutils.spawn, as well as a change to subprocess to use posix_spawn(). Stack trace: |
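For reference, os.posix_spawn() (added in Python 3.8) is the primitive being proposed here; a minimal POSIX-only usage sketch:

```python
import os
import sys

# posix_spawn() lets the C library create the child without an
# intermediate fork() visible to Python, sidestepping the at-fork
# handlers (and the inherited-lock problem) entirely:
pid = os.posix_spawn(sys.executable,
                     [sys.executable, "-c", "pass"],
                     os.environ)
_, status = os.waitpid(pid, 0)
print("child exit code:", os.waitstatus_to_exitcode(status))
```

os.waitstatus_to_exitcode() requires Python 3.9; on older versions, os.WEXITSTATUS(status) gives the same value for a normal exit.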
I may be missing something, but subprocess depends on _posixsubprocess, which doesn't get built until setup.py is run, which results in a circular dependency. |
setup.py doesn't use multiprocessing. multiprocessing is super complex. Would it be possible to write a reproducer which doesn't use multiprocessing?
Can you try to get the Python traceback? Try py-bt command in gdb, or attempt to use faulthandler.dump_traceback_later(). |
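The faulthandler approach suggested here can be armed as a watchdog around the suspect section; a minimal sketch:

```python
import faulthandler
import time

# Arm a watchdog: if the process is still running 5 seconds from now,
# every thread's traceback is dumped to stderr (exit=False keeps the
# process alive afterwards so you can inspect it further).
faulthandler.dump_traceback_later(5, exit=False)

time.sleep(0.1)  # stand-in for the code that might hang

faulthandler.cancel_dump_traceback_later()  # disarm once it completes
print("section completed without a dump")
```

Unlike gdb's py-bt, this needs no debugger attached, which is useful on platforms like QNX where debugging tooling may be limited.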
"setup.py doesn't use multiprocessing. multiprocessing is super complex. Would it be possible to write a reproducer which doesn't use multiprocessing?" But the problem is with the handling of fork() by Python modules, and specifically with multi-threaded fork()s. Without multiple processes there is no issue. And setup.py does use multiple processes via a thread pool, where each thread calls fork(), if it detects that "make" was invoked with the parallel (-j) option. |
That's exactly what the C code in the C subprocess module for POSIX does, though: https://github.com/python/cpython/blob/master/Modules/_posixsubprocess.c#L828-L854 That's why I hope that using subprocess instead of a fork()+exec() sequence naively coded in pure Python would solve the issue.
Perhaps "red herring" was the wrong expression. What I mean is that it's no surprise that a fork()+exec() sequence written in pure Python would be unsafe in multi-thread settings. The solution, though, is not to avoid threads or try to work around the issue in ThreadPoolExecutor, but rather to avoid doing fork()+exec() in pure Python. |
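The direction suggested here - let subprocess's C helper do the fork+exec - looks like this from the caller's side (a generic sketch, not the eventual distutils patch):

```python
import subprocess
import sys

# subprocess performs the fork+exec inside _posixsubprocess: C code
# that stays async-signal-safe between fork() and exec(), so it is
# safe to call even from a ThreadPoolExecutor worker thread.
result = subprocess.run([sys.executable, "-c", "print('built')"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # built
```

check=True raises subprocess.CalledProcessError on a non-zero exit, which maps naturally onto the error handling a build tool needs.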
Oh ok, now I get it :-) setup.py uses distutils to build extensions, and distutils.command.build_ext uses concurrent.futures for parallelism:

```python
if self.parallel:
    self._build_extensions_parallel()
else:
    self._build_extensions_serial()
```

with:

```python
def _build_extensions_parallel(self):
    workers = self.parallel
    if self.parallel is True:
        workers = os.cpu_count()  # may return None
    try:
        from concurrent.futures import ThreadPoolExecutor
    except ImportError:
        workers = None

    if workers is None:
        self._build_extensions_serial()
        return

    with ThreadPoolExecutor(max_workers=workers) as executor:
        futures = [executor.submit(self.build_extension, ext)
                   for ext in self.extensions]
        for ext, fut in zip(self.extensions, futures):
            with self._filter_build_errors(ext):
                fut.result()
```

The problem is not concurrent.futures but the job submitted to the executor: build_extension() uses distutils.spawn(), which is unsafe. distutils.spawn() is the root issue if I understood correctly. It must be rewritten with subprocess. |
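A subprocess-based spawn() along the lines proposed here might look as follows (a hypothetical sketch only: SpawnError stands in for distutils' DistutilsExecError, and the actual rewrite is PR 18743, not this code):

```python
import subprocess
import sys

class SpawnError(RuntimeError):
    """Stand-in for distutils.errors.DistutilsExecError."""

def spawn(cmd, dry_run=False):
    # Run an external build command via subprocess instead of a
    # hand-rolled os.fork() + os.exec*() sequence.
    if dry_run:
        print(" ".join(cmd))
        return
    try:
        rc = subprocess.call(cmd)
    except OSError as exc:
        raise SpawnError(f"command {cmd!r} failed: {exc}") from exc
    if rc != 0:
        raise SpawnError(f"command {cmd!r} failed with exit code {rc}")

spawn([sys.executable, "-c", "pass"])  # completes without raising
```

Because subprocess.call() does the fork+exec in C, this version can safely be invoked from the ThreadPoolExecutor workers shown above.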
I wrote PR 18743 to reimplement the distutils.spawn.spawn() function with the subprocess module. It also changes setup.py to use a basic implementation of the subprocess module if the subprocess module is not available, i.e. before the required C extension modules are built. |
Attached fork_mt.py example uses exec_fn() function... which is not defined. Is it on purpose? |
"Attached fork_mt.py example uses exec_fn() function... which is not defined. Is it on purpose?" That was my mistake - a copy/paste of existing code from distutils. Since the example hangs before the script gets to exec_fn(), I didn't notice the problem. Just change it to os.execl. |
The PR for 3.9 breaks the AIX build (since https://buildbot.python.org/all/#/builders/119/builds/384), reporting the following:

```
Traceback (most recent call last):
  File "/data/prj/python/python3-3.9/../git/python3-3.9/setup.py", line 69, in <module>
    from distutils.command.build_ext import build_ext
  File "/data/prj/python/git/python3-3.9/Lib/distutils/command/build_ext.py", line 33, in <module>
    class build_ext(Command):
  File "/data/prj/python/git/python3-3.9/Lib/distutils/command/build_ext.py", line 63, in build_ext
    "(default: %s)" % get_platform()),
  File "/data/prj/python/git/python3-3.9/Lib/distutils/util.py", line 107, in get_platform
    return get_host_platform()
  File "/data/prj/python/git/python3-3.9/Lib/distutils/util.py", line 83, in get_host_platform
    return aix_platform()
  File "/data/prj/python/git/python3-3.9/Lib/_aix_support.py", line 83, in aix_platform
    vrmf, bd = _aix_bosmp64()
  File "/data/prj/python/git/python3-3.9/Lib/_aix_support.py", line 53, in _aix_bosmp64
    out = subprocess.check_output(["/usr/bin/lslpp", "-Lqc", "bos.mp64"])
AttributeError: module 'subprocess' has no attribute 'check_output'
```

Will post a PR for this. The cause: |
I wrote PR 18872 to implement it for setup.py. Michael Felt: Can you please test it on AIX? |
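The bootstrap fix needs a check_output() that works before the _posixsubprocess extension exists; one way to approximate that in pure Python (a hypothetical sketch, not the code from PR 18872) is os.popen(), which is tolerable here only because setup.py is still single-threaded at this point:

```python
import os

def check_output(cmd):
    # Minimal bootstrap stand-in for subprocess.check_output():
    # os.popen() shells out without needing the _posixsubprocess
    # extension module.  Safe only in single-threaded code, and only
    # for arguments that need no shell quoting.
    with os.popen(" ".join(cmd)) as pipe:
        out = pipe.read()
    return out.encode()

print(check_output(["echo", "bos.mp64"]))  # b'bos.mp64\n'
```

This is exactly the kind of command _aix_support.py issues ("/usr/bin/lslpp -Lqc bos.mp64"), so a shim of this shape is enough to get past the bootstrap phase.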
Almost. The command is run, but not enough of the bootstrap is finished, it seems:

```
File "/data/prj/python/git/python3-3.9/Lib/_aix_support.py", line 54, in _aix_bosmp64
```

Just skipping the subprocess bit (which is what the original does, waiting for better moments) is sufficient for the bootstrap phase. |
Oops, it should now be fixed by my second commit. Please retry PR 18872. |
It gets further. The build finishes, but socket and asyncio are missing... At least a build will proceed, and I can look into this new, perhaps unrelated, issue re: socket later.

```
*** WARNING: renaming "_asyncio" since importing it failed: No module named '_socket'
```
|
Do you see any error when the _socket module is built? Please open a new issue with full logs. |
re: _socket - I'll start researching now, but it may still be related to this. a) if it looks related to this issue I'll add a report here |
OK. BEFORE I open a new issue, I am going to guess that something is wrong with finding _socket.so - because it exists, but "something" is not finding it.

```
Following modules built successfully but were removed because they could not be imported:
```

NOTE: The module was built successfully, but not found.

```
root@x065:[/data/prj/python/python3-3.9]find . -name _socket.so
```
|
I am confused. The bot complains nearly immediately about missing _socket:

```
Following modules built successfully but were removed because they could not be imported:
_asyncio
running build_scripts
copying and adjusting /home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Tools/scripts/pydoc3 -> build/scripts-3.9
copying and adjusting /home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Tools/scripts/idle3 -> build/scripts-3.9
copying and adjusting /home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Tools/scripts/2to3 -> build/scripts-3.9
changing mode of build/scripts-3.9/pydoc3 from 600 to 755
changing mode of build/scripts-3.9/idle3 from 600 to 755
changing mode of build/scripts-3.9/2to3 from 600 to 755
renaming build/scripts-3.9/pydoc3 to build/scripts-3.9/pydoc3.9
renaming build/scripts-3.9/idle3 to build/scripts-3.9/idle3.9
renaming build/scripts-3.9/2to3 to build/scripts-3.9/2to3-3.9
./python -E -c 'import sys ; from sysconfig import get_platform ; print("%s-%d.%d" % (get_platform(), *sys.version_info[:2]))' >platform
./python ./Tools/scripts/run_tests.py -j 1 -u all -W --slowest --fail-env-changed --timeout=900 -j2 --junit-xml test-results.xml
Traceback (most recent call last):
  File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/./Tools/scripts/run_tests.py", line 12, in <module>
    import test.support
  File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/test/support/__init__.py", line 6, in <module>
    import asyncio.events
  File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/asyncio/__init__.py", line 8, in <module>
    from .base_events import *
  File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/asyncio/base_events.py", line 23, in <module>
    import socket
  File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/socket.py", line 51, in <module>
    import _socket
ModuleNotFoundError: No module named '_socket'
```

while if I use my (now closed) pr18847 I do not get the results immediately - it starts the whole test series, and then fails - but not because _socket does not exist; instead:

```
0:00:48 [ 53/420/2] test_socket failed (30.4 sec) -- running: test_multiprocessing_fork (48.3 sec)
/data/prj/python/git/python3-3.9/Lib/test/test_socket.py:2557: RuntimeWarning: received malformed or improperly-truncated ancillary data
  result = sock.recvmsg(bufsize, *args)
/data/prj/python/git/python3-3.9/Lib/test/test_socket.py:2648: RuntimeWarning: received malformed or improperly-truncated ancillary data
  result = sock.recvmsg_into([buf], *args)
test test_socket failed -- Traceback (most recent call last):
  File "/data/prj/python/git/python3-3.9/Lib/test/test_socket.py", line 1481, in testGetaddrinfo
    self.assertEqual(socktype, socket.SOCK_STREAM)
AssertionError: 0 != <SocketKind.SOCK_STREAM: 1>
```

In short, stumped... |
Did the build work on AIX before commit 1ec63b6, which changed distutils to use subprocess? Try commit dffe4c0 (the parent of that change). |
This was already confirmed by both bots... The last bot build that worked (on both AIX bots) is dffe4c0. The first build that failed (on both) is 1ec63b6. See (pass) https://buildbot.python.org/all/#/builders/119/builds/383 and https://buildbot.python.org/all/#/builders/227/builds/369; (fail) https://buildbot.python.org/all/#/builders/119/builds/384. I'll check out dffe4c0 to verify it still works successfully. |
After a checkout the build finishes (successfully) with:

```
The following modules found by detect_modules() in setup.py, have been
running build_scripts
root@x065:[/data/prj/python/python3-3.9]./python
```
|
I created bpo-39936 "Python fails to build _asyncio on module on AIX" to debug the AIX build failure. |
I've flagged bpo-41207 as a regression stemming from this effort. |
The linked PRs and two follow-up issues have all been closed. Among them: "A subprocess.check_output during setup.py build phase" #18847.