Issue with PyPy on Travis #1730
You can try …
@pganssle it's very poorly advertised, but you can ask Travis support to enable debug mode on the repo: https://docs.travis-ci.com/user/running-build-in-debug-mode/
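Per those docs, once support has enabled it, a debug build is restarted through the Travis API. A sketch using the third-party `requests` package; `JOB_ID` and `TOKEN` are placeholders:

```python
# Sketch: restart a Travis job in debug mode via the v3 API.
# JOB_ID and TOKEN are placeholders; the host may be
# api.travis-ci.com instead, depending on where the repo builds.
import requests

JOB_ID = "123456789"
TOKEN = "your-travis-api-token"

resp = requests.post(
    f"https://api.travis-ci.org/job/{JOB_ID}/debug",
    headers={
        "Travis-API-Version": "3",
        "Authorization": f"token {TOKEN}",
    },
    json={"quiet": True},
)
resp.raise_for_status()
```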
With @webknjaz's suggestion fixing the CI, I have downgraded this from "blocker" to "minor". I'm leaving it open a bit longer because this seems to be a real issue that might bite us later, and apparently has other manifestations on …
@webknjaz To be honest I don't know, but …
@pganssle can you show me the Travis job? I didn't see any …
@webknjaz It's not a Travis job. @gaborbernat and I both ran …
Oh, I thought you were implying that it's possible on Travis :)
```
$ pypy3 --version
Python 3.6.1 (dab365a465140aa79a5f3ba4db784c4af4d5c195, Feb 18 2019, 10:53:27)
[PyPy 7.0.0-alpha0 with GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)]
```
Doing a bit more debugging for this. With PyPy 3.5.3-7.0.0, I ran the test suite, and while it was stopped at a breakpoint, in another window I counted the running PyPy processes:
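One way to get that kind of count (a sketch using the third-party `psutil` package, not necessarily the exact command that produced the numbers below):

```python
# Count live PyPy processes. psutil is a third-party package and this
# is an illustrative sketch, not the exact command from this comment.
import psutil

count = sum(
    1
    for p in psutil.process_iter(attrs=["name"])
    if "pypy" in (p.info["name"] or "").lower()
)
print(f"{count} pypy processes running")
```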
Just as it was running I was seeing numbers as high as 469, and while it was paused in the debugger, there were 351 processes open. I think this may be causing the various resource exhaustion problems. With CPython, I'm only seeing around 30-40 processes opened. I'm not really sure why so many processes are being opened at all here. PEP 517 does require that the backend be run in a fresh process, but I got the impression that in each of these builds we actually wait for the execution to finish before moving on to the next test, so why are we seeing anything more than 2-3 processes at all? Can anyone who knows more about concurrency / multiprocessing weigh in? The relevant code is here.
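For context, `ProcessPoolExecutor` defaults `max_workers` to `os.cpu_count()`. This standalone sketch (unrelated to the project's code) just shows how many distinct worker processes a default pool ends up using:

```python
# Sketch: count the distinct worker processes a default
# ProcessPoolExecutor uses (max_workers defaults to os.cpu_count()).
import os
from concurrent.futures import ProcessPoolExecutor


def worker_pid(_):
    return os.getpid()


if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        pids = set(pool.map(worker_pid, range(64)))
    print(f"cpu_count={os.cpu_count()}, distinct worker pids={len(pids)}")
```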
As a mitigation for pypa#1730, this commit limits the number of workers in the ProcessPoolExecutor to 1 (default is the number of CPUs). On PyPy, having a higher number of available workers dramatically increases the number of concurrent processes, leading to some resource exhaustion issues. This does not address the root issue, but should improve the situation until the root issue is addressed.
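A minimal sketch of the shape of that mitigation (names are illustrative; the actual change is in the commit referenced above):

```python
from concurrent.futures import ProcessPoolExecutor

# Before: max_workers defaults to os.cpu_count(); on PyPy this led to
# far more concurrent subprocesses than expected.
# executor = ProcessPoolExecutor()

# Mitigation: cap the pool at a single worker so builds run one at a time.
executor = ProcessPoolExecutor(max_workers=1)
```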
@pganssle FTR now you can use just …
All jobs are now failing on `pypy`, see this job. Locally I cannot replicate this with `pypy2.7-X`, but the same thing does fail with "Too many open files" in `pypy3.6-7.1.0`. It's very hard to debug this. @gaborbernat replicated it on a MacBook; he similarly can't replicate the 2.7 failure, and the 3.6 failure gives a different error message than the one I have.

There may be two problems here, possibly with similar causes. I think it's possible that Travis has scaled down the amount of memory available and that's biting us in the `pypy2.7` job? Hard to tell.

If we want a short-term mitigation, we can maybe try:

- Pinning the `pypy` version in the job.
- Moving the `pypy` job to Azure Pipelines (though that would require getting Add Azure Pipelines #1721 working).

This is a major blocking issue IMO.
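For anyone chasing the "Too many open files" (EMFILE) failure locally, a quick way to check the file-descriptor limits the process sees, using the standard-library `resource` module (Unix only):

```python
# Sketch: print the soft and hard open-file limits for this process.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft}, hard={hard}")
```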