Workers stuck, increased memory usage while processing large CSV from S3. #1467
It's odd that the worker process would be observing 7GB of memory use in this situation. It is evaluating memory use with the following code:
Do you have any thoughts about why that code would return 7GB in your situation? You can disable this functionality if you want by setting the following in your config.yaml:
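Roughly, the worker's memory monitor samples its own resident set size via psutil and compares it against the configured limit. A minimal sketch (not the exact distributed source):

```python
import psutil

proc = psutil.Process()  # handle on the current (worker) process

def current_memory_bytes():
    # Resident set size as reported by the OS; this is the figure that shows up
    # in messages like "Process memory: 7.16 GB -- Worker memory limit: 8.02 GB"
    return proc.memory_info().rss

print("%.2f GB in use" % (current_memory_bytes() / 1e9))
```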
No clear idea. I will look into this further; is there anything in particular that I could check? PS: for debugging purposes I'm running the scheduler and a single worker locally on my development host.
You might consider running your computation locally and tracking memory use, either with the single-machine scheduler diagnostics or, if you can run your functions independently, with a memory line-profiler.
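A sketch of what the single-machine diagnostics route could look like (the file name, output path and blocksize are placeholders):

```python
import dask.dataframe as dd
from dask.diagnostics import Profiler, ResourceProfiler

# Run against a local copy of the data with the default local scheduler.
df = dd.read_csv("data.csv", blocksize=64000000)

# Profiler records per-task timings; ResourceProfiler samples process CPU/memory.
with Profiler() as prof, ResourceProfiler(dt=0.25) as rprof:
    df.to_csv("out-*.csv")

print("peak memory (MB):", max(r.mem for r in rprof.results))
```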
Thanks, I will try to do that. Currently I'm running locally in the sense that all processes run on the same host. If I switch to the default local scheduler, the job does not block and memory consumption is also significantly lower. I tried to tweak the read_csv() parameters and figure out what might be wrong with our input data, but I haven't found anything yet.
I have tried guppy, memory-profiler and cProfile, but these didn't tell me much about the nature of the problem. We seem to have a single read_block task that hangs and consumes all the memory. Unfortunately, no success in running the dask diagnostics so far; I'm getting warnings:
Update: next, I generated test input data with the same file size but shorter records (no unicode, quotes, escaping, etc.) to check the possibility of funny data being the cause, but that doesn't seem to be the problem. I have also taken a look at the computation graph visualizations, but to me everything looks as expected (I'm attaching an example graph from a smaller dataset). Might this be caused by the huge size of the task graph? (For development I have only 1 worker running currently.)
A further observation: the problem seems to be related to blocksize.
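For context, a sketch of the kind of blocksize experiment this refers to (the bucket path and the sizes are placeholders):

```python
import dask.dataframe as dd

# blocksize is the number of bytes of CSV handed to each read-block task, so it
# directly controls the partition count and how much raw text a worker holds at once.
for blocksize in (16000000, 64000000, 256000000):  # bytes per partition
    df = dd.read_csv("s3://my-bucket/large.csv", blocksize=blocksize)
    print(blocksize, df.npartitions)
```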
Moreover, after the job is killed the worker does not release the memory, but running the job again does not increase the initial memory consumption; it still starts at around 4GB.
Do you see abnormal memory consumption if you try with a subset of the file (say, 1/10th the size, but with a similar structure)? Also, can you reproduce it if the file is stored locally?
If I store the file locally while still using the distributed scheduler, I do not observe this problem; memory consumption is very low. If I use smaller input data, the memory consumption is proportionally smaller, e.g. for 280MB (roughly 1/4) it would consume almost 2GB (roughly 1/4). In such cases the worker does not get stuck because it has enough memory, but the consumption is still unexpectedly high.
Hmm... so there would be something in the S3 layer consuming undue amounts of memory? Is it possible for you to give us access to that file (preferably the smaller version) and post a standalone script showing how to reproduce?
The code is in my initial comment on top (minus the imports). Unfortunately I cannot share the file, but I have already confirmed that this is not a problem with a specific data pattern; I have generated simple data with short records and it behaves the same. If I use the default local scheduler but consume data from S3, I can see that it launches multiple (8) processes and each one of them consumes almost 1GB of memory, so this might be related to S3 access.
Could you then share the generated data file?
I have used this small script to generate the data file:
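A hedged reconstruction of what such a generator could look like, based on the description (simple short records, no unicode, quotes or escaping; the column names and sizing are assumptions):

```python
import random

TARGET_BYTES = 1200000000  # roughly matches the 1.2GB input described in this issue

with open("generated.csv", "w") as f:
    f.write("id,value_a,value_b\n")
    written = 0
    i = 0
    while written < TARGET_BYTES:
        # Short numeric records only, to rule out "funny data" as the cause.
        row = "{0},{1},{2}\n".format(i, random.randint(0, 1000000), random.random())
        f.write(row)
        written += len(row)
        i += 1
```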
I have also removed bokeh from the installation, just to verify whether the dashboards may be causing problems, but the problem still persists.
Additional info for reproduction: I am running Python 2.7.13.
(sorry, wrong issue)
The problem does not seem to occur with the synchronous single-threaded scheduler.
There is also no problem in the case of the local multiprocessing (process pool) scheduler.
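For reference, a sketch of switching between the local schedulers being compared here, written against the current dask API (on the dask 0.15/0.16 releases from this thread the scheduler was selected with the get= keyword instead; the S3 path is a placeholder):

```python
import dask
import dask.dataframe as dd

df = dd.read_csv("s3://my-bucket/large.csv")

# Synchronous single-threaded scheduler: reported fine above.
with dask.config.set(scheduler="synchronous"):
    df.to_csv("out-sync-*.csv")

# Local multiprocessing (process pool) scheduler: also reported fine.
with dask.config.set(scheduler="processes"):
    df.to_csv("out-proc-*.csv")

# Only the distributed scheduler (connecting via distributed.Client) shows the
# runaway memory use described in this issue.
```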
I have run an IPython kernel on a worker in an attempt to inspect what's going on.
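A sketch of how a kernel can be attached to a worker through the distributed API (the scheduler address is a placeholder):

```python
from distributed import Client

client = Client("tcp://scheduler-host:8786")  # placeholder scheduler address

# Launch an IPython kernel inside each worker and return connection info;
# the connection file can then be used with `jupyter console --existing ...`.
info = client.start_ipython_workers()
print(info)
```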
What I got from the IPython kernel:
Attaching the worker.log
I'm having a similar issue, with Google Cloud Storage as the file service. Everything works fine when running a local scheduler, but when running distributed, the worker hangs during pandas_read_text.
I'm reading in a CSV that's ~380 MB in size. The file does seem to get downloaded, since the dashboard reports bytes stored equal to the file size, but it hangs when trying to parse the file. Here's the stderr debug output for the worker:
distributed.worker - DEBUG - future state: read-block-0-612c3bae53a58b03ed1fd3278fe778fb - RUNNING
distributed.worker - DEBUG - Heartbeat: tls://{workerhost}:{port}
distributed.core - WARNING - Event loop was unresponsive for 82.71s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
distributed.worker - DEBUG - Heartbeat: tls://{workerhost}:{port}
distributed.worker - DEBUG - future state: read-block-0-612c3bae53a58b03ed1fd3278fe778fb - FINISHED
distributed.worker - DEBUG - Send compute response to scheduler: read-block-0-612c3bae53a58b03ed1fd3278fe778fb, {'op': 'task-finished', 'status': 'OK', 'nbytes': 383309713, 'type': <class 'bytes'>, 'start': 1516905426.090314, 'stop': 1516905543.1502976, 'thread': 139770478372608, 'key': 'read-block-0-612c3bae53a58b03ed1fd3278fe778fb'}
distributed.worker - DEBUG - Heartbeat: tls://{workerhost}:{port}
distributed.worker - DEBUG - Execute key: pandas_read_text-767ee6f2c1ce2c75f9125adfb1c32975 worker: tls://{workerhost}:{port}
I'm not sure it's a memory issue on my end, since the memory usage reported by htop for the worker is <1% of total system RAM, and I'm running with --memory-limit=0 (no difference when setting it to 'auto' or any other value). The worker CPU usage is consistently at 100% though, despite not loading the file even after an hour. For reference, I'm able to load and compute on my laptop locally in approximately one minute.
You might consider looking at the call stack of the worker or task in question to see what it's churning on. See
http://distributed.readthedocs.io/en/latest/api.html#distributed.client.Client.call_stack
This is also available through the web API on the Info tab.
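A sketch of what that looks like from the client side (the scheduler address is a placeholder; the key is the one from the log above):

```python
from distributed import Client

client = Client("tcp://scheduler-host:8786")  # placeholder scheduler address

# Call stacks of everything currently executing on the workers:
print(client.call_stack())

# Or just the task that appears stuck, by key:
print(client.call_stack(keys=["pandas_read_text-767ee6f2c1ce2c75f9125adfb1c32975"]))
```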
I'm able to get the call stack only during the read step. The calls I'm making look like:
I thought maybe it was an issue with the worker in general, but testing with other workloads didn't point to a general problem. I also tried loading a test dataframe filled with random numbers to see if it was something weird about the dataframe, but the same issue occurs. The scheduler and worker are running on an Ubuntu 14.04 machine; the client is running on macOS High Sierra. Both machines are running dask 0.16.1, distributed 1.20.2, and gcsfs 0.0.4 on Python 3.6.
This is a sign that this part of the pandas call doesn't release the GIL, which is why other parts of the system can't respond in a timely manner. Probably it's creating many string objects in your pandas dataframe.
Yes, that seemed to be the issue. This thread helped get it to work. Turning off the pandas option:
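The option in question, named explicitly in the follow-up comments, is pandas' chained-assignment check; turning it off looks like this:

```python
import pandas as pd

# Disable pandas' chained-assignment (SettingWithCopy) checks entirely.
pd.set_option("mode.chained_assignment", None)
```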
Just commenting: disabling the chained assignment pandas option made my ETL job go from running out of memory after 90 minutes to taking 17 minutes! I think we can close this issue since it's related to pandas (and thanks @jeffreyliu, a year and a half later, for your comment!)
Huh, that surprises me about chained_assignment. I'll look into the pandas side of things.
It seems like this performance issue was fixed on the pandas side in pandas-dev/pandas#27031 (pandas 0.25). At least, I can't reproduce the slowdown reported in pandas-dev/pandas#18743 (comment) anymore.
OK, closing.
I'm processing a dataframe stored as a (relatively) large CSV on S3.
I'm using the distributed scheduler with multiple worker processes (1 thread per worker process, --no-nanny).
Workers seem to be accumulating data and getting stuck; in some cases this also leads to failure of the whole job.
I came up with a minimal reproducing example as below (it only reads and writes the CSV).
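A hedged reconstruction of what is described (read a CSV from S3 and write it back out, nothing else; the scheduler address and bucket paths are placeholders):

```python
import dask.dataframe as dd
from distributed import Client

# Workers are started separately with 1 thread each and --no-nanny, as described above.
client = Client("tcp://scheduler-host:8786")

df = dd.read_csv("s3://my-bucket/input/large.csv")
df.to_csv("s3://my-bucket/output/part-*.csv")
```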
This would hang forever with progress at 0%.
In the worker log:
distributed.worker - WARNING - Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory? Process memory: 7.16 GB -- Worker memory limit: 8.02 GB
The file itself is only 1.2GB though.
Using distributed 1.19.2 and dask 0.15.4