Bulkload OOM when loading big dataset #5574
Here is the heap profile:
Might be related to #5361
@ashish-goswami v1.1.1 still OOMs, but it OOMs later than this version does: the reduce phase gets much further along before running out of memory.
Can you try increasing the number of shards (both map and reduce) and check again? It would be really helpful.
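For reference, bumping the shard counts on the bulk loader might look like the following. The input paths here are placeholders; `--map_shards` and `--reduce_shards` are the relevant bulk-loader flags, and the exact values to try depend on your dataset:

```shell
# Re-run the bulk load with more map and reduce shards.
# Paths and shard counts below are placeholders, not recommendations.
dgraph bulk -f data.rdf.gz -s schema.txt \
  --map_shards=4 \
  --reduce_shards=2
```

`--map_shards` should generally be greater than or equal to `--reduce_shards`.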
GitHub issues have been deprecated.
What version of Dgraph are you using?
Dgraph version : v20.03.3
Dgraph SHA-256 : 08424035910be6b6720570427948bab8352a0b5a6d59a0d20c3ec5ed29533121
Commit SHA-1 : fa3c191
Commit timestamp : 2020-06-02 16:47:25 -0700
Branch : HEAD
Go version : go1.14.1
Have you tried reproducing the issue with the latest release?
yes
What is the hardware spec (RAM, OS)?
128 GB RAM & 1.8 TB SSD
Linux version 3.10.0-1062.9.1.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Fri Dec 6 15:49:49 UTC 2019
Steps to reproduce the issue (command/config used to run Dgraph).
Bulkload big dataset
Expected behaviour and actual result.
The log is as follows:
It seems to be caused by loading all the predicate keys into memory during the reduce phase.