Search before asking
I have searched in the issues and found no similar issues.
What would you like to be improved?
Currently, in org.apache.spark.shuffle.writer.WriteBufferManager#insertIntoBuffer, if the memoryUsed of a WriterBuffer is larger than bufferSize, the WriterBuffer is flushed. However, if the memoryUsed of the first record is already larger than bufferSize, the WriterBuffer is not flushed.
How should we improve?
Flush the buffer if the memoryUsed of the first record of the WriterBuffer is larger than bufferSize.
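For illustration, here is a minimal sketch of the flush decision, assuming a simplified buffer-per-partition model. The names below (`BufferFlushSketch`, `SketchWriterBuffer`, `flushSingleBuffer`, `hasBufferedData`) are hypothetical and do not mirror the real `WriteBufferManager` API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of buffer-per-partition bookkeeping; it only
// illustrates the flush condition discussed in this issue, not the real code.
public class BufferFlushSketch {
  static class SketchWriterBuffer {
    long memoryUsed = 0;
    void add(byte[] record) { memoryUsed += record.length; }
    long getMemoryUsed() { return memoryUsed; }
  }

  private final Map<Integer, SketchWriterBuffer> buffers = new HashMap<>();
  private final long bufferSize;

  public BufferFlushSketch(long bufferSize) { this.bufferSize = bufferSize; }

  public void insertIntoBuffer(int partitionId, byte[] record) {
    SketchWriterBuffer buffer = buffers.get(partitionId);
    if (buffer != null) {
      buffer.add(record);
      // Existing behavior: the threshold is only checked when the buffer
      // already existed, i.e. from the second record of a partition onwards.
      if (buffer.getMemoryUsed() > bufferSize) {
        flushSingleBuffer(partitionId);
      }
    } else {
      buffer = new SketchWriterBuffer();
      buffer.add(record);
      buffers.put(partitionId, buffer);
      // Proposed behavior: also check the threshold here, so a first record
      // whose memoryUsed alone exceeds bufferSize is flushed immediately too.
      if (buffer.getMemoryUsed() > bufferSize) {
        flushSingleBuffer(partitionId);
      }
    }
  }

  public boolean hasBufferedData(int partitionId) {
    return buffers.containsKey(partitionId);
  }

  private void flushSingleBuffer(int partitionId) {
    // In the real code this would send the buffered data to the shuffle server
    // and release the memory; here we just reset the bookkeeping.
    buffers.remove(partitionId);
  }
}
```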
Are you willing to submit PR?
Yes I am willing to submit a PR!
…first record of `WriterBuffer` larger than bufferSize (#1485)
### What changes were proposed in this pull request?
Flush the buffer if the memoryUsed of the first record of `WriterBuffer` is larger than bufferSize.
### Why are the changes needed?
Makes the flush behavior more accurate. Fixes #1497.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
UT.
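For illustration, a unit test for this behavior might look roughly like the sketch below; it drives the simplified, hypothetical `BufferFlushSketch` class from the earlier sketch rather than the real `WriteBufferManager` and its actual unit tests:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

// Illustrative test only: it exercises the hypothetical BufferFlushSketch above,
// not the real WriteBufferManager.
public class BufferFlushSketchTest {
  @Test
  public void firstOversizedRecordIsFlushedImmediately() {
    // bufferSize = 16 bytes; the very first record is 32 bytes, already oversized.
    BufferFlushSketch manager = new BufferFlushSketch(16);
    manager.insertIntoBuffer(0, new byte[32]);
    // With the proposed change the buffer is flushed (and removed) right away,
    // instead of lingering until a second record arrives.
    assertFalse(manager.hasBufferedData(0));
  }
}
```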