Search before asking
I have searched the issues and found no similar issues.
What would you like to be improved?
In our production cluster, we found that although the amount of shuffle data did not change much, the volume of data written to HDFS kept increasing every day. The main reason is that we currently determine whether the local disk is writable based on the disk size obtained from metadata, which does not reliably reflect the disk's actual free space.
Related to #1678, #1247
How should we improve?
Use the disk size obtained from the periodic check to determine whether the disk can be written to.
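A minimal sketch of this idea, assuming hypothetical names (this is not the actual Uniffle implementation): a background task samples the filesystem's real usable space on a schedule and caches it, and the writability decision reads the cached sample instead of internally tracked metadata.

```java
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: periodically sample the real free space of a disk
// and use the cached sample, rather than internally tracked metadata,
// to decide whether the disk is writable.
public class DiskWritableChecker {
  private final File baseDir;
  private final long minReservedBytes;
  private final AtomicLong cachedUsableBytes = new AtomicLong(Long.MAX_VALUE);
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public DiskWritableChecker(File baseDir, long minReservedBytes, long checkIntervalSeconds) {
    this.baseDir = baseDir;
    this.minReservedBytes = minReservedBytes;
    scheduler.scheduleAtFixedRate(this::refresh, 0, checkIntervalSeconds, TimeUnit.SECONDS);
  }

  // Ask the filesystem directly; this reflects writes from all processes,
  // not just the bytes this service believes it has written.
  private void refresh() {
    cachedUsableBytes.set(baseDir.getUsableSpace());
  }

  // Writable only if the last periodic sample left enough headroom.
  public boolean canWrite() {
    return cachedUsableBytes.get() > minReservedBytes;
  }
}
```

The design point is that the periodic sample is ground truth from the OS, so accounting errors cannot accumulate: at worst the answer is stale by one check interval.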
Are you willing to submit a PR?
Yes, I am willing to submit a PR!
On May 9, 2024, xianjingfeng changed the title from "[Improvement] add a switch to skip determine whether it is writable by using the disk size in metaData" to "[Improvement] use the disk size obtained from periodic check to determine whether the disk can be written".
… determine whether is writable (#1685)
### What changes were proposed in this pull request?
Use the disk size obtained from the periodic check to determine whether the disk can be written to.
### Why are the changes needed?
The disk size obtained from metadata is unreliable.
Fix: #1684
### Does this PR introduce any user-facing change?
No.
### How was this patch tested?
Existing UTs
… determine whether is writable (#1685)
(cherry picked from commit 40bd14b)
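For context, a hypothetical illustration (not Uniffle code; the path, capacities, and variable names are assumptions) of the failure mode described above: accounting that tracks only the server's own writes drifts from the real disk state, and if the tracked usage drifts upward relative to reality, the disk looks full and shuffle data falls back to HDFS even though local space is available.

```java
import java.io.File;

// Hypothetical illustration of metadata drift. Nothing here is taken
// from the Uniffle codebase.
public class MetadataDriftDemo {
  public static void main(String[] args) {
    long diskCapacityBytes = 500L * 1024 * 1024 * 1024; // assumed 500 GiB disk
    long reserveBytes = 10L * 1024 * 1024 * 1024;       // assumed 10 GiB headroom

    // Internally tracked usage: counts only the shuffle data this server
    // recorded. Stale or leaked accounting can inflate it over time.
    long trackedUsedBytes = 495L * 1024 * 1024 * 1024;

    // Real usable space reported by the filesystem ("/data1" is an assumed
    // mount point). This also reflects deletions and other processes the
    // tracker never sees.
    long actualUsableBytes = new File("/data1").getUsableSpace();

    boolean metadataSaysWritable =
        (diskCapacityBytes - trackedUsedBytes) > reserveBytes;
    boolean diskActuallyWritable = actualUsableBytes > reserveBytes;

    // When these disagree, a metadata-based check can declare the disk
    // unwritable and push shuffle data to HDFS, matching the ever-growing
    // HDFS write volume reported in this issue.
    System.out.println("metadata says writable: " + metadataSaysWritable);
    System.out.println("disk actually writable: " + diskActuallyWritable);
  }
}
```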