### Description

### Version
v14.17.0

### Platform
Linux zip-validator 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

### Subsystem
fs
### What steps will reproduce the bug?
`fs.rmdirSync()` leaves files on the file system in a "deleted" state until script execution has completed. This is troublesome for very long-running scripts and results in "out of space" disk errors in our application.

When files are "deleted," they still consume disk space, but they are not visible through either `ls` or `du`.

Here, `df -h` shows the errantly consumed disk space (the files in question are in `/mnt/zippera`):
```console
:/mnt/zippera$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G 1020K  1.6G   1% /run
/dev/vda1        97G   36G   62G  37% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/vda15      105M  9.2M   96M   9% /boot/efi
/dev/loop1       71M   71M     0 100% /snap/lxd/16922
/dev/loop2       31M   31M     0 100% /snap/snapd/9607
/dev/loop3       33M   33M     0 100% /snap/snapd/12704
/dev/loop4       56M   56M     0 100% /snap/core18/2074
/dev/loop5       71M   71M     0 100% /snap/lxd/21029
/dev/sda        992G  537G  406G  57% /mnt/zippera
/dev/loop6       56M   56M     0 100% /snap/core18/2128
tmpfs           1.6G     0  1.6G   0% /run/user/1000
```
However, `du` shows far less space consumed on that partition (75G is actually in use, not the 537G reported above):
```console
:/mnt/zippera$ du -ah /mnt/zippera/ | sort -h
... (lines omitted)
9.8G  /mnt/zippera/work/3c4dbda7/Snowfall_long_play_motions_U960.zip
 11G  /mnt/zippera/work/3c4dbda7/Snowfall_motions_4k.zip
 25G  /mnt/zippera/work/3c4dbda7/Snowfall_long_play_motions_4k.zip
 75G  /mnt/zippera/
 75G  /mnt/zippera/work
 75G  /mnt/zippera/work/3c4dbda7
```
And `lsof` shows what is consuming the space:
```console
:/mnt/zippera$ sudo lsof | grep delete | grep zipper
node 191964 storyloop 19r REG 8,0 18115 13434882 /mnt/zippera/work/b59a75bb/Color_Smoke_after_effects_title_template_4k.zip (deleted)
node 191964 storyloop 20r REG 8,0 18115 13434883 /mnt/zippera/work/b59a75bb/Color_Smoke_after_effects_title_template_HD.zip (deleted)
node 191964 storyloop 22r REG 8,0 18115 13434884 /mnt/zippera/work/b59a75bb/Color_Smoke_after_effects_title_template_ProRes.zip (deleted)
... (lines omitted)
node 191964 191974 node storyloop 154r REG 8,0 2073195178 21561355 /mnt/zippera/work/893dc3e1/Christmas_Trivia_Countdowns_trivia_countdowns_HD.zip (deleted)
node 191964 191974 node storyloop 155r REG 8,0 13206057591 21561356 /mnt/zippera/work/893dc3e1/Christmas_Trivia_Countdowns_4k.zip (deleted)
node 191964 191974 node storyloop 156r REG 8,0 4808803233 21561357 /mnt/zippera/work/893dc3e1/Christmas_Trivia_Countdowns_HD.zip (deleted)
```
Best-case scenario: Available disk space is erroneously reported during script execution.
Actual impact: Long-running scripts that must create and delete large files will deplete all disk space, despite the developer's best efforts to keep the file system trimmed during script execution.
### How often does it reproduce? Is there a required condition?
Reproducible every time.

Requires a long-running script that creates and deletes files whose combined size exceeds the available disk space (if all the files existed on disk at once, they would consume all available space).
### What is the expected behavior?
I would expect `fs.rmdirSync()` not to return until the freed disk space is available for reuse by the operating system.
### What do you see instead?
`df` reveals continually depleting disk space, even though the files expected to be deleted are no longer present or accounted for by `ls` or `du`.
We are eventually met with this exception:

```
[Error: ENOSPC: no space left on device, write] {
  errno: -28,
  code: 'ENOSPC',
  syscall: 'write'
}
```
### Additional information
We would not have noticed this if we didn't need to run a script over several days that downloads thousands of files to verify their contents. According to `ls` and `du`, we've done everything correctly (the file system is properly maintained during script execution), but `df` and `lsof` reveal that `fs.rmdirSync()` fails to complete the final step of making the space available for reuse.