"no such file or directory" when copying large amounts of data #72
Thank you, @m90. Let me clarify that you think there are two possible causes of this problem: a. Copy failed because the src dir is too large. Tell me why you think of …
My line of thinking (without knowing too much about what the package does internally): for example here, Lines 142 to 166 in 9aae5f7, we could run into a situation where a file is returned by the directory listing but is deleted before it is actually copied. Or is there a flaw in that?
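To make the race concrete, here is a minimal sketch of the general "list the directory, then copy each entry" shape (an illustration only, not the actual code at 9aae5f7): the listing is a point-in-time snapshot, so an entry that is deleted before it is opened surfaces exactly as the reported "no such file or directory" error.

```go
// Illustrative only (not the code at 9aae5f7): the directory listing is a
// snapshot, so an entry deleted afterwards fails when it is finally opened.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	src := "/some/volatile/dir"     // hypothetical path, for illustration
	entries, err := os.ReadDir(src) // snapshot of src taken at this instant
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		// Anything deleted between the ReadDir above and this Open comes
		// back as "open ...: no such file or directory".
		f, err := os.Open(filepath.Join(src, e.Name()))
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Println("listed but already gone:", e.Name())
			continue
		}
		if err != nil {
			fmt.Println(err)
			continue
		}
		f.Close() // a real copy routine would copy the contents here
	}
}
```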
Fair enough, worth thinking about. The core issue is neither size nor time, imo; it is "should we lock what we want to copy until it's done?". Let me think about it to come up with the best interface for us.
This sums it up perfectly :) If it's possible to add such an option, that would definitely be of much help.
I don't think there is any point in trying to do that from this Go module. You will get the same race condition when trying to lock the file, because the file can get deleted after the directory is read. So the only way to do this is to lock the file system before reading the directory, and that can only be done either by "locking" the entire filesystem (e.g. a filesystem snapshot or an LVM snapshot), or by pausing the Docker container in the docker-volume-backup use case. I think it would be enough to simply ignore the "no such file or directory" errors for entries that were deleted after the directory was read.
If you don't stop or pause the container before copying, you will always risk that files are deleted while you copy.
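One way to express that "simply ignore" behaviour without any locking is sketched below, using only the standard library; this is a standalone illustration of the policy under discussion, not an existing option of this package. The assumption is that a file which vanished between the listing and the copy no longer needs to be backed up.

```go
// Sketch of a deletion-tolerant copy: entries that vanish between the
// directory listing and the per-file copy are skipped instead of failing
// the whole backup. Standalone illustration, not an API of otiai10/copy.
package main

import (
	"errors"
	"io"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

func tolerantCopy(src, dest string) error {
	return filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
		if errors.Is(err, fs.ErrNotExist) {
			return nil // entry disappeared after it was listed: nothing to copy
		}
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		target := filepath.Join(dest, rel)
		if d.IsDir() {
			return os.MkdirAll(target, 0o755)
		}
		in, err := os.Open(path)
		if errors.Is(err, fs.ErrNotExist) {
			return nil // same race, same answer: skip the vanished file
		}
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(target)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	// Hypothetical paths for illustration.
	if err := tolerantCopy("/some/volatile/dir", os.TempDir()+"/dest"); err != nil {
		log.Fatal(err)
	}
}
```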
Files, symlinks and directories may be deleted while or after the directory list is read. Add a test to simulate this so we can fix the desired behavior. ref otiai10#72
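In the spirit of that commit, a test could exercise Copy while another goroutine deletes source files; the sketch below is a guess at what such a test might look like (the package name, file counts, and the tolerated-error assertion are assumptions, not the contents of the actual PR).

```go
// Hedged sketch of a test that simulates files being deleted while the copy
// is running. This is not the test from the referenced commit; package name,
// file counts and the assertion are assumptions made for illustration.
package copy_test

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"testing"
	"time"

	cp "github.com/otiai10/copy"
)

func TestCopyWhileSourceIsBeingDeleted(t *testing.T) {
	src := t.TempDir()
	dest := filepath.Join(t.TempDir(), "dest")

	// Enough files that the copy overlaps with the deletions below.
	for i := 0; i < 500; i++ {
		name := filepath.Join(src, fmt.Sprintf("file%04d", i))
		if err := os.WriteFile(name, []byte("payload"), 0o644); err != nil {
			t.Fatal(err)
		}
	}

	done := make(chan struct{})
	go func() {
		defer close(done)
		for i := 0; i < 500; i += 2 {
			os.Remove(filepath.Join(src, fmt.Sprintf("file%04d", i)))
			time.Sleep(time.Millisecond)
		}
	}()

	err := cp.Copy(src, dest)
	<-done

	// Today the race can surface as "no such file or directory"; the open
	// question in this thread is whether that should be ignored instead.
	if err != nil && !errors.Is(err, fs.ErrNotExist) {
		t.Fatalf("unexpected error: %v", err)
	}
}
```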
Thank you, and I agree with your idea, @ncopa: locking is not what this package should provide.
I'm using this package in a tool for backing up Docker volumes: https://github.com/offen/docker-volume-backup

Users that do not want to stop their containers while taking a backup can opt in to copying their data to a temporary location before creating the tar archive, so that creating the archive does not fail in case data is being written to a file while it's being backed up. To perform this copy, package `copy` is used (thanks for making it public, much appreciated).

This seemed to work well in tests as well as in the real world, however recently an issue was raised where `copy` would fail with a "no such file or directory" error when backing up the data volume for a Prometheus container. The dataset that is being copied seems to be a. very large and b. pretty volatile, which has me thinking the file in question might actually have been deleted/moved before copy finds the resources to actually copy it. This is the downstream issue: offen/docker-volume-backup#49

Is this issue somehow known? Is there a way to fix it by configuring `copy` differently?

This is the part where I use `copy` in code and also where the above error is being returned:
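The snippet referenced here did not survive extraction. As a stand-in, a hypothetical minimal version of such a call site is sketched below (paths and names are invented, not the actual docker-volume-backup code): the volume contents are copied into a temporary directory before being archived, and the error in question would come back from copy.Copy.

```go
// Hypothetical stand-in for the call site; the real code lives in
// github.com/offen/docker-volume-backup. Paths and names are invented.
package main

import (
	"errors"
	"io/fs"
	"log"
	"os"

	cp "github.com/otiai10/copy"
)

func main() {
	tmp, err := os.MkdirTemp("", "backup-")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(tmp)

	// Copy the (possibly still being written to) volume into a temporary
	// location so that creating the tar archive afterwards sees a stable tree.
	if err := cp.Copy("/backup/prometheus-data", tmp); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			// The failure mode discussed in this issue: a file was listed
			// but deleted before it could be copied.
			log.Fatalf("source changed while copying: %v", err)
		}
		log.Fatalf("copy failed: %v", err)
	}
	// ... create the tar archive from tmp here ...
}
```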