@std/tar hangs indefinitely unlike @std/archive #6019
Comments
Might be useful to give an example like:

```ts
for await (using entry of entries) {
  console.log(entry.path, entry.header.size)
}
```
Maybe. That user error bug would then be somebody trying to consume the readable stream, not awaiting its consumption in the scope, and getting thrown a bad resource ID.
Is blocking the next entry until the previous readable is either discarded or consumed a limitation, or is it by design? If the latter, I feel like it is slightly inconvenient. In any case it's easy to miss in the docs; especially if you're the one reviewing someone else's code, you're probably going to miss this specificity.
It's a limitation of tar. Each file in a tar is appended one after the other. Tar isn't designed to be extracted in parallel, only sequentially. A parallel version could be made specifically for Deno, but that would require the file to be saved to disk uncompressed so it could seek around the file. But I don't know many people who would be writing non-compressed tar files to disk. Streams are also designed to work linearly, meaning one must work with the data from left to right. Streams are great for working linearly over large amounts of data while keeping a small memory footprint.
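For reference, the sequential pattern described above might look roughly like this with the new UntarStream. This is only a sketch: the import path, the `archive.tar` filename, and the `.txt` condition are assumptions, but the `entry.path` / `entry.header` / `entry.readable` shape matches the snippets in this thread.

```ts
import { UntarStream } from "@std/tar";

const entries = (await Deno.open("archive.tar")).readable
  .pipeThrough(new UntarStream());

// Work strictly left to right: each entry's readable is either fully
// consumed or cancelled before the loop advances to the next header.
for await (const entry of entries) {
  if (entry.path.endsWith(".txt")) {
    // Consume: stream this entry's bytes to disk.
    await entry.readable?.pipeTo((await Deno.create(entry.path)).writable);
  } else {
    // Discard: cancel the unread bytes so the next header can be parsed.
    await entry.readable?.cancel();
  }
}
```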
Going back to this idea of making entries disposable with `using` (a rough sketch of what that could look like follows after this comment): we could implement it to solve the above issue of hanging indefinitely when the entry's readable is never consumed. Some considerations to take into account though: if the user did this, forgetting to await the piping, the code would then throw a bad resource ID as the scope ends and the readable tries to continue. This may be more desirable behaviour than hanging indefinitely.

```ts
for await (using entry of readable) {
  console.log(entry.path, entry.header.size)
  entry?.readable.pipeTo((await Deno.create(entry.path)).writable)
}
```

Another consideration to take into account is whether the user is properly handling the entry's readable themselves:

```ts
for await (const entry of readable) {
  console.log(entry.path, entry.header.size)
  if (condition) await entry?.readable.pipeTo((await Deno.create(path)).writable)
  else await entry?.readable.cancel()
}
```

When using …
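For context, the disposal idea being floated above could look roughly like the sketch below. This is an illustration only, not the actual @std/tar implementation, and it assumes `await using` / `Symbol.asyncDispose` (a plain `using` declaration as written in the snippets above would instead need a synchronous `Symbol.dispose`).

```ts
// Hypothetical entry wrapper: when the enclosing scope ends, an
// unconsumed readable is cancelled so the untar stream can advance.
class DisposableTarEntry {
  constructor(
    readonly path: string,
    readonly readable?: ReadableStream<Uint8Array>,
  ) {}

  async [Symbol.asyncDispose](): Promise<void> {
    // Only discard if the caller never locked (i.e. started consuming)
    // the stream.
    if (this.readable && !this.readable.locked) {
      await this.readable.cancel();
    }
  }
}
```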
Oh yeah, I guess it should be …. With ….

Edit: nvm, ….
Thinking about it, in the example where the user forgets to await the piping, I think it may be more likely that the stream becomes corrupt, as some of it is deleted through ….
Describe the bug
Given the following valid archive:
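(The archive itself did not survive this page capture. For illustration only, a small valid tar could be produced with @std/tar's TarStream along these lines; the file names here are placeholders, not the reporter's archive.)

```ts
import { TarStream } from "@std/tar";

// Pack a single existing file into archive.tar.
await ReadableStream.from([
  {
    type: "file" as const,
    path: "hello.txt",
    size: (await Deno.stat("hello.txt")).size,
    readable: (await Deno.open("hello.txt")).readable,
  },
])
  .pipeThrough(new TarStream())
  .pipeTo((await Deno.create("archive.tar")).writable);
```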
Steps to Reproduce
Using the new std/tar results in the code getting stuck (or if it isn't stuck outright, it takes far longer than a few minutes):
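An assumed reproduction (the original snippet was not preserved in this capture), based on the discussion above: iterating the entries without consuming or cancelling each `entry.readable` is enough to make the loop stall.

```ts
import { UntarStream } from "@std/tar";

const entries = (await Deno.open("archive.tar")).readable
  .pipeThrough(new UntarStream());

for await (const entry of entries) {
  // Only the metadata is read; entry.readable is never consumed or
  // cancelled, so the next header can't be reached and the loop hangs.
  console.log(entry.path, entry.header.size);
}
```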
Expected behavior
Using the old std/archive/untar works without issues:
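A comparable sketch with the old Untar API (again assumed, not the reporter's exact code): each entry is a Reader, and unread data is skipped when the iterator advances, so nothing hangs.

```ts
import { Untar } from "@std/archive/untar";
import { ensureDir } from "@std/fs/ensure-dir";
import { ensureFile } from "@std/fs/ensure-file";
import { copy } from "@std/io/copy";

using archive = await Deno.open("archive.tar");

for await (const entry of new Untar(archive)) {
  console.log(entry.fileName);
  if (entry.type === "directory") {
    await ensureDir(entry.fileName);
    continue;
  }
  await ensureFile(entry.fileName);
  using file = await Deno.open(entry.fileName, { write: true });
  await copy(entry, file); // entry itself is a Reader
}
```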
References

- @std/tar: lowlighter/libs#76
- @std/archive: #5986