Making blob.stream() more stream friendly #42264
Comments
There used to be an array copy while reading the blob stream; maybe that is the reason. Can you try on the master branch? There is still a full copy when creating a stream, though.
Here is where it reads the whole blob into an ArrayBuffer before it starts streaming the data in chunks (Lines 325 to 328 in a7164fd).
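Roughly, the eager pattern being pointed at looks like the sketch below. This is an illustration of the behaviour described in this thread, not the actual Node.js source; the function name and chunk size are made up:

```js
const { ReadableStream } = require('node:stream/web');

// Illustrative sketch of the eager behaviour: the whole blob is read
// into one ArrayBuffer before any chunk is enqueued.
function eagerBlobStream(blob, chunkSize = 65536) {
  return new ReadableStream({
    async start(ctrl) {
      // Full copy of the blob up front, costing ~blob.size bytes.
      const bytes = new Uint8Array(await blob.arrayBuffer());
      for (let i = 0; i < bytes.length; i += chunkSize) {
        // Every chunk is a view onto the same underlying buffer.
        ctrl.enqueue(bytes.subarray(i, i + chunkSize));
      }
      ctrl.close();
    }
  });
}
```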
The pull should instead fetch only the needed chunk's ArrayBuffer, maybe something like this pseudocode:

```js
async pull(ctrl) {
  const slice = await this.slice(start, end).arrayBuffer()
  ctrl.enqueue(slice)
}
```
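Fleshing that out, a minimal sketch of the idea (lazyBlobStream and the 64 KiB chunk size are illustrative, not an existing API): each pull slices the blob, so only that slice is materialized.

```js
const { ReadableStream } = require('node:stream/web');

// Sketch of a lazy, pull-based variant: each pull slices only the bytes
// it needs, so at most ~chunkSize bytes are materialized at a time
// instead of blob.size.
function lazyBlobStream(blob, chunkSize = 65536) {
  let offset = 0;
  return new ReadableStream({
    async pull(ctrl) {
      if (offset >= blob.size) {
        ctrl.close();
        return;
      }
      const slice = await blob.slice(offset, offset + chunkSize).arrayBuffer();
      offset += slice.byteLength;
      ctrl.enqueue(new Uint8Array(slice)); // each chunk owns its own buffer
    }
  });
}
```

Back-pressure from the consumer then decides how many pulls happen, so memory stays proportional to the chunk size rather than to blob.size.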
It's somewhat unexpected that all chunks use the same underlying buffer and that each chunk has a byte offset:

```js
chunks[0].buffer === chunks[1].buffer // currently true, should rather be false
```

Expected instead:

```js
chunks[0].buffer !== chunks[1].buffer
chunks[0].byteOffset === 0
```
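A quick way to check that observation on a given Node.js version (a hedged check script; the blob size is arbitrary, and whether more than one chunk is produced depends on the implementation's chunking):

```js
const { Blob } = require('node:buffer');

async function inspectChunks() {
  const blob = new Blob([new Uint8Array(1024 * 1024)]);
  const reader = blob.stream().getReader();
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  console.log('chunks:', chunks.length);
  console.log('shared buffer:', chunks[0].buffer === chunks[1]?.buffer);
  console.log('first byteOffset:', chunks[0].byteOffset);
}

inspectChunks();
```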
Version
17.5
Platform
mac
Subsystem
No response
What steps will reproduce the bug?
How often does it reproduce? Is there a required condition?
It reproduces every time. It's not clear to everyone that stream() will internally use blob.arrayBuffer().
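For reference, one way to see the spike (an illustrative sketch, not taken from the original report; the 256 MiB size is arbitrary and the exact numbers depend on GC timing):

```js
const { Blob } = require('node:buffer');

// With the eager implementation, reading just the first chunk already
// forces a full copy of the blob into a new ArrayBuffer.
async function main() {
  const blob = new Blob([new Uint8Array(256 * 1024 * 1024)]);

  console.log('before:', process.memoryUsage().arrayBuffers);
  const reader = blob.stream().getReader();
  await reader.read(); // pull a single chunk
  console.log('after first chunk:', process.memoryUsage().arrayBuffers);
}

main();
```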
What is the expected behavior?
To yield small chunks of data, so it does not allocate as much memory as blob.size.
What do you see instead?
I see memory spikes: when I create large blobs and later try to read them, memory usage goes way up.
This is one of the reasons we are sticking with fetch-blob instead of Node's own built-in Blob implementation. I wish I did not have to use this package anymore, and that I could also take advantage of transferring Blobs over workers with the built-in transferable Blob.
Additional information
No response