Description
In this video https://youtu.be/6EDaayYnw6M?t=1202 the speaker talks about returning a Blob from the Fetch API.
In theory, you can return a Blob early if you know the Content-Length, i.e. the size of the Blob. The content does not have to be available immediately.
You could, for example, make a request for a 4 GB file and have the Blob returned right after the HTTP response arrives, without having all the data at hand. That is to say: the response has a Content-Length and isn't compressed.
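To make the idea concrete, here is a minimal sketch in TypeScript. Nothing here is an existing or proposed API: `RemoteBlob`, its `from()` factory, and its `slice()` method are hypothetical names. It assumes the server sends a Content-Length, serves the body uncompressed, and honors Range requests.

```ts
// Hypothetical sketch: a Blob-like wrapper whose size is known up front
// (from Content-Length) while the bytes are only fetched on demand.
class RemoteBlob {
  constructor(private url: string, public readonly size: number) {}

  // Probe the resource once; refuse if the length is unknown or the
  // body is compressed (the size would no longer match the raw bytes).
  static async from(url: string): Promise<RemoteBlob> {
    const res = await fetch(url, { method: "HEAD" });
    const length = res.headers.get("content-length");
    const encoding = res.headers.get("content-encoding");
    if (!length || (encoding && encoding !== "identity")) {
      throw new Error("need an uncompressed response with a Content-Length");
    }
    return new RemoteBlob(url, Number(length));
  }

  // Fetch only the requested byte window with an HTTP Range request.
  async slice(start: number, end: number): Promise<ArrayBuffer> {
    const res = await fetch(this.url, {
      headers: { Range: `bytes=${start}-${end - 1}` },
    });
    if (res.status !== 206) throw new Error("server ignored the Range header");
    return res.arrayBuffer();
  }
}
```

The point is that `size` is usable the moment the response headers arrive; the body is only transferred for the windows you actually slice.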
This idea was brought up long before in Node.js, by jasnell, about four years ago:
> For Blob in general, it is really nothing more than a persistent allocated chunk of memory. It would be possible to create a Blob from one or more TypedArray objects. I'm sketching out additional APIs for the http and http2 modules that would allow a response to draw data from a Blob rather than through the Streams API. There is already something analogous in the http2 implementation in the form of the respondWithFile() and respondWithFD() APIs in the http2 side. Basically, the idea would be to prepare chunks of allocated memory at the native layer, with data that never passes into the JS layer (unless absolutely necessary to do so), then use those to source the data for responses. In early benchmarking this yields a massive boost in throughput without the usual backpressure control issues.
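As a side note, constructing a Blob from TypedArray chunks is already possible in today's JS API; the quoted proposal goes further by keeping those chunks at the native layer so the data never crosses into JS:

```ts
// Building a Blob from TypedArray chunks (works today in browsers and
// Node.js >= 18, where Blob is a global).
const chunkA = new Uint8Array([0x50, 0x4b, 0x03, 0x04]); // arbitrary bytes
const chunkB = new TextEncoder().encode("more payload");
const blob = new Blob([chunkA, chunkB]);
console.log(blob.size); // 4 + 12 = 16 bytes
```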
I'm still interested in this idea, but I have no idea how to sketch this out or how best to implement it.
I built an HTTP File-like class that operates on byte-range partial requests and a known content-length.
The goal was to take a zip from a remote source and pass it to a zip parser that could slice and read the central directory, retrieve a list of all the files it contained, and jump/seek within the blob for just the parts you needed, without downloading the whole zip file. It would then make multiple partial HTTP requests later on, one for each file you read. A rough sketch of that central-directory lookup follows.
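Building on the hypothetical `RemoteBlob` sketched above, the zip use case looks roughly like this: read only the tail of the archive to locate the End Of Central Directory record, then range-request the central directory itself. The URL is made up; the offsets follow the zip format (the EOCD record is 22 bytes plus an optional comment of up to 65535 bytes).

```ts
const blob = await RemoteBlob.from("https://example.com/archive.zip");

// One small request covers the EOCD record wherever it sits in the tail.
const tailStart = Math.max(0, blob.size - 65557); // 22 + 65535
const tail = new DataView(await blob.slice(tailStart, blob.size));

// Scan backwards for the EOCD signature 0x06054b50 (little-endian "PK\x05\x06").
for (let i = tail.byteLength - 22; i >= 0; i--) {
  if (tail.getUint32(i, true) === 0x06054b50) {
    const cdSize = tail.getUint32(i + 12, true);   // central directory size
    const cdOffset = tail.getUint32(i + 16, true); // central directory offset
    const centralDirectory = await blob.slice(cdOffset, cdOffset + cdSize);
    // ...parse the entries, then range-request each file's data on demand.
    break;
  }
}
```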
It's a pretty cool optimization concept.