Streams based uploading with Content-Length (not chunked encoding) #95
If we want to eventually support fixed-length content for streams like this, it might be nice to let the stream specialize for it. The internal source could provide a "total length" getter that the stream can then expose. So things like file streams and fixed-string streams could then expose their total length. Fetch could then just DTRT if the total length is available.
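A minimal sketch of that idea, assuming an invented `totalLength` getter on both the underlying source and the stream (neither exists in the Streams or Fetch specs):

```js
// Sketch only: `totalLength` is a hypothetical extension point, not a real
// Streams API. A source with a known size (a file, a fixed string, etc.)
// reports it, and the stream re-exposes it for fetch to read.
class KnownLengthReadableStream extends ReadableStream {
  #totalLength;
  constructor(underlyingSource, strategy) {
    super(underlyingSource, strategy);
    this.#totalLength = underlyingSource?.totalLength ?? null;
  }
  get totalLength() {
    return this.#totalLength;
  }
}

const body = new KnownLengthReadableStream({
  totalLength: 11, // byte length of "hello world"
  start(controller) {
    controller.enqueue(new TextEncoder().encode("hello world"));
    controller.close();
  },
});

// Fetch could then "do the right thing": send `Content-Length: 11` here
// instead of `Transfer-Encoding: chunked`, because body.totalLength !== null.
```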
Do we really need this in a world that's moving towards H2?
I guess then we'll hear this come up a lot when folks start using streams...
If the …
@jakearchibald I'm not sure I understand the question. The setup is supposed to be such that anything but streams uses `Content-Length`.
I couldn't figure out why this was complicated; I realise now it's because …
The problem is that when it's set (we can make that work somehow) there's no guarantee the stream will meet the requirements. There are some solutions to that (padding 0x00 bytes, stop when you hit the limit, etc.), though it's unclear how those should interact with timeouts. Furthermore, given that H/2 doesn't have this problem at all, it's not clear to me it's worth solving unless many folks find H/1 chunked a problem and cannot migrate to H/2.
Having a …
That's fair. If we really need it then we'll need to address the questions above. Perhaps if a custom …
We could allow the setting of `Content-Length`. I think if the stream closes before enough content is provided, we abort the request. Once the stream provides `Content-Length` bytes, the stream can be cancelled if it tries to provide more data.
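A sketch of those semantics with today's primitives (assuming byte chunks, i.e. `Uint8Array`s; how fetch itself would define the enforcement is exactly what's being debated here):

```js
// Enforce a declared length on a body stream: abort if the source closes
// early, and stop consuming once `expected` bytes have been produced.
function enforceLength(expected) {
  let seen = 0;
  return new TransformStream({
    transform(chunk, controller) {
      seen += chunk.byteLength;
      if (seen > expected) {
        // Too much data: forward only the declared amount, then cut the
        // source off (the "cancel the stream" behaviour described above).
        const keep = chunk.byteLength - (seen - expected);
        controller.enqueue(chunk.subarray(0, keep));
        controller.terminate();
        return;
      }
      controller.enqueue(chunk);
    },
    flush() {
      if (seen < expected) {
        // Stream closed before enough content was provided: error out so
        // the request is aborted rather than sent short.
        throw new TypeError(`body ended after ${seen} of ${expected} bytes`);
      }
    },
  });
}

// e.g. fileStream.pipeThrough(enforceLength(1024))
```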
Making such restrictions (or forbidding it from being set more than once) is actually somewhat tricky given that all the header APIs are generic. I wonder if a nicer alternative could be wrapping a ReadableStream somehow:
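The snippet that followed isn't preserved here, but the shape being hinted at is presumably something like the sketch below (names invented; nothing like this was ever specced):

```js
// Carry the length alongside the stream instead of going through the
// generic Headers API, so fetch can trust it without header validation.
class BodyWithLength {
  constructor(readable, length) {
    this.readable = readable; // a ReadableStream of bytes
    this.length = length;     // exact byte length the caller promises to send
  }
}

// fetch(url, { method: "POST", body: new BodyWithLength(stream, 1024) })
// could then emit `Content-Length: 1024` instead of chunked encoding.
```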
Allowing …
And then have validation for the value and switching between that and chunked in the fetch layer somewhere? That seems rather unpredictable.
Do we need validation if it's been given the all-clear via a preflight? As for the switching, I'm used to it since Node.js does the same thing with responses. From https://nodejs.org/api/http.html#http_http_request_options_callback: …
Of course that doesn't mean it's the right thing for fetch to do.
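For reference, the Node behaviour being pointed to (the linked docs note that supplying a `Content-Length` header disables the default chunked encoding) looks roughly like this:

```js
const http = require("node:http");

// With Content-Length set, Node sends the body as-is (no chunking).
const fixed = http.request("http://example.test/upload", { // placeholder URL
  method: "POST",
  headers: { "Content-Length": 11 },
});
fixed.end("hello world");

// Without it, a streamed body falls back to Transfer-Encoding: chunked.
const chunked = http.request("http://example.test/upload", { method: "POST" });
chunked.write("hello ");
chunked.end("world");
```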
Dunno, what's the processing model if I pass …
I'd also like to stress again that this seems like a lot of complexity to make H/1 scenarios work a little better. I guess it also helps with the case of letting the server reject early in H/2, but usually those kinds of limits are already enforced on the client.
/me wonders if we need a new, request-specific header with the semantics "I will send at most this many bytes, but only use that to make policy decisions, not to delimit the message."
@yutakahirano @yoichio is this still something you want to pursue?
I don't have plans to pursue this.
I'm running into this issue regarding the lack of `Content-Length`. The end result is that a download is non-resumable, since there is no `Content-Length` header. Unless there are some other headers you can send when doing …
Closing this per the earlier discussion. Given that …
@matthewjumpsoffbuildings and anyone else using Cloudflare Workers: you can use `FixedLengthStream`.
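A small sketch of that approach (this relies on `FixedLengthStream`, a Cloudflare Workers–specific identity `TransformStream`, not a web standard):

```js
export default {
  async fetch(request) {
    // Tell the runtime the exact byte length up front; the outgoing
    // response then carries a Content-Length header instead of being chunked.
    const { readable, writable } = new FixedLengthStream(11);

    const writer = writable.getWriter();
    writer.write(new TextEncoder().encode("hello world")); // exactly 11 bytes
    writer.close();

    return new Response(readable, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```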
For the first release of Streams/Fetch integration, we've decided to choose the always-chunked approach. See these minutes: https://etherpad.mozilla.org/streams-f2f-july
There are still some issues to resolve for a better user experience. For example, Domenic pointed out that it would be a pain for Amazon S3 users at https://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0420.html. This issue is a reminder and a place for discussion to happen in the future.
Future plan described by @domenic (moved from whatwg/streams#378)