perf(ext/fetch): consume body using ops #16038
Conversation
ci/bench failure is fixed by #16055
Did a huge refactor; the code is much cleaner now and it also greatly improves performance for fetch requests with large bodies (body size: 2gb). Measured the time of:

console.time('body');
await res.arrayBuffer();
console.timeEnd('body');
Towards #16046
Thank you for starting work on this. I think this is a good direction, but I would like to see some changes to the implementation to make it align with #16046 more closely. Namely, I think this optimization should be decoupled. Specifically:
With these changes the improvements would be noticeable for all resource-backed streams, not just fetch. For example:

const file = await Deno.open("./my_very_large_file.txt");
const resp = new Response(file.readable);
const txt = await resp.text();
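The gain for resource-backed streams comes from reading the whole resource into one pre-sized buffer in a single op, instead of surfacing each chunk through a ReadableStream. A minimal sketch of that pattern in plain Rust — `read_all_with_hint` is a hypothetical name, not the actual op:

```rust
use std::io::Read;

// Hedged sketch of the read-all pattern behind op_read_all: read the entire
// source into one buffer, pre-sized from a hint, looping over a fixed-size
// scratch chunk until EOF.
fn read_all_with_hint<R: Read>(
    mut src: R,
    size_hint: usize,
) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::with_capacity(size_hint);
    let mut chunk = [0u8; 64 * 1024];
    loop {
        let n = src.read(&mut chunk)?;
        if n == 0 {
            break; // EOF
        }
        buf.extend_from_slice(&chunk[..n]);
    }
    Ok(buf)
}
```

With an accurate hint the destination buffer is allocated exactly once, which is where most of the speedup for large bodies comes from.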
@lucacasonato PTAL
@@ -168,6 +169,26 @@ async fn op_read(
  resource.read_return(buf).await.map(|(n, _)| n as u32)
}

#[op]
async fn op_read_all(
Nice. #16115 will allow us to get rid of multiple allocations in this op in a follow up.
ext/http/lib.rs
fn size_hint(&self) -> (u64, Option<u64>) {
  // TLS max packet size is 16kb, so we use that as a default.
  (16 * 1024 + 256, self.size)
Use the Request::size_hint() here, as described above.
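As a sketch of the suggestion (illustrative names, not deno's actual code): a hyper-style (lower, upper) size hint can drive the initial buffer allocation, rather than hardcoding the TLS packet size.

```rust
// Illustrative sketch: turn a (lower, upper) body size hint into an initial
// buffer capacity. A known upper bound lets us allocate once; otherwise we
// fall back to the lower bound and let the buffer grow.
fn initial_capacity(hint: (u64, Option<u64>)) -> usize {
    let (lower, upper) = hint;
    upper.unwrap_or(lower) as usize
}
```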
Very close. Great refactor @marcosc90
I just realized there was one additional comment that I forgot to press "submit" on last time 🤦
This is also what my earlier comment regarding Request::size_hint was referring to. Sorry!
ext/http/lib.rs
let content_length = if request
  .headers()
  .get(hyper::header::CONTENT_ENCODING)
  .is_none()
{
  request
    .headers()
    .get(hyper::header::CONTENT_LENGTH)
    .and_then(|v| v.to_str().ok())
    .and_then(|v| v.parse::<u64>().ok())
} else {
  None
};
This can be replaced with a call to Request::size_hint(). This will return the lower and upper bound for the body size. You can return this upper and lower bound from HttpStreamResource::size_hint().
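A hedged sketch of the rule in the snippet above as a standalone function (hypothetical signature; the real code reads hyper header values): Content-Length is only trusted when no Content-Encoding is present, because a compressed body's decoded size differs from the header value.

```rust
// Only use Content-Length as a buffer-size hint when the body is not
// content-encoded; otherwise the decoded size is unknown.
fn body_content_length(
    content_encoding: Option<&str>,
    content_length: Option<&str>,
) -> Option<u64> {
    if content_encoding.is_some() {
        return None;
    }
    content_length.and_then(|v| v.parse::<u64>().ok())
}
```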
Done @lucacasonato, PTAL
let mut buffer = Vec::with_capacity(size);
loop {
  let tmp = ZeroCopyBuf::new_temp(vec![0u8; 64 * 1024]);
I presume the tmp could be moved out of the loop and only allocated once?
error[E0382]: use of moved value: `tmp`
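The E0382 above can be reproduced in miniature: when the callee takes ownership of the buffer, a buffer created outside the loop would be moved on the first iteration and be unusable afterwards, so it must be re-created each time. Names below are illustrative, not the actual op code.

```rust
// `consume` takes ownership of the Vec, mimicking how the read op takes
// the ZeroCopyBuf by value.
fn consume(v: Vec<u8>) -> usize {
    v.len()
}

fn total_consumed(iterations: usize) -> usize {
    let mut total = 0;
    for _ in 0..iterations {
        // Must be allocated inside the loop: `tmp` is moved into `consume`,
        // so hoisting it out of the loop triggers E0382 (use of moved value).
        let tmp = vec![0u8; 64];
        total += consume(tmp);
    }
    total
}
```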
LGTM!
LGTM too, massive effort @marcosc90!
This PR improves performance of InnerBody.consume:

- *read_all ops to read the full request/response body
- bypasses ReadableStream when possible
- uses the content-length header on HTTP requests to allocate the full buffer

Benchmarks:

- 128kb binary POST (ext/http)
- POST JSON (ext/http)
- Fetch: consuming a 2gb ArrayBuffer through fetch (Node / Undici vs. main vs. this patch)

Measured with: /usr/bin/time --verbose {command}