Use chunked REST serialization for large REST responses #88311

Can we assert this in the outer doWrite, ensuring that no other writes occur when a chunked write is active?

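For illustration, a minimal sketch of what such an assertion could look like (the surrounding handler shape is an assumption, not this PR's actual doWrite):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

// Sketch of the suggested assertion only; the surrounding handler shape is assumed,
// not the actual doWrite from this PR.
final class DoWriteAssertSketch {
    private Object currentChunkedWrite; // non-null while a chunked response is being streamed

    void doWrite(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        // no unrelated write may be started while a chunked write is active
        assert currentChunkedWrite == null : "write while a chunked write is active: " + msg;
        ctx.write(msg, promise);
    }
}
```
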
This looks slightly odd to me; it might be me not fully understanding whether a write can happen between the channel becoming writable and the channelWritabilityChanged method being called. But if enqueueWrite ever enqueues the readyResponse, it seems wrong to start writing the chunk now? I suppose you assume that queuedWrites is empty, but I am not sure that is always the case. If a split write happens first and then a chunked write (due to pipelining), would we then not risk the chunk being written here first?

🤦 again, you're right, this might be an issue we could run into. I pushed ee09bb9 to, as you point out, cover the case where we only queue the first chunk because there's a pending split write of a huge non-chunked message.

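A rough sketch of the ordering hazard and the fix being discussed (only queuedWrites and channelWritabilityChanged are names taken from the discussion; the class and helper shapes are assumptions, not the PR's handler code):

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;

import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only, not the PR's actual handler code.
class ChunkOrderingSketch extends ChannelDuplexHandler {
    // earlier writes (e.g. the remainder of a split, non-chunked response) still waiting to go out
    private final Queue<Object> queuedWrites = new ArrayDeque<>();

    // called with the next chunk of the response currently being streamed
    void writeOrEnqueueChunk(ChannelHandlerContext ctx, Object chunk) {
        if (queuedWrites.isEmpty() == false || ctx.channel().isWritable() == false) {
            // something is already queued ahead of this chunk (or the channel is saturated):
            // queue the chunk behind it rather than writing it directly, preserving response order
            queuedWrites.add(chunk);
        } else {
            ctx.writeAndFlush(chunk);
        }
    }

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        // drain strictly in FIFO order so a queued split write always precedes later chunks
        while (ctx.channel().isWritable() && queuedWrites.isEmpty() == false) {
            ctx.writeAndFlush(queuedWrites.poll());
        }
        ctx.fireChannelWritabilityChanged();
    }
}
```
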
I think Netty does catch all here, but it would feel safer to clear currentChunkedWrite and increment writeSequence either before calling finish or in a finally, in case any of the listeners throws.

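A minimal sketch of the suggested cleanup (names beyond currentChunkedWrite and writeSequence are assumptions):

```java
// Sketch of the suggested finally-based cleanup, not the PR's actual code.
final class FinishChunkedWriteSketch {
    private Object currentChunkedWrite; // the chunked response currently being streamed
    private int writeSequence;          // pipelining sequence number of the next response

    void finishChunkedWrite() {
        Object finished = currentChunkedWrite;
        try {
            completeListeners(finished);  // completion listeners may throw
        } finally {
            // always release the slot and advance the sequence, even if a listener threw,
            // so the handler cannot get stuck on an already-finished write
            currentChunkedWrite = null;
            writeSequence++;
        }
    }

    private void completeListeners(Object finishedWrite) {
        // placeholder for completing the write's promise and notifying its listeners
    }
}
```
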
Yeah, unless we have a bug where we finish the same thing twice we shouldn't see anything throw here, but it still makes sense to me -> put a finally here.

I'd think we would want to continue rather than break also if writeChunk returns false?

Right 🤦, that was a real bug, thanks for catching it. I added a specific test with a message that outputs small chunks to reproduce and cover this now (see 9b06800).

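A rough sketch of the loop shape in question, assuming (this is an assumption, not taken from the PR) that writeChunk returns true only once the whole chunked response has been written: a false return just means more chunks are pending, so the loop should keep going while the channel stays writable rather than breaking out.

```java
import io.netty.channel.ChannelHandlerContext;

// Illustrative only; assumes writeChunk returns true once the chunked response is complete.
final class ChunkDrainLoopSketch {
    private Object currentChunkedWrite;

    void drainChunks(ChannelHandlerContext ctx) {
        while (ctx.channel().isWritable() && currentChunkedWrite != null) {
            if (writeChunk(ctx, currentChunkedWrite)) {
                currentChunkedWrite = null; // finished; other queued work can proceed
            }
            // not finished yet: keep looping and write the next chunk rather than breaking,
            // otherwise a response made of many small chunks would stall until the next event
        }
    }

    private boolean writeChunk(ChannelHandlerContext ctx, Object chunkedWrite) {
        // placeholder: serialize and write one chunk, report whether the response is complete
        return true;
    }
}
```
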
Can we assert that:
to ensure progress of chunking in the wide variety of implementations we will eventually see?

We can be more extreme IMO and just assert length > 0 and done == false before that. We shouldn't be calling this once we're done, and we should never emit an empty buffer, not even for the last invocation, because that would just be needless overhead for an empty last chunk. Adding those assertions.

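As a sketch of what those two assertions could look like at the call site (the body interface shown is a stand-in for illustration, not the PR's actual API):

```java
import java.nio.ByteBuffer;

// Stand-in for the chunked body contract, used only to illustrate the assertions.
interface ChunkedBodySketch {
    boolean isDone();
    ByteBuffer encodeChunk(int sizeHint);
}

final class ChunkProgressAssertions {
    static ByteBuffer encodeNextChunk(ChunkedBodySketch body, int sizeHint) {
        // never ask a finished body for more chunks
        assert body.isDone() == false : "encodeChunk called after the body reported done";
        ByteBuffer chunk = body.encodeChunk(sizeHint);
        // every chunk must be non-empty so serialization always makes progress
        // and there is no needless empty last chunk
        assert chunk.remaining() > 0 : "chunked body emitted an empty chunk";
        return chunk;
    }
}
```
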
I am not sure we can always have that guarantee for the last chunk, but as long as we emit JSON it sounds plausible, so OK by me for now.