Trace payload chunking #840
Is there a way to call the client multiple times for each request in the batch instead of changing the client to do batching itself? I think it's unwise to make the client handle multiple requests simultaneously as it will greatly increase the complexity of the transport code.
It will also make this batching feature more brittle and tightly coupled to how HTTP works instead of being agnostic to the means of transport, which will make it difficult (if not impossible) to adopt new means of transport in the future.
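The caller-driven alternative suggested here could look something like the following Ruby sketch. Everything in it is an illustrative assumption rather than the project's actual API: the `chunk_traces` helper, the size limit, and the `client.send_request` call are made up to show the shape of "split upstream, one plain request per chunk".

```ruby
# Assumed payload limit; the real limit would come from agent constraints.
MAX_CHUNK_BYTES = 10 * 1024 * 1024

# Greedily split already-encoded traces so each chunk stays under the limit.
def chunk_traces(encoded_traces, max_bytes: MAX_CHUNK_BYTES)
  chunks = [[]]
  size = 0

  encoded_traces.each do |trace|
    if size + trace.bytesize > max_bytes && !chunks.last.empty?
      chunks << [] # start a new chunk once the current one would overflow
      size = 0
    end
    chunks.last << trace
    size += trace.bytesize
  end

  chunks
end

# The client stays single-request; the batching loop lives in the caller:
#   chunk_traces(encoded).map { |chunk| client.send_request(chunk) }
```

This keeps the transport code unaware that a flush was ever split, which is the decoupling argued for above.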
We need to know which encoder we are using in order to break down the traces being flushed into multiple chunks. The client is currently responsible for that information, in the form of `client.current_api.spec.traces.encoder`. Also, when downgrading, the encoder might change, and downgrading is currently handled by the client.

I tried to prototype a different approach just now, moving the chunking logic as far up the call chain as I believe it makes sense: feat/subdivide-payloads...tmp-feat/subdivide-payloads

I still don't like this one; too many layers are mixed together.

The main issue is that chunking, in a perfect scenario, would be done before we start calling the client. But the fact that we need the encoder, which is two levels down (inside the current `Spec` instance), and that the encoder can change if we need to downgrade the API, seems to make it quite tricky.

Next, I'm going to try to move `current_api` into the `transport` instance and handle API versioning there, including the downgrading logic. I'll report back on those results.
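As a rough sketch of that last idea, in which every class, method, and version name is an assumption rather than the real code: the transport could own the active API version and its encoder, so chunking can ask for the encoder up front and the downgrade logic lives in one place.

```ruby
# Hypothetical transport that owns API selection and downgrade handling.
class TransportWithVersioning
  # e.g. { 'v0.4' => msgpack_encoder, 'v0.3' => json_encoder }, preferred first.
  def initialize(encoders_by_api)
    @encoders_by_api = encoders_by_api
    @current_api = encoders_by_api.keys.first
  end

  attr_reader :current_api

  # Chunking code can now query the transport for the encoder directly,
  # instead of reaching through client.current_api.spec.traces.encoder.
  def encoder
    @encoders_by_api[@current_api]
  end

  # On an unsupported-version response, fall back to the next API.
  def downgrade!
    versions = @encoders_by_api.keys
    idx = versions.index(@current_api)
    raise 'no API version left to downgrade to' if idx + 1 >= versions.size

    @current_api = versions[idx + 1]
  end
end
```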
Okay, that context is helpful, thanks for the explanation.

The existing design made certain assumptions about encoding, which is why it was buried lower in the transport: encoding was considered a detail of the current API, which I think still holds true to a great extent.

It does raise some legitimate questions about how the design could change to accommodate batching, though. Some possible paradigms I can think of:

1. Introduce a `Batcher`, have it encapsulate this logic entirely, and use the client to drive individual requests.
2. `Batching` could be its own module that is composed into the existing `HTTP::Client`.

There might be more ways of handling this, but the key difference is that option 1 is explicit in `Client` usage (one request, one response), while option 2 is auto-magic: "don't worry about the details, we'll figure it out."

Personally, I'm in favor of number 1, because it keeps the responsibilities of the API/Client as small as possible (less complexity) and doesn't get us into weird scenarios where we have to handle a request that was forked into multiple requests in code that isn't concerned with batching (e.g. `Client#send_request`). Instead, we can keep all this batching code (hopefully) in a neat little module that knows how to deal with multiple requests and extends the capability of the `Client` in a compartmentalized way. (We could even go a step further and extract the "retry" functionality into a similar module for consistency, something I might want to undertake anyway.)

Let me know your thoughts, or if you have alternative paradigms to suggest!
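A minimal sketch of what option 1 might look like. The `Batcher` class shape, the `max_per_request` knob, and the fake client below are all illustrative assumptions, not the project's real interfaces; the point is that the one-request/one-response client stays unchanged.

```ruby
# Hypothetical batcher that owns the batching concern and drives the
# existing single-request client.
class Batcher
  def initialize(client, max_per_request:)
    @client = client
    @max_per_request = max_per_request
  end

  # Splits the items and sends one plain request per slice, returning all
  # responses so the caller can inspect partial failures.
  def send_batch(items)
    items.each_slice(@max_per_request).map do |slice|
      @client.send_request(slice)
    end
  end
end
```

Usage would be `Batcher.new(client, max_per_request: 100).send_batch(traces)`: the forked-request bookkeeping never leaks into `Client#send_request` itself.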
What if there are no successful responses?
I see `update_priority_sampling_sampler` might be handling this?
You are correct, this is just a matter of extracting a somewhat complex logic block into its own method.