Trace payload chunking #840
Conversation
lib/ddtrace/transport/http/client.rb
Outdated
# Get response from API
response = yield(current_api, env)
# Get responses from API
responses = yield(current_api, env)
Is there a way to call the client multiple times for each request in the batch instead of changing the client to do batching itself? I think it's unwise to make the client handle multiple requests simultaneously as it will greatly increase the complexity of the transport code.
It will also make this batching feature more brittle and tightly coupled to how HTTP works instead of being agnostic to the means of transport, which will make it difficult (if not impossible) to adopt new means of transport in the future.
We need to know what encoder we are using in order to break the traces being flushed down into multiple chunks.
The client is currently responsible for that information, in the form of client.current_api.spec.traces.encoder.
Also, when downgrading, the encoder might change. Downgrading is currently handled by the client.
I tried to prototype a different approach just now, moving the chunking logic as far up the call chain as I believe it makes sense: feat/subdivide-payloads...tmp-feat/subdivide-payloads
I still don't like this one; too many layers are mixed together.
The main issue is that chunking, in a perfect scenario, would be done before we start calling the client. But the facts that we need the encoder, which is two levels down (inside the current Spec instance), and that the encoder can change if we need to downgrade the API, seem to make it quite tricky.
Next, I'm going to try to move current_api into the transport instance and handle API versioning there, including the downgrading logic.
I'll report back on those results.
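To make that coupling concrete, here's a minimal sketch of the access path in question; downgrade! is a hypothetical method name used only for illustration:

```ruby
# Sketch of the coupling described above (illustrative only).
# The encoder lives two levels down, inside the current API spec:
encoder = client.current_api.spec.traces.encoder

# Downgrading, also handled by the client, can swap the active API,
# and with it the encoder (downgrade! is a hypothetical name here):
client.downgrade!
client.current_api.spec.traces.encoder # may now be a different encoder
```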
Okay, that context is helpful, thanks for that explanation.
The existing design had certain assumptions about encoding, hence why it was buried lower down in the transport: encoding was considered a detail of the current API, which I think still holds true to a great extent.
I think it brings up some legitimate questions about how the design could change to accommodate batching, though. Some possible paradigms I can think of:
- Expose the encoder, wrap the client with some kind of Batcher, then have the batcher encapsulate this logic entirely and use the client to drive individual requests. Batching could be its own module that can be composed into the existing HTTP::Client. (See the sketch after this comment.)
- Assert that encoding requests is a detail of the API and that it's acceptable for the API to split requests on the client's behalf. Consequently, you'd make the API spec responsible for batching and splitting one large request into smaller ones (which is what I think you were effectively doing).
There might be more ways of handling this, but the key difference between these is basically that option 1 is explicit in Client usage (one request, one response) and option 2 is auto-magic: "don't worry about the details, we'll figure it out."
Personally I'm in favor of number 1, because it keeps the responsibilities of the API/Client as small as possible (less complexity), and doesn't get us into weird scenarios where we have to handle a request that was forked into multiple requests in code that isn't concerned with batching (e.g. Client#send_request). Instead, we can keep all this batching code (hopefully) in a neat little module that knows how to deal with multiple requests and extends the capability of the Client in a compartmentalized way. (We could even go a step further and extract the "retry" functionality into a similar module for consistency, something I might want to undertake anyway.)
Let me know your thoughts or if you have some alternative paradigms to suggest!
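For what it's worth, a rough sketch of what option 1 could look like; all names here (send_batched_traces, max_bytes, the encoder accessor, the encode method) are hypothetical, not the actual API:

```ruby
# A rough sketch of option 1 (hypothetical names throughout): a Batching
# module that splits traces into size-bounded chunks and drives the client
# one ordinary request at a time, keeping Client#send_traces untouched.
module Batching
  def send_batched_traces(traces, max_bytes:)
    encoder = current_api.spec.traces.encoder # assumed accessor

    chunk = []
    chunk_size = 0
    responses = []

    traces.each do |trace|
      encoded_size = encoder.encode(trace).bytesize # assumed encoder API

      # Flush the current chunk before it would exceed the payload limit.
      if !chunk.empty? && chunk_size + encoded_size > max_bytes
        responses << send_traces(chunk) # one request, one response
        chunk = []
        chunk_size = 0
      end

      chunk << trace
      chunk_size += encoded_size
    end

    responses << send_traces(chunk) unless chunk.empty?
    responses
  end
end

# Composed into the existing client, e.g.:
# Transport::HTTP::Client.include(Batching)
```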
Successfully tested locally with a few real example applications.
Updated with master, ready for review.
There were non-trivial changes to
But otherwise, there is no part of this PR that was touched only for the sake of refactoring: all components touched required changes for correctness.
I didn't read all of this, but note that if you split a trace in two it will break stats. I suspect the "payload too big" problem happens with big traces anyway, so it would be necessary to do that. An implementation like that will complicate tracer code a lot, I suspect, and not only in Ruby but in other languages too. Instead, I think it might be a better idea to explore "span batching" in the agent again: add an endpoint which receives a set of random spans and reconstructs traces on the agent side. This brings many benefits:
This will of course move the memory problem into the trace-agent, from not one client but multiple ones which may be sending to the same endpoint. It should be acceptable, but it's bound to bring new problems and complications and needs to be explored (again).
@gbbr this particular change is to break payloads into smaller sizes by separating individual traces. It is not for breaking traces into smaller pieces. E.g., if one flush interval passes and we have 10 MB of traces, we'll send two payloads of traces to the agent instead of trying to do it in one. This is what we do in a few of the other languages now. Span streaming is a great idea, but it will require significant development/coordination between the tracers and the agent. This change should unblock us for now while we schedule the span streaming investigation/work.
Alright, carry on :) Never mind me then.
return send_traces(traces.lazy)
end
end
end.force
What's force?
Forces a lazy enumerator to eagerly resolve: https://ruby-doc.org/core-2.6.1/Enumerator/Lazy.html#method-i-to_a
I could use #to_a here too (#force is an alias of #to_a), but the #force method only exists for lazy enumerators, which makes it more explicit that we don't want to simply call #to_a on a plain Array here, as that would not accomplish our goal of streaming requests.
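A standalone illustration of the difference, in plain Ruby unrelated to the transport code:

```ruby
# take on a lazy enumerator stays lazy: nothing is computed yet.
enum = (1..Float::INFINITY).lazy.map { |n| n * 2 }.take(4)
enum.class # => Enumerator::Lazy

# force (an alias of to_a on Enumerator::Lazy) eagerly resolves the chain.
enum.force # => [2, 4, 6, 8]

# A plain Array responds to to_a but not to force, so using force makes it
# explicit that we expect to be resolving a lazy pipeline:
[1, 2, 3].respond_to?(:force) # => false
```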
data.length
attr_reader :trace_count

def initialize(data, trace_count)
I don't think this makes sense with the current design, and altering this has a lot of side effects.
Traces::Parcel in the current design is supposed to be a protocol-agnostic package of trace data, to be created by something that doesn't have knowledge of how the transport works. By requiring the Parcel to be given pre-encoded data like this (along with its trace count), it implicitly requires knowledge of the transport and its current API state to properly construct, rendering this an object with strong coupling to internal transport behavior.
That said, I think there's an argument to be made that we should change the design, and that supporting encoding in chunks might also require some different kind of construct.
Short term, maybe we can remedy this by leaving the existing Traces::Parcel as-is, but creating a new Traces::EncodedParcel which results from encoding traces from a parcel during chunking.
Long term, I think perhaps it shouldn't create parcels at all; the transport should only receive generic requests and return generic responses. Any trace-specific behavior should live in some kind of Traces::Transport that wraps a generic transport (HTTP/IO/UDS, etc.) Then we wouldn't need any parcels.
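To illustrate the short-term suggestion, a minimal sketch of what such a class could look like; this is hypothetical, not existing code:

```ruby
# Hypothetical sketch: Traces::Parcel stays as-is (a protocol-agnostic
# package of raw traces), while chunking produces EncodedParcel instances
# that carry already-encoded data plus the count of traces inside.
module Traces
  class EncodedParcel
    attr_reader :data, :trace_count

    # data: an encoded String payload; trace_count: how many traces it holds
    def initialize(data, trace_count)
      @data = data
      @trace_count = trace_count
    end
  end
end
```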
The reason I changed this Parcel is that this Parcel is actually a subtype, a Traces::Parcel; it inherits behaviour from the generic parcel Transport::Parcel, which is agnostic.
I did have the additional information I needed (trace_count) in another carrier object, which I believe was Traces::Request, but the parcel seemed like a better carrier for it. To be fair, I'm not 100% sure about the role of Parcel after the changes, so I'm very much open to changing this design.
👍
During serialization, we now break large collections of traces down into smaller batches.
This is necessary because sending large payloads can cause the receiving server to reject them.
Currently, we send traces to the Datadog agent, which has a limit of 10 MiB per payload (as of v6.14.1).
We therefore break traces down into chunks that are smaller than that limit.
We also discard any single trace that exceeds the limit, as it cannot be broken down any further.
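As a rough sketch of that behaviour (hypothetical encoder interface, not the exact implementation):

```ruby
# Illustrative chunking sketch: encode traces one at a time, group them
# into payloads under the limit, and drop single traces that exceed it.
MAX_PAYLOAD_BYTES = 10 * 1024 * 1024 # 10 MiB agent limit (as of v6.14.1)

def chunk_traces(traces, encoder)
  chunks = []
  current = []
  current_size = 0

  traces.each do |trace|
    encoded = encoder.encode(trace) # assumed: returns an encoded String
    next if encoded.bytesize > MAX_PAYLOAD_BYTES # cannot be subdivided: drop

    # Start a new chunk when adding this trace would exceed the limit.
    if current_size + encoded.bytesize > MAX_PAYLOAD_BYTES
      chunks << current
      current = []
      current_size = 0
    end

    current << encoded
    current_size += encoded.bytesize
  end

  chunks << current unless current.empty?
  chunks
end
```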