[o11y] Drop tail stream events when reaching excessive queue size #5637
base: main
Conversation
Force-pushed from 3a09407 to 7692d3e
tracing::SpanOpen(span.spanId, span.operationName.clone()), span.startTime, spanNameSize);
// If a span manages to exceed the size limit, truncate it by not providing span attributes.
- if (span.tags.size() && messageSize <= MAX_TRACE_BYTES) {
+ if (span.tags.size() && spanTagsSize <= MAX_TRACE_BYTES) {
As the watchful reviewer will notice, this is a minor functional change: We're no longer including the size of the span name in this check since it is included in a different tail event, so we're slightly relaxing the size limit for emitting span tags here.
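To make the distinction concrete, here is a minimal sketch of computing the two size estimates separately, using plain std:: types and hypothetical helper names rather than anything from workerd: the span name is charged to the SpanOpen event, and only the tag payload is compared against MAX_TRACE_BYTES when deciding whether to emit span attributes.

```cpp
#include <cstddef>
#include <map>
#include <string>

// Hypothetical helpers, not the workerd implementation.
size_t estimateSpanNameSize(const std::string& operationName) {
  // Counted against the SpanOpen event, not the span-tags check.
  return operationName.size();
}

size_t estimateSpanTagsSize(const std::map<std::string, std::string>& tags) {
  // Only this estimate is compared against MAX_TRACE_BYTES above.
  size_t total = 0;
  for (const auto& [key, value]: tags) {
    total += key.size() + value.size();
  }
  return total;
}
```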
src/workerd/io/trace-stream.c++
Outdated
event.sequence,
tracing::Log(event.timestamp, LogLevel::WARN,
    kj::str(
        "Dropped ", active->droppedEvents, " tail events due to excessive queueing")),
Open to a different message here if requested.
Injecting a synthetic log seems deceptive. Adding droppedEventCount on the Outcome event seems more natural.
@jmorrell-cloudflare any thoughts here? I think adding a log makes more sense since we have precedent for that in BTW and since it is a lot harder to miss than just having a droppedEventCount, but if you disagree it's also easy to change.
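For reference, a minimal sketch of the synthetic-log approach under discussion, using hypothetical std:: types and names in place of the real tracing::Log / kj machinery: when events were dropped since the last flush, one warning entry describing the drop count is injected ahead of the queued events.

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical types; the real code uses tracing::Log and kj containers.
struct PendingEvent {
  std::string payload;
};

struct WriterState {
  std::deque<PendingEvent> queue;
  size_t droppedEvents = 0;
};

// Before flushing the queue to the tail worker, prepend one synthetic warning
// describing how many events were dropped since the last flush.
void injectDropWarning(WriterState& state) {
  if (state.droppedEvents > 0) {
    state.queue.push_front(PendingEvent{
        "Dropped " + std::to_string(state.droppedEvents) +
        " tail events due to excessive queueing"});
    state.droppedEvents = 0;
  }
}
```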
src/workerd/io/trace-stream.c++
Outdated
// The TailStreamWriterState holds the current client-side state for a collection
// of streaming tail workers that a worker is reporting events to.
struct TailStreamWriterState {
  // The maximum size of the queue, in bytes.
Open to different names for these constants, or to defining them somewhere else.
Note that tests showing that these checks take effect will be provided in a downstream PR (to follow). That PR will also include a discussion of why they are sufficient for almost all use cases.
This is now feature-complete. As noted in a PR comment, tests for this and the rationale will be provided in a downstream PR, but I think we can already discuss the merits of the code changes here.
Event event;

// The approximate size of the event, in bytes.
size_t sizeHint;
We could probably restructure this to pass sizeHint as a parameter to reportImpl() instead of storing it in TailEvent directly? Might be cleaner.
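A rough sketch of that suggestion, with hypothetical names and signatures: the event type stays payload-only, and the size estimate travels as a separate argument.

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical types and signatures sketching the suggestion: TailEvent stays
// payload-only, and the size estimate is passed alongside it instead of being
// stored as a member.
struct TailEventPayload {
  // ... event data only, no sizeHint member ...
};

void reportImpl(const TailEventPayload& event, size_t sizeHint) {
  (void)event;  // the real code would queue the event here
  // sizeHint is used only for queue accounting, so it never needs to live on the event.
  std::printf("queuing event, estimated %zu bytes\n", sizeHint);
}
```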
src/workerd/io/trace-stream.c++
Outdated
auto builder = KJ_ASSERT_NONNULL(current->capability).reportRequest();
auto eventsBuilder = builder.initEvents(current->queue.size());
size_t n = 0;
// KJ_LOG(WARNING, "queue: sending", current->queue.size(), "events", current->queueSize);
To be removed before merge – still using this while tuning test cases
Add tests
Force-pushed from 7692d3e to 70cfa7f
This will avoid excessive memory overhead in a few edge cases.
Force-pushed from 70cfa7f to 51d612b
mar-cf left a comment
My biggest concern is whether it's possible to produce inconsistent traces due to dropping.
// The maximum size of the queue, in bytes.
const size_t maxQueueSize = 2 * 1024 * 1024;
// The estimated overhead of TailEvent wrapping per message. This does not need to be very
// accurate, but should be enough to avoid allocating too much memory/hitting capnp RPC message
// size limits when sending many tiny events.
const size_t tailSerializationOverhead = 64;
Should these limits be configurable rather than hardcoded? It would also help to have visibility into drop frequency to validate whether these values are appropriate.
Please see the internal PRs for the rationale behind the current limits and for why I think the current approach generally won't result in events being dropped in real-world use cases. We can increase them later if needed.
Add a warning log if this is ever hit, and/or coordinate with WOBS to expose a metric.
Please see the internal PRs for the rationale behind the current limits
I'd prefer to see some type of empirical evidence, but that works.
  tailStreamWriter->report(spanComponentContext, kj::mv(attr), span.startTime, spanTagsSize);
}
- tailStreamWriter->report(spanComponentContext, tracing::SpanClose(), span.endTime);
+ tailStreamWriter->report(spanComponentContext, tracing::SpanClose(), span.endTime, 0);
If the queue crosses the limit mid-span, SpanOpen could be delivered without its SpanClose, leaving an unclosed span in the trace.
My rationale here was that the tail producer may drop any event type except for Onset/Return/Outcome, and that the tail consumer is expected to handle any kind of event being omitted – it may choose to report spans as truncated or omit them. I think as user tracing evolves, it will be hard to provide guarantees that specific events are always present without having to worry about excessive queue size in edge cases. SpanClose itself might be easier to support since its size is fixed, but we could end up with the opposite problem too: A SpanClose could be delivered without its SpanOpen if that was dropped earlier, which would result in a span being reported for which only the endTime is known.
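As an illustration of the consumer-side expectation described above, here is a hypothetical sketch (plain std:: types, invented names, not the actual tail worker API) of bookkeeping that tolerates a dropped SpanOpen or SpanClose:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

struct SpanRecord {
  std::string name;
  bool truncated = false;
};

class TailConsumerSketch {
public:
  void onSpanOpen(uint64_t spanId, std::string name) {
    openSpans.emplace(spanId, SpanRecord{std::move(name)});
  }

  void onSpanClose(uint64_t spanId) {
    // If the matching SpanOpen was dropped earlier, there is nothing to close and
    // the event is simply ignored. (A real consumer would emit the completed span
    // here when a match exists.)
    openSpans.erase(spanId);
  }

  void onOutcome() {
    // Spans still open at this point had their SpanClose dropped; report them as
    // truncated rather than invalidating the whole trace.
    for (auto& entry: openSpans) {
      entry.second.truncated = true;
      // ... emit the truncated span to whatever sink the consumer uses ...
    }
    openSpans.clear();
  }

private:
  std::unordered_map<uint64_t, SpanRecord> openSpans;
};
```

Whether such spans are reported as truncated or omitted entirely is the consumer's choice, as noted above.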
  active->queueSize += tailSerializationOverhead + event.sizeHint;
} else {
  active->queue.push(event.clone());
  active->droppedEvents++;
Consider reporting which event types were dropped rather than just a count.
Does that scale if we keep adding more event types in the future? If we see a need for it, we can always add it later.
Seems pretty lightweight; I wouldn't be concerned.
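If per-type reporting were added later, a minimal sketch might look like the following; the event-type enum and names here are hypothetical, not the actual workerd types.

```cpp
#include <array>
#include <cstddef>

// Hypothetical event kinds; count drops per kind instead of a single total.
enum class DroppedKind : size_t { SpanOpen, SpanClose, Attribute, Log, Other, Count };

struct DropCounters {
  std::array<size_t, static_cast<size_t>(DroppedKind::Count)> perType{};

  void record(DroppedKind kind) {
    perType[static_cast<size_t>(kind)]++;
  }
};
```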
This will avoid excessive memory overhead in a few edge cases. Usually the tail worker will be able to keep up with reporting events, but we still need to put a limit on the queue size.
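To summarize the mechanism in one place, here is a self-contained sketch of the bounded queue; the constants are the ones from the diff above, while the structure and names are simplified assumptions rather than the actual TailStreamWriterState code.

```cpp
#include <cstddef>
#include <deque>

struct QueuedEvent {
  size_t sizeHint;  // approximate serialized size, in bytes
};

struct BoundedTailQueue {
  static constexpr size_t maxQueueSize = 2 * 1024 * 1024;
  static constexpr size_t tailSerializationOverhead = 64;

  std::deque<QueuedEvent> queue;
  size_t queueSize = 0;
  size_t droppedEvents = 0;

  void push(const QueuedEvent& event) {
    size_t estimate = tailSerializationOverhead + event.sizeHint;
    if (queueSize + estimate <= maxQueueSize) {
      queue.push_back(event);
      queueSize += estimate;
    } else {
      // Over budget: drop the event and remember how many were dropped so a
      // warning can be reported to the tail worker later.
      droppedEvents++;
    }
  }
};
```

Charging a fixed per-event overhead on top of the payload estimate keeps a flood of tiny events from exhausting memory or running into capnp RPC message size limits, even when the payloads themselves are negligible.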