[experimental] add simplified bsp and grpc exporter #3465

Closed · wants to merge 7 commits

Conversation

@pmcollins (Member) commented on Oct 6, 2023

While looking through the BatchSpanProcessor and OtlpSpanExporter, as well as implementing JSON over HTTP, I was curious to see what these classes might look like if we rethought their design with a view toward making them smaller and easier to understand.

To get feedback from this SIG about whether this direction looks promising, I'm creating a draft PR to discuss.

```python
def on_end(self, span: ReadableSpan) -> None:
    """
    This method must be extremely fast. It adds the span to the accumulator
    for later sending and pokes the timer if the number of spans waiting to
    be sent has reached the maximum batch size.
    """
    size = self._accumulator.push(span)
```
Member:

This size could be wrong around the time that the batch fills up if another thread flushes first. Is that intentional, like a best effort?

Member Author:

Good question -- with this implementation, if you push a lot of spans through simultaneously from multiple threads, exports may contain batches with somewhat more or fewer spans than max_batch_size. I figure this is okay at this stage of development.

But we may end up wanting to do something like: lock, append the span, test for length, generate something like a flush request, then release the lock. This is a fair amount of complexity though, which I'm trying to avoid. I'm hoping we can arrive at a solution that's maximally simple (suggestions always welcome).
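
A minimal sketch of that lock-protected approach, assuming a hypothetical `Accumulator` class and a `threading.Event` as the "flush request" signal (names are illustrative, not this PR's actual code):

```python
import threading
from typing import Any, List


class Accumulator:
    """Hypothetical sketch: push under a lock so the append and the
    length check cannot be interleaved with another thread's flush."""

    def __init__(self, max_batch_size: int, flush_requested: threading.Event):
        self._lock = threading.Lock()
        self._spans: List[Any] = []
        self._max_batch_size = max_batch_size
        self._flush_requested = flush_requested

    def push(self, span: Any) -> None:
        with self._lock:
            self._spans.append(span)
            # "Generate a flush request" while still holding the lock.
            if len(self._spans) >= self._max_batch_size:
                self._flush_requested.set()
```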

Member Author:

I have updated the Accumulator to do batching, so we should have uniform batch sizes now.
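
For illustration, an accumulator that does its own batching might atomically drain at most max_batch_size spans per call, along these lines (a hedged sketch; the actual Accumulator in this PR may differ):

```python
import threading
from typing import Any, List


class BatchingAccumulator:
    """Hypothetical sketch of an accumulator that yields uniform batches."""

    def __init__(self, max_batch_size: int):
        self._lock = threading.Lock()
        self._spans: List[Any] = []
        self._max_batch_size = max_batch_size

    def push(self, span: Any) -> int:
        with self._lock:
            self._spans.append(span)
            return len(self._spans)

    def batch(self) -> List[Any]:
        """Atomically remove and return up to max_batch_size spans."""
        with self._lock:
            taken = self._spans[: self._max_batch_size]
            del self._spans[: len(taken)]
            return taken
```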

```python
def _export(self) -> SpanExportResult:
    batch = self._accumulator.batch()
    if len(batch) > 0:
        out = self._exporter.export(batch)
```
Member:

Not sure how we'd do it, but this still has the issue that there is no way to interrupt the exporter if it is in a backoff loop.

Member Author:

Yeah, we should figure out how we want this to behave. I'm thinking the sleep during backoff should be interruptible (which can be done with an Event object), but the actual sending should not be (within some reasonable timeout).
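
One way to make only the backoff sleep interruptible is to wait on a `threading.Event` instead of calling `time.sleep`; a hedged sketch follows (the retry policy and the `_send` helper are assumptions for illustration, not this PR's code):

```python
import threading
from typing import Sequence


class RetryingSender:
    """Hypothetical sketch: Event.wait makes the backoff sleep interruptible,
    while the send itself runs to completion (bounded by its own timeout)."""

    def __init__(self) -> None:
        self._shutdown = threading.Event()

    def send_with_backoff(self, batch: Sequence, max_attempts: int = 5) -> bool:
        delay = 1.0
        for _ in range(max_attempts):
            if self._send(batch):  # the send itself is not interrupted
                return True
            # wait() returns True immediately if shutdown() was called,
            # so a long backoff never blocks shutdown.
            if self._shutdown.wait(timeout=delay):
                return False
            delay *= 2  # exponential backoff
        return False

    def shutdown(self) -> None:
        self._shutdown.set()

    def _send(self, batch: Sequence) -> bool:
        raise NotImplementedError  # e.g., a gRPC export call with a timeout
```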

@pmcollins (Member Author) commented:

Spring cleaning. Hope to pick this up again later.

@pmcollins closed this on May 22, 2024