Create a skeleton for the 'blobuploadprocessor'. #35966
base: main
Conversation
… the initial mailing.
Pinging @dashpole as the sponsor for this component
Ack. My availability is limited until after KubeCon, but I will try to get to this when I can.
scope_name: otelcol/blobuploadconnector
status:
  class: connector
connectors are generally only used when we need to convert between different telemetry types (e.g. traces to logs). Consider making this a processor unless we have use-cases where we need to convert between the signal types.
Thanks for the review, @dashpole.
My initial approach was to use a processor, but I ran into issues around data ownership (because I wanted to modify the data). Additionally, since this component doesn't do pure computation but also performs I/O to upload the data, it wasn't clear that a processor was the right fit. @jsuereth suggested a connector as an approach to consider.
You can see some of the relevant history in #33737; in particular, #33737 (comment) from July 12th shows where I was working on it as a processor (in a development branch named blob_writer_span_processor) before switching to the connector approach.
Is it OK with you if we proceed with the connector solution? [It sounds like this was optional rather than mandatory feedback, but I wanted to confirm.]
Processors can modify data (just make sure to set the "mutates data" capability to true). If you do not have a cross-signal use case then this needs to be a processor. This is blocking feedback.
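The capability the reviewer mentions is advertised by a processor through its Capabilities() method. The sketch below illustrates the pattern using a minimal local mirror of the collector's consumer.Capabilities type (the real type lives in go.opentelemetry.io/collector/consumer); the processor type and its name here are hypothetical, not the actual skeleton's code:

```go
package main

import "fmt"

// Capabilities is a local stand-in mirroring the shape of the collector's
// consumer.Capabilities struct, shown only to illustrate the pattern.
type Capabilities struct {
	// MutatesData must be true when the processor modifies the telemetry
	// it receives, so the pipeline hands it a safe-to-mutate copy.
	MutatesData bool
}

// blobUploadProcessor is a hypothetical processor type for illustration.
type blobUploadProcessor struct{}

// Capabilities advertises that this processor mutates incoming data.
func (p *blobUploadProcessor) Capabilities() Capabilities {
	return Capabilities{MutatesData: true}
}

func main() {
	p := &blobUploadProcessor{}
	fmt.Println(p.Capabilities().MutatesData) // prints "true"
}
```

With MutatesData set to true, the collector knows it cannot fan the same data out to this processor and others concurrently without cloning.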
Thanks. I've updated the code to use a processor.
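After this switch, the metadata.yaml reviewed above would presumably change its class accordingly. A hypothetical sketch of the updated fragment follows; the type name, scope_name value, and stability levels are assumptions, not the final values in the PR:

```yaml
# Hypothetical sketch of metadata.yaml after the switch to a processor.
type: blobupload
scope_name: otelcol/blobuploadprocessor
status:
  class: processor
  stability:
    development: [traces, logs]
```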
Description
Create a skeleton for the blobuploadprocessor component. A more fleshed out implementation exists in:
- blobuploadconnector - span + span events implementation
- blobuploadconnector_logs - start of an implementation for logs, branching from the span + span events one
However, the size of the entire thing is a bit unwieldy and likely to become a barrier to an effective code review.
In order to make progress upstreaming this and to ensure a reasonable implementation path, I'm now breaking out small pieces of the implementation into separate, more focused PRs. In the process, I'm also attempting to simplify the code, improve testing, etc. relative to the reference proof-of-concept.
Link to tracking issue
#33737 : "New component: Blob Upload Processor"
Testing
Contains a "smoke test" in package_test.go that validates the basic contract for the component and ensures it does not crash. Also contains a few other unit tests and placeholders for additional unit tests.
As actual implementation pieces land in subsequent PRs, we will add additional corresponding tests.
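The basic shape of such a smoke test can be sketched as follows; the factory and Config types here are hypothetical stand-ins for the skeleton's actual factory code, not the real package_test.go:

```go
package main

import "fmt"

// Config is a hypothetical placeholder for the component's configuration.
type Config struct{}

// factory is a hypothetical stand-in mirroring the shape of a collector
// component factory.
type factory struct{}

// CreateDefaultConfig returns the default configuration for the component.
func (f *factory) CreateDefaultConfig() *Config {
	return &Config{}
}

func main() {
	// Smoke check: the factory yields a non-nil default config and
	// constructing it does not panic.
	f := &factory{}
	if f.CreateDefaultConfig() == nil {
		panic("expected a non-nil default config")
	}
	fmt.Println("smoke test passed")
}
```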
Documentation
Contains a README.md that explains the purpose of this component and its intended trajectory. Also contains a metadata.yaml providing structured documentation for the component. We will update the README.md as additional implementation lands to keep it up-to-date with the actual state.