New httpcheck package based on the Otel collector httpcheck receiver #14315
Conversation
Force-pushed from bb88189 to 32c8b33.
@@ -0,0 +1,20 @@
receivers:
We are going to need a way to include processors and compose pipelines. Possibly we'll want other component types as well.
Thinking about this a bit more, could we do something like the following:
- Instead of input.yml.hbs, we would functionally have pipeline.yml.hbs to define the integration pipeline in the collector.
- Each pipeline.yml.hbs defines a receiver, the processors and other components for it, and composes a pipeline with them in the expected order.
- The final exporter is omitted from the pipeline as it is controlled by Fleet.
- Fleet would terminate the pipeline in this file with the forward connector and connect it to the configured exporter.
- I say "final exporter" to leave room for using things like the routing connector to split into two pipelines in this configuration, if that's what someone needed to do.
The direction would be to move from templating just the receiver to being able to template a pipeline that is wired into the correct exporter by Fleet.
Edit: updated to suggest we should just omit the final pipeline exporter so that Fleet can control it. It will probably be the forward connector most of the time, but we don't want to lock ourselves into this.
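To make the idea concrete, here is a rough sketch of what such a pipeline.yml.hbs could look like. The `targets` and `collection_interval` variables and the resource processor are made up for illustration; the point is that the exporter is deliberately omitted so Fleet can terminate the pipeline:

```yaml
# Hypothetical pipeline.yml.hbs sketch, not part of the current package spec.
receivers:
  httpcheck:
    targets:
{{#each targets}}
      - endpoint: {{this}}
{{/each}}
    collection_interval: {{collection_interval}}

processors:
  # Example of processing shipped with the package.
  resource:
    attributes:
      - key: service.name
        value: httpcheck
        action: upsert

service:
  pipelines:
    metrics:
      receivers: [httpcheck]
      processors: [resource]
      # No exporters here: Fleet would terminate the pipeline, most likely
      # with the forward connector, and wire it to the configured exporter.
```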
I think this is more or less aligned with what we were thinking.
> Instead of input.yml.hbs, we would functionally have pipeline.yml.hbs to define the integration pipeline in the collector.
The name of the template is set by the developer in the manifest, so it can really have any name.
> Each pipeline.yml.hbs defines a receiver, the processors and other components for it, and composes a pipeline with them in the expected order.
Yes, the current files are expected to also contain processors, but for now we will focus on having a working receiver. Processing included in packages is not so relevant in the input use cases, but it will be in the integrations use cases.
In the integration receiver we also decided to allow a single receiver per pipeline; I think we will end up with something pretty similar here.
> The final exporter is omitted from the pipeline as it is controlled by Fleet.
Exactly, this is not expected to be defined in the input.
> Fleet would terminate the pipeline in this file with the forward connector and connect it to the configured exporter.
+1
I say "final exporter" to leave using things like the routing connector to split into two pipelines in a configuration here, if that's what someone needed to do.
We may have multiple pipelines per package in the integrations use case, at a later stage. For inputs I would generate single pipelines.
In addition to what is defined in the package, we will also need to support user-defined custom processing. We will need to allow the user to somehow define processors in the policy, and append them in the correct place in the final pipeline.
We do this in general now with the @custom pipelines, and in some packages with a YAML setting for agent-side processing.
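As a sketch of where user-defined processing could land, the final composed config might look something like the following. The component IDs, the user processor, and the Elasticsearch exporter are all made up here; this is not an actual Fleet output, just an illustration of the package pipeline, user processors appended after it, and the Fleet-controlled exporter behind the forward connector:

```yaml
receivers:
  httpcheck/pkg:                  # from the package template
    targets:
      - endpoint: https://example.com
    collection_interval: 1m
processors:
  resource/pkg:                   # processing shipped with the package
    attributes:
      - key: service.name
        value: httpcheck
        action: upsert
  attributes/user:                # user-defined processing from the policy
    actions:
      - key: environment
        value: production
        action: insert
connectors:
  forward:
exporters:
  elasticsearch/fleet:            # placeholder for the Fleet-controlled exporter
    endpoints: ["https://localhost:9200"]
service:
  pipelines:
    metrics/httpcheck:
      receivers: [httpcheck/pkg]
      # User processors are appended after the package-defined ones.
      processors: [resource/pkg, attributes/user]
      exporters: [forward]
    metrics/default:
      receivers: [forward]
      exporters: [elasticsearch/fleet]
```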
Thanks, we are thinking in the same direction.
> Processing included in packages is not so relevant in the input use cases, but it will be in the integrations use cases.
> In addition to what is defined in the package, we will also need to support user-defined custom processing. We will need to allow the user to somehow define processors in the policy, and append them in the correct place in the final pipeline.
IMO our goal is to enable an OTel equivalent of integrations, which is going to require pairing generic inputs and processors together to get a complete solution. There has to be a place for the processing to go in the first iteration of OTel packages/integrations we introduce, or they'll be viewed as incomplete by users.
We also know we want the processing to be able to move between the edge and managed OTLP service though, so perhaps we'll need a way to define them separately so they can be moved.
I like the idea of OTel "input" packages actually being "pipeline" packages that can define entire pipelines that terminate in a forward connector. This wouldn't give us a nice way to separate out processors though. The easiest solution I can think of is a "processors" package type, where we allow connectors+receivers in one and processors in the other, but that feels like a somewhat arbitrary division, and the other extreme is a package per component, which feels like too much.
Fleet could know to take the processors section of an input/package out and provide it to a separate API to put processing in the managed OTLP input.
There is no detailed design yet for service-side OTTL, so it is hard to predict what the right choice on the package side is. I do know we'll want processing on the edge to match current integration capabilities, and how to do that is a lot more obvious.
> We also know we want the processing to be able to move between the edge and managed OTLP service though, so perhaps we'll need a way to define them separately so they can be moved.
> Fleet could know to take the processors section of an input/package out and provide it to a separate API to put processing in the managed OTLP input.
I brought this up with @axw and it's not worth doing anything in the package spec to support moving processing around yet. It's too hard to know what the right choice is.
Let's direct the thinking into the best way to allow OTel integrations to include processors for processing that will happen on the edge.
Yes, I agree the goal is to have an equivalent to integration packages, but we considered that starting with the equivalent of input packages would help us progress, while giving something that is already useful. To support input packages we need to solve OTel collector config generation from policies, which will also be useful for integrations.
If we provide the same functionality as we have now for input packages, we will be able to use @custom pipelines for custom processing, though not on the edge.
Once we have input packages for OTel working at a similar level to our current input packages, we can elaborate on edge processing, and on full integrations.
> Let's direct the thinking into the best way to allow OTel integrations to include processors for processing that will happen on the edge.
Processors included in packages, and running on the edge, can be included in the template file, as we do now with beats-based inputs.
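For reference, the usual pattern in existing beats-based templates looks roughly like the snippet below. The `processors` variable name follows what many current packages expose in the policy; this is only an analogy for how an OTel pipeline template could embed edge processing in the same file:

```yaml
# Typical snippet from a beats-style .yml.hbs template: a user-supplied
# `processors` variable from the policy is rendered into the generated input.
{{#if processors}}
processors:
{{processors}}
{{/if}}
```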
- auto_configure
- create_doc
receivers:
  httpcheck/componentid:
Handling of ids in policy tests will need changes in elastic-package, something like elastic/elastic-package#2799.
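For context, the rendered policy pairs the receiver type with a unique component ID suffix (the `httpcheck/componentid` line above). A sketch with a made-up suffix and placeholder values:

```yaml
receivers:
  # "a1b2c3" stands in for whatever unique component ID ends up in the policy;
  # it presumably varies per policy, which is why policy tests need the
  # normalization support referenced in the elastic-package issue above.
  httpcheck/a1b2c3:
    targets:
      - endpoint: https://example.com
        method: GET
    collection_interval: 1m
```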
…227673) Closes #224472

## Summary
Introduce basic support for OTEL input integrations in Fleet.
- Using the test package in elastic/integrations#14315
- Resulting configuration based on work done in elastic/elastic-agent#5767

### Testing
- Compile the integration in elastic/integrations#14315 with elastic-package
- Add the feature flag `EnableOtelIntegrations` to `kibana.dev.yaml`
- Run local kibana
- Load the package registry locally or upload the generated integration to kibana
- Install `simple HTTP check` and view the full agent policy

**IMPORTANT**: to actually send the configuration to the agent, an additional change to fleet-server is also needed, which parses the policy and keeps only those fields that are declared inside an allowlist. PR: elastic/fleet-server#5169

### Generated policy
<img width="797" height="1339" alt="Screenshot 2025-07-18 at 10 14 07" src="https://github.com/user-attachments/assets/90026287-0889-46ed-b958-be2ffad93f50" />

### Checklist
- [ ] [Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html) was added for features that require explanation or tutorials
- [ ] [Unit or functional tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html) were updated or added to match the most common scenarios

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Force-pushed from 1c9ed6e to 71411e7.
…lastic#15229) * remove default_pipeline readme instruction * remove more default_pipeline references * update PR link
To mention the non-own tenant case and include a troubleshooting tip. --------- Co-authored-by: Dan Kortschak <dan.kortschak@elastic.co>
* Remove unused field mappings for the jvm dataset in Kafka package * Removed unused field mappings * Removed commented field mappings * Updated PR link
* Update queue.filled.pct.events to queue.filled.pct * add changelog * fix manifest.yml
…ian 13 (elastic#15142) * Use journald input by default when running system integration for Debian 13 * Fix description * Add link for journald input enhancement in changelog * Bump version from 2.5.4 to 2.6.0
…c#15162) The initial release includes device data stream, associated dashboards and visualizations. Island Browser fields are mapped to their corresponding ECS fields where possible. Test samples were derived from documentation and live data samples, which were subsequently sanitized.
… process.command_line (elastic#15226) * m365_defender: updated process.name ECS mapping in alert, event, and incident data streams to extract the process name from process.command_line instead of relying on file.name. * microsoft_defender_endpoint: updated process.name ECS mapping in log data stream to extract the process name from process.command_line. M365 Defender: * Alert – If process.name already exists, leave it as is. Otherwise, extract it from process.command_line(since process.executable is not available here). * Event – Some pipelines already contain logic to parse process.executable and process.name. The script to set process.name from command_line will only be used when either of these fields is missing. * Incident – Both process.name and process.executable are not available. Therefore, the script must be used to parse and populate process.name. Microsoft Defender Endpoint: * log - Both process.name and process.executable are not available. Therefore, the script must be used to parse and populate process.name.
* Update structure and links * Update changelog and manifest
…ic#15642) Test sample provided by user with sanitisation.
…lastic#15700) * refactor(transform): update unique keys for latest issues and bump version to 2.0.0 * refactor(transform): bump version to 2.17.0 and update changelog with bugfix details * chore(manifest): bump version to 2.18.0 * chore: update version to 2.17.1 and enhance changelog description for unique keys transformation
…lastic#15667) Add support for the oauth_endpoint_params configuration parameter for all available data streams. The log data stream still works under httpjson, so the option has been added at the data stream level along with all the OAuth2 options for this data stream. For the other data streams, which work under the CEL input, it has been added at the input level, so adding any value to this option will affect all data streams that rely on CEL (machine, machine_action, and vulnerability). Finally, the auth logic for the vulnerability data stream is implemented in the CEL program instead of delegated to the CEL auth options of the input, so the oauth endpoint params in this case are added manually in the program as well.
* Update codeowners for ess_billing
…ic#15647) * feat: expose sasl mechanism configuration in kafka_log package * update PR id in changelog * CI failure fix
…lastic#15701) * Document enabling auto-install for content packages * Update docs/extend/auto-install-content-packages.md Co-authored-by: Colleen McGinnis <colleen.mcginnis@elastic.co> --------- Co-authored-by: Colleen McGinnis <colleen.mcginnis@elastic.co>
…astic#15727) * refactor(transform): update unique keys for latest issues and bump version to 2.0.0 * refactor(transform): bump version to 2.17.0 and update changelog with bugfix details * chore(manifest): bump version to 2.18.0 * chore: update version to 2.17.1 and enhance changelog description for unique keys transformation * Remove updated_at field from latest transform unique key * changelog: add bugfix entry for removing updated_at field; update version to 2.17.2 in manifest
o365: add policy tests and benchmarks for integration quality checks
Improvements to the extraction of logfiles from the pleasant password server. Added new proprietary pps fields for password entry information, for example username in a password entry changed (not to be confused with the username of a person who initiated the change)
…ic#15401) * add content pack * fix pr id * add codeowners entry * fix codeowners * rename integration to aws_elb_otel use esql queries in dashboard * fix codeowners entry * update logo * update dashboard title fix field names * update docs * add dashboard datastream filter * fix dashboard filter * address comments * use bytes for y axis * Update packages/aws_elb_otel/changelog.yml Co-authored-by: Mykola Kmet <mykola.kmet@elastic.co> * update dashboard * update dashboard * remove content pack from title * Update packages/aws_elb_otel/manifest.yml Co-authored-by: Mykola Kmet <mykola.kmet@elastic.co> * update data stream filter * fix datastream filters * fix datastream filter * Update packages/aws_elb_otel/docs/README.md Co-authored-by: Michalis Katsoulis <michaelkatsoulis88@gmail.com> * remove datastream filter at lens lvl --------- Co-authored-by: Mykola Kmet <mykola.kmet@elastic.co> Co-authored-by: Michalis Katsoulis <michaelkatsoulis88@gmail.com>
elastic#15616) Set event.category to "process" and event.type to "start" when certain process fields are present in the alerts dataset. Previously, alerts were being sent without setting the event.category and event.type fields.
* add kafka 4.0.0 test variant * kafka 3.6.0 variant * add more variants * cover kafka 0.10.2.1 and kafka 1.1.0 in system test variants * yaml format fix * yaml format fix * yaml format fix * optimize docker image building
* Remove beta note * Update changelog and manifest
Add alerting rule templates to the Elastic Agent package: * CPU usage spike * Excessive memory usage * High pipeline queue * Dropped events * Output errors * Excessive restarts * Unhealthy status
…otel-collector' into httpcheck-otel-collector
💔 Build Failed
This package implements a Docker Stats input using the OpenTelemetry Collector's dockerstats receiver, following the pattern established in PR elastic#14315. Key features:
- Type: integration with otelcol input (not content package)
- Configurable collection interval, endpoint, and filtering
- Comprehensive field definitions for container metrics
- Full documentation and test policy

Resolves: elastic#15731
Co-authored-by: William Easton <strawgate@users.noreply.github.com>
Sorry for the noise! Closing, I will open another one.






Related to elastic/kibana#224472.
This is an experiment about creating input packages that contain OTel collector configuration.
It uses what we have now, with some remarks:
- `input: otelcol`. If this is enough to indicate to Fleet that it should generate OTel collector configuration for the Hybrid agent, we won't need changes in the package spec, but we may need to require a new spec version.
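For illustration, the relevant manifest piece could look like the sketch below. The field layout follows existing input packages, the variable is made up, and only the `input: otelcol` value is the new piece under discussion:

```yaml
policy_templates:
  - name: httpcheck
    type: metrics
    title: Simple HTTP check
    description: Check HTTP endpoints with the OTel collector httpcheck receiver.
    input: otelcol              # hints to Fleet that this template renders OTel collector config
    template_path: input.yml.hbs
    vars:
      - name: targets
        type: text
        title: Targets
        multi: true
        required: true
```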