merge from upstream #1
Merged
* Add observer notification interface (k8s observer will be in separate PR)
* Refactor receiver_creator to be more easily testable and organized
  * receiver.go mostly implements the OT interface and delegates to the new files
  * observerhandler.go responds to observer events and manages the starting/stopping of receivers
  * rules.go implements rules evaluation (not currently implemented)
  * runner.go contains a runner interface that handles the details of how to start and stop a receiver instance that the observer handler wants to start/stop
* Implement basic add/remove/change response in receiver_creator to observer events
This utilizes `host.GetExtensions` to find observers that have been configured using `watch_observers`; receiver_creator can be configured to watch zero or more observers. To support this, the receiver templates have been moved under the "receivers" key so that subreceiver config keys can be differentiated from the receiver_creator's own config keys.
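For illustration only, here is a minimal sketch of what a configuration along these lines could look like. The `k8s_observer` extension name, the `redis/on_pod` subreceiver, and its config values are hypothetical placeholders and not taken from this PR:

```yaml
extensions:
  # An observer extension configured elsewhere; referenced by name below (placeholder).
  k8s_observer:

receivers:
  receiver_creator:
    # Zero or more observers to watch; these are looked up via host.GetExtensions.
    watch_observers: [k8s_observer]
    # Subreceiver templates live under the "receivers" key so their config keys
    # don't collide with receiver_creator's own keys.
    receivers:
      redis/on_pod:
        config:
          password: example-password   # illustrative subreceiver config
```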
Since the k8s processor prefixes all labels with `k8s.*.`, this adds the same prefix to the IP label. We'll still continue to look for the `ip` label on the resource/node when we can't find the IP by other means, but will only write the IP back to `k8s.pod.ip`.
Also fixes the SAPM exporter, which was reporting an incorrect dropped spans metric.
…eam (#193) Use new jaegertranslator.ProtoBatchesToInternalTraces function to convert multiple Jaeger proto batches at once
### Description
Add `k8s_cluster` receiver. This receiver monitors resources in a cluster and collects metrics and metadata to correlate between resources. This receiver watches for changes using the K8s API.

##### Key Data Structures
- `kubernetesReceiver`: Sends metrics along the pipeline.
- `resourceWatcher`: Handles events from the K8s API, collecting metrics and metadata from the cluster. This struct uses an `informers.SharedInformerFactory` to watch for events.
- `dataCollector`: Handles collection of data from Kubernetes resources. The collector has a `metricsStore` to keep track of the latest metrics representing the cluster state and a `metadataStore` (a wrapper around `cache.Store`) to track the latest metadata from the cluster.

##### Workflow
- **resourceWatcher setup** - Set up the SharedInformerFactory and add event handlers to the informers. The `onAdd`, `onUpdate` and `onDelete` methods on the `resourceWatcher` handle resource creations, updates and deletions.
- **Event Handling**
  - _Add/Update_: On receiving an event corresponding to a resource creation or update, the latest metrics are collected by the `syncMetrics` method on `dataCollector`. The collected metrics are cached in `metricsStore`. Methods responsible for collecting data from each supported Kubernetes resource type are prefixed with `getMetricsFor`. For example, `getMetricsForPod` collects metrics from a Pod.
  - _Delete_: On deletion of a resource, the cached entry is removed from `metricsStore`.
  - Note that only metric collection is turned on right now. The metadata collection code is currently inactive (it is controlled by the `collectMedata` field).
- **Metric Syncing**: Metrics from the `metricsStore` are sent along the pipeline once every `collection_interval` seconds.
- **Metadata Syncing**: TODO (the metadata collection code is inactive).

### Testing
- Unit tests for each resource type
- Integration test for the receiver
- Manual testing with the SignalFx exporter

### Documentation
https://github.com/signalfx/opentelemetry-collector-contrib/blob/k8s-cluster/receiver/kubernetesclusterreceiver/README.md
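As a rough sketch of how this receiver might be wired into a metrics pipeline, assuming defaults and a placeholder exporter (only `collection_interval` comes from the description above; the 10s value and the `logging` exporter are illustrative):

```yaml
receivers:
  k8s_cluster:
    # Metrics cached in the metricsStore are sent down the pipeline
    # once per collection interval (value here is illustrative).
    collection_interval: 10s

exporters:
  logging:   # placeholder exporter, for illustration only

service:
  pipelines:
    metrics:
      receivers: [k8s_cluster]
      exporters: [logging]
```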
This was left because k8sclusterreceiver was merged after the update.
Current performance with compression:

```
Test                                     |Result|Duration|CPU Avg%|CPU Max%|RAM Avg MiB|RAM Max MiB|Sent Items|Received Items|
----------------------------------------|------|-------:|-------:|-------:|----------:|----------:|---------:|-------------:|
Trace10kSPS/SAPM                         |PASS  |     16s|    23.3|    25.0|         53|         66|    142300|        142300|
```

Current performance without compression:

```
Test                                     |Result|Duration|CPU Avg%|CPU Max%|RAM Avg MiB|RAM Max MiB|Sent Items|Received Items|
----------------------------------------|------|-------:|-------:|-------:|----------:|----------:|---------:|-------------:|
Trace10kSPS/SAPM                         |PASS  |     15s|    20.1|    20.7|         53|         65|    145700|        145700|
```
We previously disabled compression when sending from testbed to Collector. This also disables compression when sending from Collector to testbed.
### Description
The `prometheus_simple` receiver is a wrapper around the [prometheus receiver](https://github.com/open-telemetry/opentelemetry-collector/tree/master/receiver/prometheusreceiver). This receiver provides a simple configuration interface to configure the prometheus receiver to scrape metrics from a single target. This receiver gives the `receiver_creator` a simplified configuration interface for collecting prometheus metrics.

An example config:

```yaml
receivers:
  prometheus_simple:
    endpoint: "172.17.0.5:9153"
```

The receiver can also be configured to use a Pod service account when running in a Kubernetes environment using `use_service_account`.

### Testing
- Unit tests
- Manual testing with the SignalFx exporter

### Documentation
https://github.com/signalfx/opentelemetry-collector-contrib/blob/simple-prometheus/receiver/simpleprometheusreceiver/README.md
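For the Kubernetes case mentioned above, a configuration sketch might look like the following. Only `endpoint` and `use_service_account` come from the description; the values are illustrative:

```yaml
receivers:
  prometheus_simple:
    # Single scrape target; the address is illustrative.
    endpoint: "172.17.0.5:9153"
    # Authenticate with the Pod's service account when running inside Kubernetes.
    use_service_account: true
```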
* Add k8s observer

  This adds a k8s observer as well as reworking some of the observer data structures. Work is still ongoing to add an end-to-end test to validate it against a real k8s instance and dynamically start a real receiver.

  Before, Endpoint was an interface with methods like `ID()`, `Target()`, etc., but these had a smell because they were acting as simple data getters for struct values. This changes Endpoint to be a struct itself that has members like `ID` and `Target`. It has a member `Details` that can currently be either a Pod (i.e. it just has an IP address) or a Port (it has a pod IP but also an associated port). Currently this `Details` is an `interface{}`, as there are no methods on which to base an interface. This is basically due to a limitation in Go, which doesn't have true class inheritance. It is no worse than before, since when `Endpoint` was an interface the consumer would have to type switch to check what concrete type it was (pod, port, etc.). I have tried structuring it many ways and this is the least bad I've found thus far.

* fix go.mod
* remove e2e test that is not ready
* Apply suggestions from code review

  Co-Authored-By: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>

* review updates
* review updates
* rename package
* cleanup deps
* fix type changes
* fix linting
* fix build

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
Jaeger process tags recently went from being represented as node attributes to now being represented internally as resource labels. The honeycomb exporter now looks for resource labels and adds them as fields to the resulting span / event.
* Fix redis crash

  The Redis receiver was crashing on stop because the receiver was not a pointer, so Shutdown was just being passed zero values. I looked through to see if any others are missing a pointer and found just one other. It would be good to have another set of eyes look through.

* review updates
* make factory take ptr receiver (doesn't matter, but for future-proofing)
* rename receiver_factory.go to factory.go for consistency
dmitryax is already an approver on the Collector core: open-telemetry/opentelemetry-collector#895. This ensures he has approving permission in this repo too.
…ync (#206)

This changes the way we add new approvers: we will file an issue and, if it gets enough "+1"s, we close the issue and add the person to the README and the team.
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
In preparing for a demo I noticed that some metric types were wrong (might have fallen through the cracks when I was dealing with merge issues on the Redis PR). Fixed those and added an optional server_name config value that will get applied as a Resource label, if present, for distinguishing between multiple Redis servers.
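A sketch of how the optional `server_name` value described above might be set; only `server_name` comes from this commit message, and the other keys and values are illustrative assumptions:

```yaml
receivers:
  redis:
    endpoint: "localhost:6379"   # illustrative endpoint
    collection_interval: 10s     # illustrative interval
    # Optional: applied as a Resource label to distinguish multiple Redis servers.
    server_name: redis-cache-01
```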
The Collector panicked when it tried to export a span whose attributes were nil.

Testing: added a unit test with attributes set to nil.
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
This adds rules support, with some limitations noted in TODOs. This allows dynamic discovery to work end to end with k8s pods and container ports. Will work on adding documentation, but rules look like:

```
type.port && name == "http" && pod.labels["region"] == "west-2"
```

An alternative for the type check is to do `type == types.Port`, or maybe some other way. The first is shorter but the second is a bit more obvious. Thoughts welcome.
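To show the rule above in context, here is a hypothetical receiver_creator snippet embedding such a rule; the observer name, the subreceiver name, and the nesting of the `rule` key are illustrative assumptions, not confirmed by this commit:

```yaml
receivers:
  receiver_creator:
    watch_observers: [k8s_observer]   # observer name is a placeholder
    receivers:
      prometheus_simple/discovered:
        # Start this subreceiver for discovered container ports named "http"
        # on pods labelled region=west-2.
        rule: 'type.port && name == "http" && pod.labels["region"] == "west-2"'
```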
LINT defaults to [lint in make](https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html) so it was still running the old go program called `lint`. However this program isn't included in install-modules anymore. Set LINT in common so all includers get it.
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
This commit allows extracting the k8s Pod UID and setting it as the "k8s.pod.uid" attribute, per OpenTelemetry conventions.
The jaeger legacy and kubeletstats receivers were not updated to the latest change in the ReceiverFactoryOld interface open-telemetry/opentelemetry-collector@2f6d603
For pods that have just been created, some stats can be absent, and this crashed the otel agent with a nil pointer exception.
* Add SignalFx demo configuration
  * collector.yaml
  * k8s.yaml
* Update signalfx-k8s.yaml
* Enable Zipkin for Istio Mixer Adapter
* Update examples/signalfx/signalfx-collector.yaml

  Co-authored-by: Paulo Janotti <pjanotti@splunk.com>

* Update examples/signalfx/signalfx-k8s.yaml

  Co-authored-by: Paulo Janotti <pjanotti@splunk.com>

* Move to exporter directory

Co-authored-by: Paulo Janotti <pjanotti@splunk.com>
Publish builds binaries for all supported platforms with the cross-compile job. Running the build job is redundant for this workflow, as the amd64 Linux binary is overwritten by cross-compile anyway. Also, this can cause a CI job to fail if build and cross-compile both persist the same files to the CI workspace. However, using cross-compile for the publish workflow and only build for the regular PR workflow is a bit awkward and adds unnecessary complexity to the CI workflow definition. It is far simpler to just replace build with cross-compile. This means all PR builds will attempt to build binaries for all supported platforms, even if we load test only linux/amd64 right now. I think this is a good outcome anyway, as PR CI jobs can now catch issues that prevent the collector from building for any of the supported architectures. This also paves the way to enabling functional/integration/load testing for all platforms and architectures.
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
…ce & added build step for generating Windows MSI (#408)
* Add logging on errors in Runnable

  The kubeletstats receiver was missing log statements on important errors.

* Fix self-reported metrics
* Fix obsreport calls
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
The same load test scenarios have lower CPU limits in the contrib repo than are set in core. This commit adjusts the limits to be close to what's defined in core.
mxiamxia pushed a commit that referenced this pull request on Oct 7, 2020
* Add DataDog exporter back from old fork

  (All of the embedded commits below are authored by Pablo Baeyens <pablo.baeyens@datadoghq.com>.)

  - commit 99129fb96e29e9c1a92da00b7e3f8efcae8a31e8 (Thu Sep 3 18:10:28 2020 +0200): Handle namespace at initialization time
  - commit babca25927926a60c0c416294af3aadf784d41b9 (Thu Sep 3 17:23:53 2020 +0200): Initialize on a separate function. This way the variables can be checked without worrying about the env
  - commit 24d0cb4cc566fa5313a8650c904a27bea68bf555 (Thu Sep 3 14:30:35 2020 +0200): Check environment variables for unified service tagging
  - commit 6695f8297ab8b1fcae71b05acb027c4a0992e3a0 (Wed Sep 2 14:57:37 2020 +0200): Add support for sending metrics through the API. Use datadog.Metric type for simplicity; get host if unset
  - commit c366603 (Wed Sep 2 09:56:24 2020 +0200): Disable Queue and Retry settings (#72). These are handled by the statsd package. OpenTelemetry docs are confusing and the default configuration (disabled) is different from the one returned by "GetDefault..." functions
  - commit a660b56 (Tue Sep 1 15:26:14 2020 +0200): Add support for summary and distribution metric types (#65)
    * Add support for summary metric type
    * Add support for distribution metrics
    * Refactor metrics construction: drop name in Metrics (now they act as Metric values); refactor constructor so that errors happen at compile-time
    * Report Summary total sum and count values (snapshot values are not filled in by OpenTelemetry)
    * Report p00 and p100 as `.min` and `.max`. This is more similar to what we do for our own non-additive type
    * Keep hostname if it has not been overridden
  - commit c95adc4 (Thu Aug 27 13:00:02 2020 +0200): Update dependencies and `make gofmt`. The collector was updated to 0.9.0 upstream
  - commit 20afb0e (Wed Aug 26 18:25:49 2020 +0200): Refactor configuration (#45)
    * Refactor configuration
    * Implement telemetry and tags configuration handling
    * Update example configuration and README file

    Co-authored-by: Kylian Serrania <kylian.serrania@datadoghq.com>
  - commit fdc98b5 (Fri Aug 21 11:09:08 2020 +0200): Initial DogStatsD implementation (#15). Initial metrics exporter through DogStatsD with support for all metric types but summary and distribution
  - commit e848a60 (Fri Aug 21 10:42:45 2020 +0200): Bump collector version
  - commit 58be9a4 (Thu Aug 6 10:04:32 2020 +0200): Address linter
  - commit 695430c (Tue Aug 4 13:28:01 2020 +0200): Fix field name error. MetricsEndpoint was renamed to MetricsURL
  - commit 168b319 (Mon Aug 3 11:05:01 2020 +0200): Create initial outline for Datadog exporter (#1)
    * Add support for basic configuration options
    * Documents configuration options
    * go mod tidy

* Address feedback from upstream PR we did not merge (#1)
  * Backport changes from upstream PR: remove `err` from MapMetrics
  * Remove usage of pdatautil
  * Fix tests
  * Use TCPAddr
  * Review which functions should be private
* Remove DogStatsD mode (#2)
  * Remove DogStatsD mode
  * go mod tidy
  * Remove mentions to DogStatSD
* Improve test coverage (#3)
  * Improve test coverage: added unit tests for API key censoring, hostname, and the metrics exporter; renamed test and implementation files for consistency
  * Add one additional test
* Remove client validation (#6). The zorkian API does not validate the API key unless you also have an application key, even though the endpoint works without it. I am removing this validation until this gets fixed in the zorkian library
* Keep only configuration and factory methods. Following the contribution guidelines we need to make a first PR with this
* Use latest version of collector
* Remove `report_percentiles` option. It is not being used as of now, until the OTLP metrics format stabilizes and we have a Summary type metric again
* Correct configuration. The API key is now a required setting
* Remove test not relevant for this PR
* Remove unnecessary imports after removing test
* Address review comment
* Apply suggestions from code review

  Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>

* Separate documentation into two examples: one with the minimal configuration, for sending to `datadoghq.com`, and a second one for sending to `datadoghq.eu`

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
mxiamxia pushed a commit that referenced this pull request on Jan 26, 2021
* Restructure buildCWMetric logic (#1)
  * Restructure code to remove duplicated logic
  * Update format
  * Improve function and variable names
  * Extract logic for dimension creation and add test
  * Implement minor fixes
  * Remove changes to go.sum
* Implement tests for getCWMetrics
* Implement tests for buildCWMetric
* Format metric_translator_test.go
* Run with gofmt -s
* Disregard ordering of dimensions in test case
* Perform dimension equality checking as a helper function
mxiamxia pushed a commit that referenced this pull request on Oct 2, 2023
…emetry#24676)

**Description:** The metadata.yml for the SSH check receiver currently documents a resource attribute containing the SSH endpoint, but this is not emitted. This PR updates the receiver to include this resource attribute.

**Link to tracking Issue:** open-telemetry#24441

**Testing:** Example collector config:

```yaml
receivers:
  sshcheck:
    endpoint: 13.245.150.131:22
    username: ec2-user
    key_file: /Users/dewald.dejager/.ssh/sandbox.pem
    collection_interval: 15s
    known_hosts: /Users/dewald.dejager/.ssh/known_hosts
    ignore_host_key: false
    resource_attributes:
      "ssh.endpoint":
        enabled: true

exporters:
  logging:
    verbosity: detailed
  prometheus:
    endpoint: 0.0.0.0:8081
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [sshcheck]
      exporters: [logging, prometheus]
```

The log output looks like this:

```
2023-07-30T16:52:38.724+0200 info MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 2, "data points": 2}
2023-07-30T16:52:38.724+0200 info ResourceMetrics #0
Resource SchemaURL:
Resource attributes:
     -> ssh.endpoint: Str(13.245.150.131:22)
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope otelcol/sshcheckreceiver 0.82.0-dev
Metric #0
Descriptor:
     -> Name: sshcheck.duration
     -> Description: Measures the duration of SSH connection.
     -> Unit: ms
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 319
Metric #1
Descriptor:
     -> Name: sshcheck.status
     -> Description: 1 if the SSH client successfully connected, otherwise 0.
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 1
```

And the Prometheus metrics look like this:

```
# HELP sshcheck_duration Measures the duration of SSH connection.
# TYPE sshcheck_duration gauge
sshcheck_duration{ssh_endpoint="13.245.150.131:22"} 311
# HELP sshcheck_status 1 if the SSH client successfully connected, otherwise 0.
# TYPE sshcheck_status gauge
sshcheck_status{ssh_endpoint="13.245.150.131:22"} 1
```
mxiamxia pushed a commit that referenced this pull request on Oct 2, 2023
**Description:** Adding command line argument `--status-code` to `telemetrygen traces`, which accepts `(Unset,Error,Ok)` (case sensitive) or the enum equivalent of `(0,1,2)`.

Running

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code 1
```

against a minimal local collector yields

```txt
2023-07-29T21:27:57.862+0100 info ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID       : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID      : 4e2cd6e0e90cf2ea
    ID             : 20835413e32d26a5
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-07-29 20:27:57.861602 +0000 UTC
    End time       : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code    : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID      :
    ID             : 4e2cd6e0e90cf2ea
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-07-29 20:27:57.861584 +0000 UTC
    End time       : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code    : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

and similarly (the string version)

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code '"Ok"'
```

produces

```txt
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID       : dfd830da170acfe567b12f87685d7917
    Parent ID      : 8e15b390dc6a1ccc
    ID             : 165c300130532072
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-07-29 20:29:16.026965 +0000 UTC
    End time       : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code    : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : dfd830da170acfe567b12f87685d7917
    Parent ID      :
    ID             : 8e15b390dc6a1ccc
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-07-29 20:29:16.026956 +0000 UTC
    End time       : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code    : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

The default is `Unset`, which is the current behaviour.

**Link to tracking Issue:** 24286

**Testing:** Added unit tests which cover both valid and invalid inputs.

**Documentation:** Command line arguments are self-documenting via the usage info in the flag.

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
mxiamxia pushed a commit that referenced this pull request on Oct 2, 2023
Co-authored-by: matianjun1 <mtj334510983@163.com>
mxiamxia pushed a commit that referenced this pull request on May 25, 2024
**Description:** This PR implements the new container logs parser as it was proposed at open-telemetry#31959.

**Link to tracking Issue:** open-telemetry#31959

**Testing:** Added unit tests. Providing manual testing steps as well:

### How to test this manually

1. Using the following config file:

```yaml
receivers:
  filelog:
    start_at: end
    include_file_name: false
    include_file_path: true
    include:
      - /var/log/pods/*/*/*.log
    operators:
      - id: container-parser
        type: container
        output: m1
      - type: move
        id: m1
        from: attributes.k8s.pod.name
        to: attributes.val
      - id: some
        type: add
        field: attributes.key2.key_in
        value: val2

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]
      processors: []
```

2. Start the collector: `./bin/otelcontribcol_linux_amd64 --config ~/otelcol/container_parser/config.yaml`
3. Use the following bash script to create some logs:

```bash
#! /bin/bash

echo '2024-04-13T07:59:37.505201169-05:00 stdout P This is a very very long crio line th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '{"log":"INFO: log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2024-04-13T07:59:37.505201169-05:00 stdout F at is awesome! crio is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout P some containerd log th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
echo '{"log":"INFO: another log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F at is super awesome! Containerd is awesome' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
echo '2024-04-13T07:59:37.505201169-05:00 stdout F standalone crio line which is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F standalone containerd line that is super awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
```

4. Run the above as a bash script to verify any parallel processing. Verify that the output is correct.

### Test manually on k8s

1. `make docker-otelcontribcol && docker tag otelcontribcol otelcontribcol-dev:0.0.1 && kind load docker-image otelcontribcol-dev:0.0.1`
2. Install using the following helm values file:

```yaml
mode: daemonset
presets:
  logsCollection:
    enabled: true
image:
  repository: otelcontribcol-dev
  tag: "0.0.1"
  pullPolicy: IfNotPresent
command:
  name: otelcontribcol
config:
  exporters:
    debug:
      verbosity: detailed
  receivers:
    filelog:
      start_at: end
      include_file_name: false
      include_file_path: true
      exclude:
        - /var/log/pods/default_daemonset-opentelemetry-collector*_*/opentelemetry-collector/*.log
      include:
        - /var/log/pods/*/*/*.log
      operators:
        - id: container-parser
          type: container
          output: some
        - id: some
          type: add
          field: attributes.key2.key_in
          value: val2
  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [batch]
        exporters: [debug]
```

3. Check the collector's output to verify the logs are parsed properly:

```console
2024-05-10T07:52:02.307Z info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 2}
2024-05-10T07:52:02.307Z info ResourceLog #0
Resource SchemaURL:
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 2024-05-10 07:52:02.046236071 +0000 UTC
Timestamp: 2024-05-10 07:52:01.92533954 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:01)
Attributes:
     -> log: Map({"iostream":"stdout"})
     -> time: Str(2024-05-10T07:52:01.92533954Z)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
Trace ID:
Span ID:
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-05-10 07:52:02.046411602 +0000 UTC
Timestamp: 2024-05-10 07:52:02.027386192 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:02)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
     -> time: Str(2024-05-10T07:52:02.027386192Z)
     -> log: Map({"iostream":"stdout"})
     -> logtag: Str(F)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> key2: Map({"key_in":"val2"})
Trace ID:
Span ID:
Flags: 0
...
```

**Documentation:** Added

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>
mxiamxia added a commit that referenced this pull request on Jun 10, 2024
…try#33225)

**Description:** Using the DB span example below, the X-Ray exporter failed to generate the expected DB call subsegment names because it could not parse JDBC connection strings that start with the `jdbc:` prefix.

```
Span #1
    Trace ID       : 663a0b68a5e3849c09c07f914b3df738
    Parent ID      : 1052e2a4a2516884
    ID             : 374de78b552e23c2
    Name           : orders@no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com
    Kind           : Client
    Start time     : 2024-05-07 11:07:20.62 +0000 UTC
    End time       : 2024-05-07 11:07:20.624 +0000 UTC
    Status code    : Unset
    Status message :
Attributes:
     -> db.connection_string: Str(jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306)
     -> db.name: Str(orders)
     -> db.system: Str(MySQL)
     -> db.user: Str(myuser@10.0.149.233)
```

**Link to tracking Issue:**

**Testing:** Local tests
mxiamxia pushed a commit that referenced this pull request on Jun 10, 2024
…pen-telemetry#33353)

**Description:** The container parser should add k8s metadata as resource attributes and not as log record attributes.

**Link to tracking Issue:** Fixes open-telemetry#33341

**Testing:** Manual testing on a local k8s cluster:

```console
2024-06-04T06:40:08.219Z info ResourceLog #0
Resource SchemaURL:
Resource attributes:
     -> k8s.pod.uid: Str(d5ecc924-e255-4525-b5be-6437939b1e4d)
     -> k8s.container.name: Str(busybox)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(daemonset-logs-dhzcq)
     -> k8s.container.restart_count: Str(0)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 2024-06-04 06:40:08.007370503 +0000 UTC
Timestamp: 2024-06-04 06:40:07.855932421 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> time: Str(2024-06-04T06:40:07.855932421Z)
     -> log.iostream: Str(stdout)
Trace ID:
Span ID:
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-06-04 06:40:08.007451031 +0000 UTC
Timestamp: 2024-06-04 06:40:07.957875321 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> log.iostream: Str(stdout)
     -> time: Str(2024-06-04T06:40:07.957875321Z)
     -> key2: Map({"key_in":"val2"})
     -> logtag: Str(F)
Trace ID:
Span ID:
Flags: 0
```

**Documentation:** ~

---------

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>