forked from open-telemetry/opentelemetry-collector-contrib
Address feedback from upstream PR we did not merge #1
Merged
Conversation
mx-psi commented on Sep 21, 2020, on the following hunk:
```
@@ -54,20 +55,21 @@ func (api *APIConfig) GetCensoredKey() string {

// DogStatsDConfig defines the DogStatsd related configuration
type DogStatsDConfig struct {
	// FIXME Use confignet.NetAddr
```
This is not trivial, and since we are going to remove this mode, I decided to just add a note.
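For context, here is a minimal sketch of what that FIXME would look like if the struct were switched over to `confignet.NetAddr`. The field name and mapstructure tag below are assumptions for illustration, not the exporter's actual code:

```go
// A minimal sketch of what the FIXME above suggests: replacing the exporter's
// hand-rolled endpoint fields with the collector's confignet.NetAddr struct.
// Field and tag choices here are assumptions, not the exporter's actual code.
package datadogexporter

import "go.opentelemetry.io/collector/config/confignet"

// DogStatsDConfig defines the DogStatsD related configuration.
type DogStatsDConfig struct {
	// Addr would carry both the endpoint and the transport, e.g.
	// confignet.NetAddr{Endpoint: "127.0.0.1:8125", Transport: "udp"}.
	Addr confignet.NetAddr `mapstructure:",squash"`
}
```

Reusing the shared struct would give the DogStatsD endpoint the same endpoint/transport configuration surface as other collector components, which is why the note was left rather than hand-rolling it again.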
albertvaka approved these changes on Sep 21, 2020.
mx-psi added a commit that referenced this pull request on Oct 2, 2020:
* Add DataDog exporter back from old fork. Squashed history from the fork (all commits authored by Pablo Baeyens <pablo.baeyens@datadoghq.com>):
  - commit 99129fb96e29e9c1a92da00b7e3f8efcae8a31e8 (Thu Sep 3 18:10:28 2020 +0200): Handle namespace at initialization time
  - commit babca25927926a60c0c416294af3aadf784d41b9 (Thu Sep 3 17:23:53 2020 +0200): Initialize on a separate function. This way the variables can be checked without worrying about the env.
  - commit 24d0cb4cc566fa5313a8650c904a27bea68bf555 (Thu Sep 3 14:30:35 2020 +0200): Check environment variables for unified service tagging
  - commit 6695f8297ab8b1fcae71b05acb027c4a0992e3a0 (Wed Sep 2 14:57:37 2020 +0200): Add support for sending metrics through the API. Use datadog.Metric type for simplicity; get host if unset.
  - commit c366603 (Wed Sep 2 09:56:24 2020 +0200): Disable Queue and Retry settings (#72). These are handled by the statsd package. OpenTelemetry docs are confusing and the default configuration (disabled) is different from the one returned by "GetDefault..." functions.
  - commit a660b56 (Tue Sep 1 15:26:14 2020 +0200): Add support for summary and distribution metric types (#65). Add support for summary metric type; add support for distribution metrics; refactor metrics construction (drop name in Metrics, which now act as Metric values, and refactor the constructor so that errors happen at compile time); report Summary total sum and count values (snapshot values are not filled in by OpenTelemetry); report p00 and p100 as `.min` and `.max`, which is more similar to what we do for our own non-additive type; keep hostname if it has not been overridden.
  - commit c95adc4 (Thu Aug 27 13:00:02 2020 +0200): Update dependencies and `make gofmt`. The collector was updated to 0.9.0 upstream.
  - commit 20afb0e (Wed Aug 26 18:25:49 2020 +0200): Refactor configuration (#45). Refactor configuration; implement telemetry and tags configuration handling; update example configuration and README file. Co-authored-by: Kylian Serrania <kylian.serrania@datadoghq.com>
  - commit fdc98b5 (Fri Aug 21 11:09:08 2020 +0200): Initial DogStatsD implementation (#15). Initial metrics exporter through DogStatsD with support for all metric types but summary and distribution.
  - commit e848a60 (Fri Aug 21 10:42:45 2020 +0200): Bump collector version
  - commit 58be9a4 (Thu Aug 6 10:04:32 2020 +0200): Address linter
  - commit 695430c (Tue Aug 4 13:28:01 2020 +0200): Fix field name error. MetricsEndpoint was renamed to MetricsURL.
  - commit 168b319 (Mon Aug 3 11:05:01 2020 +0200): Create initial outline for Datadog exporter (#1)
* Add support for basic configuration options
* Documents configuration options
* go mod tidy
* Address feedback from upstream PR we did not merge (#1)
* Backport changes from upstream PR: Remove `err` from MapMetrics
* Remove usage of pdatautil
* Fix tests
* Use TCPAddr
* Review which functions should be private
* Remove DogStatsD mode (#2)
* Remove DogStatsD mode
* go mod tidy
* Remove mentions to DogStatsD
* Improve test coverage (#3)
* Improve test coverage. Added unit tests for API key censoring, hostname, and the metrics exporter. Renamed test and implementation files for consistency.
* Add one additional test
* Remove client validation (#6). The zorkian API does not validate the API key unless you also have an application key, even though the endpoint works without it. I am removing this validation until this gets fixed on the zorkian library.
* Keep only configuration and factory methods. Following the contribution guidelines we need to make a first PR with this.
* Use latest version of collector
* Remove `report_percentiles` option. It is not being used as of now, until the OTLP metrics format stabilizes and we have a Summary type metric again.
* Correct configuration. The API key is now a required setting.
* Remove test not relevant for this PR
* Remove unnecessary imports after removing test
* Address review comment
* Apply suggestions from code review. Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
* Separate documentation into two examples: one with the minimal configuration, for sending to `datadoghq.com`, and a second one for sending to `datadoghq.eu`.

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
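One detail worth calling out from the squashed history is the summary handling: the 0th and 100th percentiles are reported with `.min` and `.max` suffixes. A rough, hypothetical illustration of that naming rule (not the exporter's actual code):

```go
// Hypothetical sketch of the idea described in the commit above: when
// translating a summary's quantiles, p00 and p100 become ".min" and ".max"
// instead of percentile-named series. Names and signatures are illustrative.
package main

import "fmt"

func quantileMetricName(base string, quantile float64) string {
	switch quantile {
	case 0:
		return base + ".min"
	case 1:
		return base + ".max"
	default:
		return fmt.Sprintf("%s.quantile_%g", base, quantile)
	}
}

func main() {
	for _, q := range []float64{0, 0.5, 0.95, 1} {
		fmt.Println(quantileMetricName("request.duration", q))
	}
}
```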
ericmustin pushed a commit that referenced this pull request on Oct 26, 2020:
* Restructure buildCWMetric logic (#1)
  * Restructure code to remove duplicated logic
  * Update format
  * Improve function and variable names
  * Extract logic for dimension creation and add test
  * Implement minor fixes
  * Remove changes to go.sum
  * Implement tests for getCWMetrics
  * Implement tests for buildCWMetric
  * Format metric_translator_test.go
  * Run with gofmt -s
  * Disregard ordering of dimensions in test case
  * Perform dimension equality checking as a helper function
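The last two items describe test-side changes. A hypothetical sketch of an order-insensitive dimension comparison helper of the kind mentioned there (the real helper in the EMF exporter may differ):

```go
// A minimal, hypothetical sketch of "dimension equality checking as a helper
// function": compare two dimension name sets while disregarding ordering.
package main

import (
	"fmt"
	"sort"
)

// dimensionsEqual reports whether two dimension name lists contain the same
// elements, regardless of order. Inputs are copied so callers' slices are
// left untouched.
func dimensionsEqual(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	ac := append([]string(nil), a...)
	bc := append([]string(nil), b...)
	sort.Strings(ac)
	sort.Strings(bc)
	for i := range ac {
		if ac[i] != bc[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(dimensionsEqual([]string{"ServiceName", "Namespace"}, []string{"Namespace", "ServiceName"})) // true
}
```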
mx-psi pushed a commit that referenced this pull request on Sep 23, 2021:
Adds a Cloud Foundry metric receiver which reads metrics from the Cloud Foundry Reverse Log Proxy Gateway. More details are available in the `README.md`. `make gotidy` seems to have made plenty of subtle changes to `go.sum` files; not sure if this is normal. This PR contains the overall structure, documentation, and the implementation for config and factory, but does NOT contain the implementation of the receiver and does not enable the component, as that will come in separate PRs later.

**Link to tracking Issue:** open-telemetry#5320

**Testing:** Unit tests. Manual testing was performed against Tanzu Application Service (TAS) versions 2.7, 2.8 and 2.11. Considered adding an integration test with mocked HTTP servers acting as endpoints, where the HTTP server would provide a constant response (prerecorded from the real TAS traffic), but not sure if mocks would make more sense.

**Documentation:** `README.md` and `doc.go` for the new receiver module were added.
mx-psi pushed a commit that referenced this pull request on May 30, 2022:
…y#9224)

* add vcenter vSAN collection
* checkpoint on getting property collection working
* checkpoint before integration test
* dual receivers under root receiver pointer
* checkpoint before updated mdatagen
* use syslog receiver rather than tcplogreceiver
* getting more performance counter refinements
* remove unneccessary component addition
* try to fix go.mod resolution issues
* try to fix go.mod resolution issues pt 2
* addlicense
* fix go.mod by fixing require directive
* add readme for metrics
* update readme
* fix go.mod referring nonexistent version
* add performance manager tests
* more tests
* add more attributes to virtual machines and host systems
* add more attributes to virtual machines and host systems
* spike changelog entry
* fix go.mod in both places
* fix go.mod in configschema
* add // import github.com/open-telemetry/opentelemetry-collector-contrib/receiver/vmwarevcenterreceiver to imports
* add quotations
* add to receiver lifecycle
* remove extra go generate direction
* fix typo of utilizaiton in metric description
* small changes to interval id in performance queries to be more consistent
* PR feedback including omitting company name prefix
* PR feedback to not fail starting the component on potential network failures
* minor grammar correction in vcenter readme
* update expected metrics
* update host_effective attribute value
* remove PerformanceInterval customizability
* add to codeowners
* fix indentation on merge conflict
* fix changelog entry place so its in the new components section
* update to be on 0.49.0 of the collector
* add PR number to changelog
* regenerate with newer version of mdatagen
* move error log if unable to connect on start to receiver.Start() rather than scraper.Start()
* fix test cases from last commit
* minor update to config with tests
* fix metric description
* use utc for host vsan collection as well
* update comments of public facing methods
* return errors on getting clusters to the scraper errors
* PR feedback #1
* instantiate new client if client is nil
* update all descriptions to have punctuation
* three more descs
* move ensureReceiver up to once we validated as a config
* some more PR feedback
* looking into race conditions
* run go tidy
* fix import order and remove unneccessary mutex
* remove mutex from struct
* refactor client to responsible for knowing if the vsan endpoints are reachable
* fix integration test referencing old var
* change metrics.metrics => metrics.settings, update client pr feedback
* remove vSAN collection temporarily
* remove extra metric attributes for vSAN
* remove vsan specific variables
* clean up host PerfCounter disk latency metrics and fix some descriptions to better reflect interval
* add 20s interval to extended documentation as needed
* mdatagen fixes
* add integration test metric scrape
* fix import order
* go up to 0.49.1
* gotidy
* add replace directive for semconv
* gotidy fixes
* fix component not being on 0.50.0
* update to v0.50.1-0.20220429151328-041f39835df7
* use newer mdatagen
* remove any logging functionality change && update documentation
* fix integration test from flattening of config
* fix scraper start not erroring if connection cannot be established
* make scrapertest less flaky
* format test json
* Apply suggestions from code review. Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
* adjust metric definition for vcenter.host.disk.throughput
* remove comment and move pm level 2 metrics to appropriate section
* try to be respective of datacenters
* fix only vCenter server functionality
* try building out a mock server for test coverage
* make goporto
* fix build issues
* use latest mdatagen
* add newlines to ends of xml recordings
* fix integration test
* moved around scrapererrors because now the receiver is datacenter dependent
* try and do an audit of performance metrics and requests/responses
* update testdata with correct units
* make tidy
* make tidy
* update collector version
* fix local testing code including modules
* remove deprecated use of commonponenterror
* pr feedback; add method of collection recording, return poweredOn/poweredOff VMs
* remove content.json
* fix description change in scraper_test.go
* update collector version
* bump replaced module; rebuild load tests
* fix alibaba version auto localizing

Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
dineshg13 pushed a commit that referenced this pull request on Aug 4, 2023:
…emetry#24676)

**Description:** The metadata.yml for the SSH check receiver currently documents a resource attribute containing the SSH endpoint, but this is not emitted. This PR updates the receiver to include this resource attribute.

**Link to tracking Issue:** open-telemetry#24441

**Testing:** Example collector config:

```yaml
receivers:
  sshcheck:
    endpoint: 13.245.150.131:22
    username: ec2-user
    key_file: /Users/dewald.dejager/.ssh/sandbox.pem
    collection_interval: 15s
    known_hosts: /Users/dewald.dejager/.ssh/known_hosts
    ignore_host_key: false
    resource_attributes:
      "ssh.endpoint":
        enabled: true
exporters:
  logging:
    verbosity: detailed
  prometheus:
    endpoint: 0.0.0.0:8081
    resource_to_telemetry_conversion:
      enabled: true
service:
  pipelines:
    metrics:
      receivers: [sshcheck]
      exporters: [logging, prometheus]
```

The log output looks like this:

```
2023-07-30T16:52:38.724+0200  info  MetricsExporter  {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 2, "data points": 2}
2023-07-30T16:52:38.724+0200  info  ResourceMetrics #0
Resource SchemaURL:
Resource attributes:
     -> ssh.endpoint: Str(13.245.150.131:22)
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope otelcol/sshcheckreceiver 0.82.0-dev
Metric #0
Descriptor:
     -> Name: sshcheck.duration
     -> Description: Measures the duration of SSH connection.
     -> Unit: ms
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 319
Metric #1
Descriptor:
     -> Name: sshcheck.status
     -> Description: 1 if the SSH client successfully connected, otherwise 0.
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 1
```

And the Prometheus metrics look like this:

```
# HELP sshcheck_duration Measures the duration of SSH connection.
# TYPE sshcheck_duration gauge
sshcheck_duration{ssh_endpoint="13.245.150.131:22"} 311
# HELP sshcheck_status 1 if the SSH client successfully connected, otherwise 0.
# TYPE sshcheck_status gauge
sshcheck_status{ssh_endpoint="13.245.150.131:22"} 1
```
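At the pdata level, emitting the documented resource attribute amounts to setting it on the scraped resource when the option is enabled. A hedged sketch, independent of the receiver's generated metadata builder:

```go
// Illustrative only: the sshcheck receiver sets this attribute through its
// generated metadata/resource builder, not through code like this.
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	md := pmetric.NewMetrics()
	rm := md.ResourceMetrics().AppendEmpty()

	// Mirrors resource_attributes."ssh.endpoint".enabled in the config above.
	enabled := true
	if enabled {
		rm.Resource().Attributes().PutStr("ssh.endpoint", "13.245.150.131:22")
	}
	fmt.Println(md.ResourceMetrics().At(0).Resource().Attributes().AsRaw())
}
```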
songy23 pushed a commit that referenced this pull request on Sep 13, 2023:
)

**Description:** Adds the command line argument `--status-code` to `telemetrygen traces`, which accepts `(Unset,Error,Ok)` (case sensitive) or the enum equivalent of `(0,1,2)`.

Running

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code 1
```

against a minimal local collector yields

```txt
2023-07-29T21:27:57.862+0100  info  ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID : 4e2cd6e0e90cf2ea
    ID : 20835413e32d26a5
    Name : okey-dokey
    Kind : Server
    Start time : 2023-07-29 20:27:57.861602 +0000 UTC
    End time : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID :
    ID : 4e2cd6e0e90cf2ea
    Name : lets-go
    Kind : Client
    Start time : 2023-07-29 20:27:57.861584 +0000 UTC
    End time : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

and similarly (the string version)

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code '"Ok"'
```

produces

```txt
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID : dfd830da170acfe567b12f87685d7917
    Parent ID : 8e15b390dc6a1ccc
    ID : 165c300130532072
    Name : okey-dokey
    Kind : Server
    Start time : 2023-07-29 20:29:16.026965 +0000 UTC
    End time : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID : dfd830da170acfe567b12f87685d7917
    Parent ID :
    ID : 8e15b390dc6a1ccc
    Name : lets-go
    Kind : Client
    Start time : 2023-07-29 20:29:16.026956 +0000 UTC
    End time : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

The default is `Unset`, which is the current behaviour.

**Link to tracking Issue:** 24286

**Testing:** Added unit tests which cover both valid and invalid inputs.

**Documentation:** Command line arguments are self-documenting via the usage info in the flag.

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
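A hedged re-creation of the described flag handling, accepting either the case-sensitive names or their numeric equivalents; telemetrygen's actual parsing may differ, and the quoted forms are handled only to mirror the `'"Ok"'` usage shown above:

```go
// Hypothetical sketch of parsing the --status-code value into a numeric span
// status code (0 = Unset, 1 = Error, 2 = Ok), not telemetrygen's real code.
package main

import (
	"fmt"
	"strconv"
)

func parseStatusCode(s string) (int32, error) {
	switch s {
	case "Unset", `"Unset"`:
		return 0, nil
	case "Error", `"Error"`:
		return 1, nil
	case "Ok", `"Ok"`:
		return 2, nil
	default:
		if n, err := strconv.ParseInt(s, 10, 32); err == nil && n >= 0 && n <= 2 {
			return int32(n), nil
		}
		return 0, fmt.Errorf("invalid status code %q, expected Unset, Error, Ok or 0, 1, 2", s)
	}
}

func main() {
	for _, in := range []string{"1", `"Ok"`, "Unset", "bogus"} {
		code, err := parseStatusCode(in)
		fmt.Println(in, "->", code, err)
	}
}
```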
songy23 pushed a commit that referenced this pull request on Dec 5, 2023:
open-telemetry#29116)

**Description:** As originally proposed in open-telemetry#26991 (before I got distracted), this exposes the duration of generated spans as a command line parameter. It uses a `DurationVar` flag, so units can be easily provided and are automatically applied.

Example usage:

```bash
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10ns # nanoseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10us # microseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10ms # milliseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10s  # seconds
```

**Testing:** Ran without the argument provided (`telemetrygen traces --traces 1 --otlp-insecure`) and saw spans publishing with the default value. Ran again with the argument provided (`telemetrygen traces --traces 1 --otlp-insecure --span-duration 1s`) and observed the expected output:

```
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID : 8b441587ffa5820688b87a6b511d634c
    Parent ID : 39faad428638791b
    ID : 88f0886894bd4ee2
    Name : okey-dokey
    Kind : Server
    Start time : 2023-11-12 02:05:07.97443 +0000 UTC
    End time : 2023-11-12 02:05:08.97443 +0000 UTC
    Status code : Unset
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID : 8b441587ffa5820688b87a6b511d634c
    Parent ID :
    ID : 39faad428638791b
    Name : lets-go
    Kind : Client
    Start time : 2023-11-12 02:05:07.97443 +0000 UTC
    End time : 2023-11-12 02:05:08.97443 +0000 UTC
    Status code : Unset
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
    {"kind": "exporter", "data_type": "traces", "name": "debug"}
```

**Documentation:** No documentation added.

---------

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
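A minimal sketch of the `DurationVar` approach the commit describes, using the standard library `flag` package for illustration; telemetrygen itself wires the flag into its `traces` subcommand, and the default value below is an assumption:

```go
// Sketch of a duration-typed flag: values like 10ms or 1s parse automatically.
package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	var spanDuration time.Duration
	// The default here is illustrative, not telemetrygen's actual default.
	flag.DurationVar(&spanDuration, "span-duration", 123*time.Microsecond, "duration of each generated span")
	flag.Parse()

	start := time.Now()
	end := start.Add(spanDuration) // the span's end time is simply start + duration
	fmt.Println("span start:", start, "end:", end)
}
```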
mx-psi pushed a commit that referenced this pull request on May 14, 2024:
**Description:** This PR implements the new container logs parser as it was proposed at open-telemetry#31959.

**Link to tracking Issue:** open-telemetry#31959

**Testing:** Added unit tests. Providing manual testing steps as well:

### How to test this manually

1. Using the following config file:

```yaml
receivers:
  filelog:
    start_at: end
    include_file_name: false
    include_file_path: true
    include:
      - /var/log/pods/*/*/*.log
    operators:
      - id: container-parser
        type: container
        output: m1
      - type: move
        id: m1
        from: attributes.k8s.pod.name
        to: attributes.val
      - id: some
        type: add
        field: attributes.key2.key_in
        value: val2
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]
      processors: []
```

2. Start the collector: `./bin/otelcontribcol_linux_amd64 --config ~/otelcol/container_parser/config.yaml`
3. Use the following bash script to create some logs:

```bash
#! /bin/bash
echo '2024-04-13T07:59:37.505201169-05:00 stdout P This is a very very long crio line th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '{"log":"INFO: log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2024-04-13T07:59:37.505201169-05:00 stdout F at is awesome! crio is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout P some containerd log th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
echo '{"log":"INFO: another log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F at is super awesome! Containerd is awesome' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
echo '2024-04-13T07:59:37.505201169-05:00 stdout F standalone crio line which is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F standalone containerd line that is super awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
```

4. Run the above as a bash script to verify any parallel processing. Verify that the output is correct.

### Test manually on k8s

1. `make docker-otelcontribcol && docker tag otelcontribcol otelcontribcol-dev:0.0.1 && kind load docker-image otelcontribcol-dev:0.0.1`
2. Install using the following helm values file:

```yaml
mode: daemonset
presets:
  logsCollection:
    enabled: true
image:
  repository: otelcontribcol-dev
  tag: "0.0.1"
  pullPolicy: IfNotPresent
command:
  name: otelcontribcol
config:
  exporters:
    debug:
      verbosity: detailed
  receivers:
    filelog:
      start_at: end
      include_file_name: false
      include_file_path: true
      exclude:
        - /var/log/pods/default_daemonset-opentelemetry-collector*_*/opentelemetry-collector/*.log
      include:
        - /var/log/pods/*/*/*.log
      operators:
        - id: container-parser
          type: container
          output: some
        - id: some
          type: add
          field: attributes.key2.key_in
          value: val2
  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [batch]
        exporters: [debug]
```

3. Check collector's output to verify the logs are parsed properly:

```console
2024-05-10T07:52:02.307Z  info  LogsExporter  {"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 2}
2024-05-10T07:52:02.307Z  info  ResourceLog #0
Resource SchemaURL:
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 2024-05-10 07:52:02.046236071 +0000 UTC
Timestamp: 2024-05-10 07:52:01.92533954 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:01)
Attributes:
     -> log: Map({"iostream":"stdout"})
     -> time: Str(2024-05-10T07:52:01.92533954Z)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
Trace ID:
Span ID:
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-05-10 07:52:02.046411602 +0000 UTC
Timestamp: 2024-05-10 07:52:02.027386192 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:02)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
     -> time: Str(2024-05-10T07:52:02.027386192Z)
     -> log: Map({"iostream":"stdout"})
     -> logtag: Str(F)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> key2: Map({"key_in":"val2"})
Trace ID:
Span ID:
Flags: 0
...
```

**Documentation:** Added.

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>
mackjmr pushed a commit that referenced this pull request on Jul 3, 2024:
…try#33225)

**Description:** Using the DB span example below, the X-Ray exporter failed to generate the expected DB call subsegment names because it could not parse JDBC connection strings that start with the `jdbc:` prefix.

```
Span #1
    Trace ID : 663a0b68a5e3849c09c07f914b3df738
    Parent ID : 1052e2a4a2516884
    ID : 374de78b552e23c2
    Name : orders@no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com
    Kind : Client
    Start time : 2024-05-07 11:07:20.62 +0000 UTC
    End time : 2024-05-07 11:07:20.624 +0000 UTC
    Status code : Unset
    Status message :
Attributes:
     -> db.connection_string: Str(jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306)
     -> db.name: Str(orders)
     -> db.system: Str(MySQL)
     -> db.user: Str(myuser@10.0.149.233)
```

**Testing:** local tests
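A hedged illustration of the kind of fix this implies: strip the `jdbc:` prefix before parsing the connection string as a URL, so the host can be recovered for the subsegment name. The exporter's real parsing logic is more involved than this:

```go
// Hypothetical sketch, not the awsxrayexporter's actual code.
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// dbHost extracts the host portion of a db.connection_string value,
// tolerating a leading "jdbc:" prefix.
func dbHost(connString string) (string, error) {
	connString = strings.TrimPrefix(connString, "jdbc:")
	u, err := url.Parse(connString)
	if err != nil {
		return "", err
	}
	return u.Host, nil
}

func main() {
	host, err := dbHost("jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306")
	fmt.Println(host, err)
}
```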
mackjmr pushed a commit that referenced this pull request on Jul 3, 2024:
…pen-telemetry#33353)

**Description:** The container parser should add k8s metadata as resource attributes and not as log record attributes.

**Link to tracking Issue:** Fixes open-telemetry#33341

**Testing:** Manual testing on a local k8s cluster:

```console
2024-06-04T06:40:08.219Z  info  ResourceLog #0
Resource SchemaURL:
Resource attributes:
     -> k8s.pod.uid: Str(d5ecc924-e255-4525-b5be-6437939b1e4d)
     -> k8s.container.name: Str(busybox)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(daemonset-logs-dhzcq)
     -> k8s.container.restart_count: Str(0)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 2024-06-04 06:40:08.007370503 +0000 UTC
Timestamp: 2024-06-04 06:40:07.855932421 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> time: Str(2024-06-04T06:40:07.855932421Z)
     -> log.iostream: Str(stdout)
Trace ID:
Span ID:
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-06-04 06:40:08.007451031 +0000 UTC
Timestamp: 2024-06-04 06:40:07.957875321 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> log.iostream: Str(stdout)
     -> time: Str(2024-06-04T06:40:07.957875321Z)
     -> key2: Map({"key_in":"val2"})
     -> logtag: Str(F)
Trace ID:
Span ID:
Flags: 0
```

**Documentation:** ~

---------

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>
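A simplified pdata-level illustration of the behavioral change, with the k8s metadata on the resource and only record-specific attributes on each log record; the actual change lives in the stanza container parser, not in code like this:

```go
// Assumed illustration of "k8s metadata as resource attributes".
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/plog"
)

func main() {
	ld := plog.NewLogs()
	rl := ld.ResourceLogs().AppendEmpty()

	// Resource-level k8s metadata, shared by every record from this pod.
	rl.Resource().Attributes().PutStr("k8s.pod.name", "daemonset-logs-dhzcq")
	rl.Resource().Attributes().PutStr("k8s.container.name", "busybox")

	// Record-level attributes stay specific to the individual log line.
	lr := rl.ScopeLogs().AppendEmpty().LogRecords().AppendEmpty()
	lr.Attributes().PutStr("log.iostream", "stdout")
	lr.Body().SetStr("otel logs at 06:40:07")

	fmt.Println(rl.Resource().Attributes().AsRaw())
	fmt.Println(lr.Attributes().AsRaw())
}
```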
mackjmr pushed a commit that referenced this pull request on Jul 3, 2024:
…try.log_response_body` config (open-telemetry#33854)

**Description:**

- Add `telemetry.log_request_body` and `telemetry.log_response_body` config for debugging. The debug log will contain the field `request_body` and/or `response_body` in the same log line instead of separate lines, to avoid interleaved log lines.
- Change the "Request failed" log level to debug.

Output:

```
2024-07-02T14:09:24.983+0100  debug  elasticsearchexporter/elasticsearch_bulk.go:67  Request roundtrip completed. {"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "response_body": "{\"version\":{\"number\":\"1.2.3\"}}\n", "path": "/", "method": "GET", "duration": 0.000865486, "status": "200 OK"}
2024-07-02T14:09:24.984+0100  debug  elasticsearchexporter/elasticsearch_bulk.go:67  Request roundtrip completed. {"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "request_body": "{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_1\"},\"Body\":\"Load Generator Counter #0\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_2\"},\"Body\":\"Load Generator Counter #1\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n", "response_body": "{\"took\":0,\"errors\":false,\"items\":[{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}},{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}}]}\n", "path": "/_bulk", "method": "POST", "duration": 0.000539979, "status": "200 OK"}
```

Required config to log:

```yaml
exporters:
  elasticsearch:
    telemetry:
      log_request_body: true
      log_response_body: true
service:
  telemetry:
    logs:
      level: debug
```

For easier analysis, limit the request body size. Use `num_workers`=1 and lower `flush.bytes` and/or `flush.interval`.

**Testing:** Manually verified with a modified integration test.
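A hedged sketch of the logging approach described above: attach `request_body` and `response_body` as fields on a single debug log line when the corresponding telemetry options are enabled. The names and structure here are assumptions, not the exporter's actual code:

```go
// Illustrative only; the real logic lives in the elasticsearchexporter.
package main

import "go.uber.org/zap"

type telemetrySettings struct {
	LogRequestBody  bool
	LogResponseBody bool
}

// logRoundTrip emits one debug line per request, optionally including the
// request and response bodies as structured fields so they never interleave.
func logRoundTrip(logger *zap.Logger, cfg telemetrySettings, reqBody, respBody, path, status string) {
	fields := []zap.Field{zap.String("path", path), zap.String("status", status)}
	if cfg.LogRequestBody {
		fields = append(fields, zap.String("request_body", reqBody))
	}
	if cfg.LogResponseBody {
		fields = append(fields, zap.String("response_body", respBody))
	}
	logger.Debug("Request roundtrip completed.", fields...)
}

func main() {
	logger, _ := zap.NewDevelopment()
	logRoundTrip(logger, telemetrySettings{LogRequestBody: true, LogResponseBody: true},
		`{"create":{"_index":"logs-test-idx"}}`, `{"took":0,"errors":false}`, "/_bulk", "200 OK")
}
```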
What does this PR do?
Incorporate feedback and changes made on upstream PR open-telemetry#900, since we are no longer merging that PR. In particular:

- Remove the `Telemetry` option
- Use `confignet` structs when appropriate
- Remove usage of the `pdatautil` package