
[azuremonitorexporter] Duplicate logs on Kubernetes #32480

Closed
titt opened this issue Apr 17, 2024 · 3 comments
Labels: bug (Something isn't working), closed as inactive, exporter/azuremonitor, needs triage (New item requiring triage), Stale

Comments


titt commented Apr 17, 2024

Component(s)

exporter/azuremonitor

What happened?

Description

The azuremonitorexporter seems to duplicate log records on Kubernetes.

Steps to Reproduce

Deploy the opentelemetry-collector as a daemonset on Kubernetes and use the filelog receiver to scrape the log files on each node.
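
For reference, a minimal sketch of the Helm chart values behind this setup, distilled from the full values file included under "OpenTelemetry Collector configuration" below (the connection string is left empty as in the report):

mode: daemonset

presets:
  # Enables the filelogreceiver and adds it to the logs pipeline
  logsCollection:
    enabled: true

config:
  receivers:
    filelog:
      include:
      - /var/log/pods/dremio-dev*/dremio-master-coordinator/*.log
  exporters:
    azuremonitor:
      connection_string:
  service:
    pipelines:
      logs:
        receivers: [filelog]
        exporters: [azuremonitor]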

Expected Result

Each log record should appear only once in Application Insights, instead of showing up as a duplicate.

Actual Result

[screenshots attached showing the duplicated log records in Application Insights]

Collector version

0.98.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

mode: daemonset


presets:
  # enables the k8sattributesprocessor and adds it to the traces, metrics, and logs pipelines
  kubernetesAttributes:
    enabled: false
  # enables the kubeletstatsreceiver and adds it to the metrics pipelines
  kubeletMetrics:
    enabled: false
  # Enables the filelogreceiver and adds it to the logs pipelines
  logsCollection:
    enabled: true
  # Kubernetes metrics
  clusterMetrics:
    enabled: false
  kubernetesEvents:
    enabled: false
  hostMetrics:
    enabled: false


resources:
  limits:
    cpu: 250m
    memory: 512Mi


config:
  receivers:
    # Read log
    filelog:
      include:
      - /var/log/pods/dremio-dev*/dremio-master-coordinator/*.log


  processors:
    # Limit the opentelemetry consumption
    memory_limiter:
      check_interval: 5s
      limit_percentage: 50
      spike_limit_percentage: 30

    # Create an attribute to indicate we catch dremio queries
    attributes/dremioqueries:
      actions:
      - action: insert
        key: filename
        value: queries.json

    # Parse the body as JSON when the log starts with "{"
    transform/dremioqueries:
      error_mode: ignore
      log_statements:
      - context: log
        statements:
        - set(body, ParseJSON(body)) where IsMatch(body, "^{") == true

    # Filter log
    filter/dremioqueries:
      logs:
        log_record:
        - 'not IsMatch(body, ".*queryId.*")'
        - 'IsMatch(body["queryType"], "METADATA_REFRESH")'
        

  exporters:
    debug:
      verbosity: detailed
    azuremonitor:
      connection_string:


  service:
    telemetry:
      metrics:
        level: none


    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, attributes/dremioqueries, transform/dremioqueries, filter/dremioqueries]
        exporters: [debug, azuremonitor]


useGOMEMLIMIT: true
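
One common source of duplicated records in filelog-based daemonset setups is the receiver re-reading log files from the start after a collector pod restarts, because file offsets are not persisted by default. The debug output below suggests each record is emitted only once per collector run, but persisting offsets is a cheap way to rule this out. A minimal sketch, assuming the file_storage extension and a hypothetical checkpoint directory:

config:
  extensions:
    file_storage:
      # Hypothetical path; ideally backed by a hostPath volume (for example via the
      # chart's extraVolumes/extraVolumeMounts) so checkpoints outlive the collector pod.
      directory: /var/lib/otelcol/filelog-checkpoints
  receivers:
    filelog:
      include:
      - /var/log/pods/dremio-dev*/dremio-master-coordinator/*.log
      # Point the receiver at the storage extension so read offsets persist.
      storage: file_storage
  service:
    # Note: in the Helm chart this list replaces the chart's default extensions list.
    extensions: [file_storage]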

Log output

Timestamp: 2024-04-16 14:06:22.136 +0000 UTC
Value: 0.000000
        {"kind": "exporter", "data_type": "metrics", "name": "debug"}
2024-04-16T14:06:29.166Z        info    LogsExporter    {"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 1}
2024-04-16T14:06:29.166Z        info    ResourceLog #0
Resource SchemaURL:
Resource attributes:
     -> k8s.container.name: Str(dremio-master-coordinator)
     -> k8s.namespace.name: Str(dremio-dev)
     -> k8s.pod.name: Str(dremio-master-0)
     -> k8s.container.restart_count: Str(2)
     -> k8s.pod.uid: Str(0c1a4aef-3df9-40b3-8996-bd3bbceabf1c)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 2024-04-16 14:06:29.066262948 +0000 UTC
Timestamp: 2024-04-16 14:06:28.872550158 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Map({"accelerated":false,"attemptCount":1,"context":"[]","engineName":"","engineStart":1713276388819,"engineStartTime":0,"executionCpuTimeNs":2916604,"executionPlanningStart":1713276388830,"executionPlanningTime":1,"executionStart":1713276388835,"finish":1713276388863,"inputBytes":93663,"inputRecords":175,"isTruncatedQueryText":false,"memoryAllocated":7000000,"metadataRetrieval":1713276388796,"metadataRetrievalTime":2,"outcome":"COMPLETED","outcomeReason":"","outputBytes":93663,"outputRecords":175,"pendingTime":0,"planningStart":1713276388798,"planningTime":21,"poolWaitTime":0,"queryCost":700,"queryEnqueued":1713276388819,"queryId":"19e17a1a-a126-5f53-f862-cccec6f7e700","queryText":"SELECT * FROM sys.boot", "queryType":"UI_RUN" ,"start":1713276388796,"startingStart":1713276388831,"startingTime":4,"submitted":1713276388796,"waitTimeNs":892501})
Attributes:
     -> logtag: Str(F)
     -> log.iostream: Str(stdout)
     -> log.file.path: Str(/var/log/pods/mypod_0c1a4aef-3df9-40b3-8996-bd3bbceabf1c/master-coordinator/2.log)
     -> time: Str(2024-04-16T14:06:28.872550158Z)
     -> filename: Str(queries.json)
Trace ID:
Span ID:
Flags: 0
        {"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-04-16T14:06:32.137Z        warn    internal/transaction.go:129     Failed to scrape Prometheus endpoint    {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1713276392136, "target_labels": "{__name__=\"up\", instance=\"10.162.10.65:8888\", job=\"opentelemetry-collector\"}"}
2024-04-16T14:06:32.311Z        info    MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 5, "data points": 5}

Additional context

No response

titt added the bug and needs triage labels on Apr 17, 2024
github-actions bot commented:

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot commented:

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot commented:

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Aug 16, 2024