outputs.prometheus_client expiration_interval not working #6973

Closed
debeste123 opened this issue Feb 3, 2020 · 1 comment · Fixed by #6981
Labels: area/prometheus, bug (unexpected problem or unintended behavior)
Milestone: 1.13.3

@debeste123

Relevant telegraf.conf:

[global_tags]
  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
  exporter = "telegraf"
  ## Environment variables can be used as tags, and throughout the config file
  # user = "$USER"


# Configuration for telegraf agent
[agent]
  omit_hostname = false
  interval = "20s"
  round_interval = false
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "30s"
  flush_jitter = "0s"
  logfile = ""
  debug = true
  quiet = false



###################################################################################################################



# Stream and parse log file(s).
[[inputs.logparser]]
  ## Log files to parse.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/opt/prometheus/telegraf/test_telegraf.log"]

  from_beginning = false

  [inputs.logparser.grok]

    custom_patterns = '''
    METRICS_TEST %{GREEDYDATA:message}:%{GREEDYDATA:count:int}
    METRICS_TESTT %{GREEDYDATA:message}%{DATA:hostname}
    '''

    measurement = "log_data"
    patterns = ["%{METRICS_TEST}", "%{METRICS_TESTT}"]

    ## Full path(s) to custom pattern files.
    custom_pattern_files = []

  # Add a dummy tag "value" so the log line can be exposed as a Prometheus metric
  [inputs.logparser.tags]
    value = "1"


[[processors.converter]]
  namepass = ["log_data"]
  [processors.converter.tags]
    integer = ["value"]
  [processors.converter.fields]
    tag = ["hostname"]

#debug file
[[outputs.file]]
  files = ["/opt/prometheus/telegraf/output.log"]


#PROMETHEUS#
# Publish all metrics to /metrics for Prometheus to scrape
[[outputs.prometheus_client]]
  listen = ":9273"
  metric_version = 2
  expiration_interval = "60s"
  collectors_exclude = ["gocollector", "process"]
  string_as_label = true
  export_timestamp = false

System info:

telegraf-1.13.2-1.x86_64.rpm

Steps to reproduce:

  1. Run telegraf with my config file
  2. Append a line to the log file being tailed by inputs.logparser: "/opt/prometheus/telegraf/test_telegraf.log" in my example (a sketch of this is shown below).
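
A minimal sketch of step 2 and the follow-up check, assuming the config above is already running locally (the file path and port come from the config; the test line and the 35-second wait are illustrative):

# Append a line matching the METRICS_TEST grok pattern ("message:count") to the
# tailed log file, wait for at least one flush_interval, then read the endpoint.
import time
import urllib.request

LOG_FILE = "/opt/prometheus/telegraf/test_telegraf.log"   # from inputs.logparser
METRICS_URL = "http://localhost:9273/metrics"              # from outputs.prometheus_client

with open(LOG_FILE, "a") as f:
    f.write("lalalatest:1\n")

time.sleep(35)  # flush_interval is 30s, so give it a bit more than that

body = urllib.request.urlopen(METRICS_URL).read().decode()
print([line for line in body.splitlines() if line.startswith("log_data_value")])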

Expected behavior:

Prometheus metrics are available and are removed after 60s.
log_data_value{exporter="telegraf",host="hostname",message="lalalatest",path="/opt/prometheus/telegraf/test_telegraf.log"} 1

This metric should then be removed after 60s, right?

Actual behavior:

The metric remains available at "url:9273/metrics" indefinitely.

Additional info:

I experimented at length with the interval, flush_interval, and expiration_interval values, but never got the expected result: either I get no metrics at all, or the metrics remain available indefinitely.
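
For what it's worth, here is a rough polling sketch (the port and metric name are taken from the config and the expected output above; the 10-second step and two-minute window are arbitrary) to watch whether the series ever expires:

# Poll the Prometheus client endpoint and report whether log_data_value is still
# exposed; with expiration_interval = "60s" it should disappear roughly a minute
# after the last update, but on 1.13.2 it stays forever.
import time
import urllib.request

METRICS_URL = "http://localhost:9273/metrics"  # from outputs.prometheus_client

for _ in range(12):  # watch for about two minutes
    body = urllib.request.urlopen(METRICS_URL).read().decode()
    present = any(line.startswith("log_data_value") for line in body.splitlines())
    print(time.strftime("%H:%M:%S"), "log_data_value present:", present)
    time.sleep(10)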

Thanks,

Mathias

@danielnelson danielnelson added this to the 1.13.3 milestone Feb 4, 2020
@danielnelson danielnelson added the area/prometheus and bug (unexpected problem or unintended behavior) labels Feb 4, 2020
@debeste123 (Author)

@danielnelson Tested OK.

You are a god among men, thank you sir.
