Make Logparser Plugin Check For New Files #2141
Conversation
Please put a note in the sample config that new files will always be tailed from the beginning.
Check in the Gather metric to see if any new files matching the glob have appeared. If so, start tailing them from the beginning.
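A rough sketch of that approach, built on the hpcloud/tail library the plugin uses (the function name and the tailers map are illustrative, not the plugin's actual fields):

```go
package logparser

import (
	"log"
	"path/filepath"

	"github.com/hpcloud/tail"
)

// startNewTailers re-globs the configured pattern on each Gather and begins
// tailing any file that is not already being followed. Unlike files present
// at startup (which are tailed from the end), newly discovered files are
// read from offset 0, i.e. from the beginning.
func startNewTailers(pattern string, tailers map[string]*tail.Tail) error {
	matches, err := filepath.Glob(pattern)
	if err != nil {
		return err
	}
	for _, file := range matches {
		if _, ok := tailers[file]; ok {
			continue // already tailing this file
		}
		t, err := tail.TailFile(file, tail.Config{
			Follow:   true,
			ReOpen:   true,                                      // keep following across renames, like `tail -F`
			Location: &tail.SeekInfo{Offset: 0, Whence: 0},      // start of file
		})
		if err != nil {
			log.Printf("E! error tailing %s: %v", file, err)
			continue
		}
		tailers[file] = t
	}
	return nil
}
```

Per the review comment above, the plugin's sample config would also note that files discovered after startup are always tailed from the beginning.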
force-pushed from 80242d6 to 7db7591
@sparrc thanks - I've updated the PR with your review feedback
looks good, please update the changelog!
I believe this closes #1829
force-pushed from 2cab413 to 7db7591
@sparrc thanks - I've updated the changelog.
* Make Logparser Plugin Check For New Files
  Check in the Gather metric to see if any new files matching the glob have appeared. If so, start tailing them from the beginning.
* changelog update for influxdata#2141
Telegraf v1.2.1 (git: release-1.2 3b6ffb3)

Was this fixed? I just fired up a logparser input for a haproxy.log and noticed that Grafana quit displaying the metrics from the haproxy.log. When I checked telegraf.log I see the information below. At 16:13:31 I started telegraf. At 16:17:01 it looks like telegraf is attempting to handle the log rotation, and at 16:17:02 it looks to have reopened the new haproxy.log file. That's when I lost my metrics in Grafana. I "reloaded" telegraf (service reload telegraf) at 16:34 and started collecting metrics again. Maybe telegraf needs to HUP itself as part of this process.

2017/04/19 16:13:31 Seeked /var/log/haproxy.log - &{Offset:0 Whence:2}

Relevant portion of my telegraf configuration
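For reference: in the hpcloud/tail library (and influxdata's fork of it) that the logparser plugin uses, the logged &{Offset:0 Whence:2} is a tail.SeekInfo, and Whence 2 means "seek relative to the end of the file". That is why a file present at startup is followed from its end, while files discovered later by this PR's change are opened with Whence 0 and read from the beginning. A minimal standalone sketch of that seek behavior:

```go
package main

import (
	"fmt"
	"os"

	"github.com/hpcloud/tail"
)

func main() {
	// Seek to the end of the existing file before following it, which is
	// what produces a log line like: Seeked ... - &{Offset:0 Whence:2}
	t, err := tail.TailFile("/var/log/haproxy.log", tail.Config{
		Follow:   true,
		ReOpen:   true,
		Location: &tail.SeekInfo{Offset: 0, Whence: os.SEEK_END}, // Whence 2
	})
	if err != nil {
		panic(err)
	}
	for line := range t.Lines {
		fmt.Println(line.Text)
	}
}
```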
@kiplandiles See #1829
@danielnelson - Thanks. I actually started at #1829 and it referred me here. I'm a bit of a GitHub newbie, so sorry if I should have posted there instead. So do I understand that this won't be fixed until 1.3? @sparrc commented that this closed #1829.
I reopened that issue because there are still problems. I believe it is caused by a bug in the upstream project, perhaps this issue: hpcloud/tail#97. I'm not sure when it will get fixed, but I don't consider it a blocker because it isn't a regression. Perhaps there is already a solution floating around that we could merge into our fork, https://github.com/influxdata/tail. The next release of Telegraf will be 1.3; we are not planning any bug fix releases before then.
Thanks again. Guess I will handle this issue myself with logrotate. We shall see how that works out. I already have to add telegraf to the adm group just to access the haproxy log file, so logparser seems to require a bit of tweaking.
How do you close the fd of deleted files? tail only executes closefile when telegraf exits.
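A minimal sketch of one way to release those descriptors with the hpcloud/tail API, assuming the plugin tracks its tailers in a map keyed by filename (the map and function name here are hypothetical, not the plugin's actual code): call Stop and Cleanup on tailers whose files have been deleted.

```go
package logparser

import (
	"os"

	"github.com/hpcloud/tail"
)

// stopDeletedTailers stops tailers whose underlying files no longer exist,
// closing their file descriptors instead of holding them until exit.
func stopDeletedTailers(tailers map[string]*tail.Tail) {
	for file, t := range tailers {
		if _, err := os.Stat(file); os.IsNotExist(err) {
			t.Stop()    // stops the tailing goroutine and closes the file
			t.Cleanup() // drops the inotify watch for this file
			delete(tailers, file)
		}
	}
}
```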