New Kubernetes container logs are not tailed by fluentd #3423
@cosmo0920 and @ashie, I see you have handled a number of […]
I checked with such symlinks, but they work correctly for me.
Although I'm not sure yet whether it's the plugin's issue or fluentd's issue, it seems the records might be filtered out by fluent-plugin-kubernetes_metadata_filter.
@ashie A few questions for you: […]
AFAIK filter plugins cannot affect an input plugin's behavior.
If there isn't output for the file you want, it's considered an in_tail issue. Since I haven't checked your report & log in detail yet, I may have missed some important points, like […]
I ran into the same issue on […]
I also checked my […]
@ashie Yes. I didn't see the log file content I want. If you restart fluentd, everything is fine. But with frequent creation and deletion of pods, the problems keep coming back.
@Gallardot I have tested again and I do NOT see any entries in the pos file and do NOT see any fluentd logs for my test pod:
The pos file doesn't have an entry for this pod's log either:
@ashie @cosmo0920 Any help on this would be highly appreciated, as this issue is preventing us from getting any new pod logs. Thank you very much in advance!
I'm not sure of the root cause of this issue, but newer k8s changed its log directories due to the removal of dockershim. On older k8s they should point to […].
So, I think that this line should be adapted to the new CRI-O k8s environment. I also added a guide for tailing logs on CRI-O k8s environments to the official Fluentd daemonset. Hope this helps; a config sketch is below.
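Not taken from the thread itself, but as a rough illustration: an in_tail source for a CRI-O/containerd node might look like the sketch below, using the CRI parse regexp from the fluentd-kubernetes-daemonset examples (the path, tag, and pos_file location are assumptions):

```
<source>
  @type tail
  @id in_tail_container_logs
  # On CRI runtimes the real files live under /var/log/pods; /var/log/containers
  # holds symlinks to them (dockershim used /var/lib/docker/containers instead).
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    # CRI log lines look like: 2021-06-21T23:26:22.401153916Z stdout F message
    @type regexp
    expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
    time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
  </parse>
</source>
```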
BTW @Gallardot, v1.12.1 isn't recommended for in_tail; it has some serious bugs in it.
@ashie and @cosmo0920 We are aware of the k8s changes, but we do NOT have an issue with the log file locations. On startup or reload, fluentd has no issues tailing the log files. The issue only happens for newly created k8s pods!
@ashie @cosmo0920 For the latest pod example, I just noticed that […]
When I check our external log receiver (VMware LogInsight), it only received logs from fluentd for ~10 minutes (between 2021-06-21 23:26:22 and 2021-06-21 23:36:14), and then all logs stopped coming completely!
Do you have huge log files? BTW I think this can be considered the same issue as #3239, so I want to close this issue and continue the discussion at #3239.
Are you asking about any large log files on the node? Or are you asking if my test k8s pod has a large log file?
We don't seem to have any issues with network saturation, so I am confused about how […]
Personally, I would rather keep this issue separate, as it deals with a specific reproducible problem, instead of a 2-year-old ticket with a ton of unrelated comments in it.
On the node. When […]
95 MB isn't so big, but it might take several tens of minutes to reach EOF (depending on the parser's performance).
A smaller value makes it easier for other event handlers to run, but the reading pace of a file becomes slower. I suggest you start with […]; a sketch of the relevant setting is below.
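The setting being discussed appears to be in_tail's read_bytes_limit_per_second, introduced in v1.13.0. A minimal sketch follows; the concrete value is an illustrative assumption, not the one suggested in the thread:

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  # Cap how many bytes in_tail reads from one file per second, so a single
  # huge or busy file can't monopolize the event loop that watches for new files.
  # 8192 is an illustrative starting point, not the value from the thread.
  read_bytes_limit_per_second 8192
  <parse>
    @type none
  </parse>
</source>
```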
See attached file:
Right before you replied, I was doing testing with […]
OK, I will test now with […]
I'm also thinking about other possibilities because of your following comment:
If in_tail were running in a busy loop, events should be emitted continuously. But in your case they aren't.
@ashie the […]. On the same exact node: […]
@ashie I also just tested with […]
I will also test with […]
With […]
Thanks for your test.
A known issue is that you'll lose logs when rotation occurs before reaching EOF, as I mentioned above.
So, for the past 2 days the […]
One possibility is the JSON library.
We use kube-fluentd-operator, and it does install oj into its image:
@ashie If […]
Yes, it will be lost even if […]
Landed in v1.13.2, so I'm closing this issue.
Describe the bug
We have noticed an issue where new Kubernetes container logs are not tailed by fluentd. For example: […]
To Reproduce
Set up fluentd to tail the logs of Kubernetes pods (a minimal sketch is shown below), then create and delete Kubernetes pods.
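A minimal sketch of such a setup; the paths and the JSON parser are assumptions for a Docker-based node, not the reporter's actual config:

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json   # Docker json-file driver; CRI runtimes need a regexp/CRI parser
  </parse>
</source>

# Print everything so it's easy to see whether logs from new pods ever show up.
<match kubernetes.**>
  @type stdout
</match>
```

Then create and delete pods in a loop and watch whether entries for the new pods ever appear in the pos file and in the output.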
Expected behavior
fluentd should successfully tail logs for new Kubernetes pods.
Your Environment
Fluentd or td-agent version: fluentd 1.13.0
Operating system: Ubuntu 20.04.1 LTS
Kernel version: 5.4.0-62-generic
If you hit the problem with an older fluentd version, try the latest version first.
Your Configuration
Your Error Log
kube-fluentd-operator-jcss8-fluentd.log.gz
Additional context
With Kubernetes and Docker there are two levels of symlinks before we get to a log file. Just mentioning it, in case fluentd has issues reading logs via symlinks; a sketch of the chain is below.
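For reference, the usual chain on a Docker node looks roughly like this (hypothetical pod/container names, default paths assumed):

```
# /var/log/containers/mypod_default_app-<cid>.log        (symlink, level 1)
#   -> /var/log/pods/default_mypod_<uid>/app/0.log       (symlink, level 2)
#     -> /var/lib/docker/containers/<cid>/<cid>-json.log (actual JSON log file)
```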