Default image could have decent logrotate options on #38

Closed
heikkis opened this issue Apr 29, 2016 · 10 comments

@heikkis commented Apr 29, 2016

A wish from a guy with a 30 GB logfile from Logstash, for example :)

@spujadas (Owner)

Fair point, and open to suggestions: any recommendations as to what options would be decent?

@heikkis (Author) commented Apr 29, 2016

Maybe something like this:

Keep 5 files with a 100 MB limit per service, i.e. at most 500 MB per service; across Kibana, Logstash, and Elasticsearch that would come to 1500 MB in total.
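
For illustration, a minimal logrotate sketch of that suggestion; the log paths below are assumptions, not necessarily the ones actually used in the image:

# Size-based rotation: keep 5 files of up to 100 MB per service
# (~500 MB per service, ~1500 MB across all three).
/var/log/elasticsearch/*.log /var/log/logstash/*.log /var/log/kibana/*.log {
        size 100M
        rotate 5
        compress
        missingok
        notifempty
        copytruncate
}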

@heikkis (Author) commented Apr 29, 2016

Hmm. How does Logstash handle log writing? Is there a log4j (or similar) handler that takes care of log rotation? The few examples I've found just show a normal daily logrotate setup (which would presumably need to be hourly in this case?).

@spujadas (Owner)

OK, thanks for the input; I'll look into this and update the image.
For Logstash at the very least, the logrotate configuration that is included in the DEB/RPM/etc. packages looks promising.

@gvenka008c

@spujadas How do you manage the logs that are sent from the client servers to the ELK servers? How can we retain only 3 days' worth of data in the ELK servers that gets displayed in Kibana?

@spujadas (Owner)

@gvenka008c Sorry, I'm not a Logstash expert so I haven't got a definite answer to that one.
Deleting old indices as described here seems like a good idea, but don't take my word for it: you may want to check with the Logstash community over at https://discuss.elastic.co/c/elasticsearch for a more solid answer to your question.
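
As a rough illustration of that approach, an old daily index can be dropped with a plain DELETE against the Elasticsearch REST API (the index name below is just an example; in practice Elasticsearch Curator is commonly used to automate this on a schedule):

# Delete one daily Logstash index older than the retention window (example name):
curl -XDELETE 'http://localhost:9200/logstash-2016.04.26'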

@spujadas (Owner)

Ended up going with daily rotation + compression + deletion after a week, using logrotate for all three services.
I tried configuring Elasticsearch's logging.yml, but that was very fiddly and ultimately unsatisfying: Elasticsearch's use of log4j 1.x does allow for log rotation, but it appears to be much less flexible than logrotate (i.e. it can't seem to do what logrotate does easily). This should change if/when log4j is upgraded to 2.x (see elastic/elasticsearch#17697 for upgrade plans); in the meantime I'll play it safe and use logrotate (tested, appears to work as intended).
Anyway, the image now has a "sensible" default, which can be overridden as needed.
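
A sketch of what that default could look like as a logrotate configuration (daily rotation, compression, files dropped after a week); the log paths are assumptions and not necessarily the exact ones shipped in the image:

/var/log/elasticsearch/*.log /var/log/logstash/*.log /var/log/kibana/*.log {
        daily
        rotate 7
        compress
        delaycompress
        copytruncate
        missingok
        notifempty
}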

@heikkis (Author) commented Apr 30, 2016

Thanks. Sounds very good!

@reallistic

Instead of using logrotate on Logstash, you may want to consider removing the following from 30-output.conf:

stdout { codec => rubydebug }

See here: https://discuss.elastic.co/t/logstash-stdout-large-size/24939/4
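
Assuming 30-output.conf looks something like the sketch below (the elasticsearch hosts setting is an assumption, not the file's confirmed contents), the change amounts to dropping the stdout output and keeping only the elasticsearch output:

output {
  elasticsearch { hosts => ["localhost"] }
  # stdout { codec => rubydebug }  <- removed so Logstash stops echoing every event to its own log
}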

Also, this logrotate setup did not work for me. After some debugging, it seems that the cron daemon is not running inside the container (see issue #60).

@spujadas (Owner) commented Aug 6, 2016

@reallistic Yep, sounds like a good idea, but I need to do a few tests first (see #60, will have to wait a couple of weeks for that).

reallistic pushed a commit to reallistic/elk-docker that referenced this issue Aug 6, 2016
igodev0001 pushed a commit to igodev0001/elk-docker that referenced this issue Jul 8, 2023