Update log4j rollover to configure time retention #16179

Merged
merged 9 commits into from Jun 5, 2024

18 changes: 18 additions & 0 deletions config/log4j2.properties
@@ -26,6 +26,12 @@ appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:ls.logs}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = logstash-plain-*
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 7D
appender.rolling.avoid_pipelined_filter.type = PipelineRoutingFilter

appender.json_rolling.type = RollingFile
@@ -43,6 +49,12 @@ appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling.policies.size.size = 100MB
appender.json_rolling.strategy.type = DefaultRolloverStrategy
appender.json_rolling.strategy.max = 30
appender.json_rolling.strategy.action.type = Delete
appender.json_rolling.strategy.action.basepath = ${sys:ls.logs}
appender.json_rolling.strategy.action.condition.type = IfFileName
appender.json_rolling.strategy.action.condition.glob = logstash-json-*
appender.json_rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.json_rolling.strategy.action.condition.nested_condition.age = 7D
appender.json_rolling.avoid_pipelined_filter.type = PipelineRoutingFilter

appender.routing.type = PipelineRouting
@@ -57,6 +69,12 @@ appender.routing.pipeline.policy.type = SizeBasedTriggeringPolicy
appender.routing.pipeline.policy.size = 100MB
appender.routing.pipeline.strategy.type = DefaultRolloverStrategy
appender.routing.pipeline.strategy.max = 30
appender.routing.pipeline.strategy.action.type = Delete
appender.routing.pipeline.strategy.action.basepath = ${sys:ls.logs}
appender.routing.pipeline.strategy.action.condition.type = IfFileName
appender.routing.pipeline.strategy.action.condition.glob = pipeline_${ctx:pipeline.id}*.log.gz
appender.routing.pipeline.strategy.action.condition.nested_condition.type = IfLastModified
appender.routing.pipeline.strategy.action.condition.nested_condition.age = 7D

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
71 changes: 71 additions & 0 deletions docs/static/logging.asciidoc
@@ -48,6 +48,77 @@ The logger is usually identified by a Java class name, such as
path as in `org.logstash.dissect`. For Ruby classes, like `LogStash::Outputs::Elasticsearch`,
the logger name is obtained by lowercasing the full class name and replacing double colons with a single dot.

NOTE: Consider using the default log4j configuration that is shipped with {ls}, as it is configured to work well for most deployments.
The next section describes how the rolling strategy works in case you need to make adjustments.

[[rollover]]
===== Rollover settings

The `log4j2.properties` file has three appenders for writing to log files:
one for plain text, one for JSON format, and one that splits log lines on a per-pipeline basis when you set the `pipeline.separate_logs` value.
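
In the default `log4j2.properties`, these correspond to the `rolling`, `json_rolling`, and `routing` appender definitions. A condensed excerpt, with the types taken from the shipped configuration:

[source,text]
----------------------------------
appender.rolling.type = RollingFile        <1>
appender.json_rolling.type = RollingFile   <2>
appender.routing.type = PipelineRouting    <3>
----------------------------------
<1> Plain text log file.
<2> JSON-formatted log file.
<3> Per-pipeline log files, written when `pipeline.separate_logs` is enabled.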

These appenders define:

* **triggering policies** that determine _if_ a rollover should be performed, and
* a **rollover strategy** that defines _how_ the rollover should be done.

By default, two triggering policies are defined--time and size.

* The **time** policy creates one file per day.
* The **size** policy forces the creation of a new file after the file size surpasses 100 MB.

The default strategy also limits the **maximum number of rolled files**.
When the limit of 30 files is reached, the oldest file is deleted to make room for the new one,
and the remaining files are renumbered accordingly.

Each rolled file name includes a date, and files older than 7 days (the default) are deleted during rollover.
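
With the default configuration, the rolled plain-text files end up with names like the following (the dates are illustrative):

[source,text]
----------------------------------
logstash-plain.log                    <1>
logstash-plain-2024-06-03-1.log.gz    <2>
logstash-plain-2024-06-04-1.log.gz
logstash-plain-2024-06-04-2.log.gz    <3>
----------------------------------
<1> The file currently being written.
<2> A file rolled over by the time policy at the end of the day.
<3> A second file for the same day, created because the size policy was triggered.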
Comment on lines +71 to +74

Contributor

@andsel If I'm understanding correctly, we're showing users how to override the default settings, but in the example we're showing the default settings. Is that correct?

Can we offer any guidelines on appropriate settings, other than recommending that people don't change them?

Contributor Author

The intention is to describe what the default settings are and what they accomplish. The second example demonstrates how to switch the strategy to be size-based (on the overall file size) instead of time-based (the default configuration, which rolls files over after 7 days).

Contributor

Gotcha. Thank you.

[source,text]
----------------------------------
appender.rolling.type = RollingFile <1>
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log <2>
appender.rolling.filePattern = ${sys:ls.logs}/logstash-plain-%d{yyyy-MM-dd}-%i.log.gz <3>
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <4>
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy <5>
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30 <6>
appender.rolling.strategy.action.type = Delete <7>
appender.rolling.strategy.action.basepath = ${sys:ls.logs}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = logstash-plain-* <8>
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 7D <9>
----------------------------------
<1> The appender type, which rolls over log files.
<2> Name of the current log file.
<3> Format of the rolled file names: a date followed by an incremental number, up to 30 files by default.
<4> Time policy that triggers a rollover at the end of each day.
<5> Size policy that triggers a rollover once the plain text file reaches 100 MB.
<6> Rollover strategy that keeps at most 30 rolled files.
<7> Action to execute during the rollover.
<8> The set of files considered by the action.
<9> Condition for executing the action: delete files older than 7 days.
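
For example, to keep the compressed plain-text logs for 14 days instead of the default 7, you could change only the age of the nested condition (a minimal sketch; `14D` is an illustrative value):

[source,text]
----------------------------------
appender.rolling.type = RollingFile
...
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 14D <1>
----------------------------------
<1> Deletes rolled files whose last modification is older than 14 days.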

The rollover action can also enforce a disk usage limit, deleting older files until the
requested condition is met. For example:

[source,text]
----------------------------------
appender.rolling.type = RollingFile
...
appender.rolling.strategy.action.condition.glob = pipeline_${ctx:pipeline.id}.*.log.gz
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 5MB <1>
----------------------------------
<1> Deletes files when the total accumulated size of the compressed files exceeds 5 MB.
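
The same approach applies to the other appenders. For instance, a sketch of the equivalent size-based condition for the JSON appender, reusing the glob from the default configuration:

[source,text]
----------------------------------
appender.json_rolling.type = RollingFile
...
appender.json_rolling.strategy.action.condition.glob = logstash-json-*
appender.json_rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.json_rolling.strategy.action.condition.nested_condition.exceeds = 5MB
----------------------------------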

==== Logging APIs

For temporary logging changes, modifying the `log4j2.properties` file and restarting Logstash leads to unnecessary