The prospector runs a file system watcher that looks for files specified in the paths option. At the moment only simple file system scanning is supported.
A unique identifier for this filestream input. Each filestream input must have a unique ID.
Warning: If there are multiple inputs without an ID, there will be data duplication when {beatname_lc} is restarted.
Warning: Adding an ID to an existing configuration causes all files to be read again from the beginning.
A list of glob-based paths that will be crawled and fetched. All patterns supported by Go Glob are also supported here. For example, to fetch all files from a predefined level of subdirectories, the following pattern can be used: /var/log/*/*.log. This fetches all .log files from the subfolders of /var/log. It does not fetch log files from the /var/log folder itself.
It is possible to recursively fetch all files in all subdirectories of a directory using the optional recursive_glob setting.
{beatname_uc} starts a harvester for each file that it finds under the specified paths. You can specify one path per line. Each line begins with a dash (-).
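For example, a minimal input configuration with an ID and two paths might look like this (the ID and paths are illustrative):

{beatname_lc}.inputs:
- type: {type}
  id: my-filestream-id
  paths:
    - /var/log/*.log
    - /var/log/myapp/*.log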
The scanner watches the configured paths. It scans the file system periodically and returns the file system events to the Prospector.
Enable expanding ** into recursive glob patterns. With this feature enabled, the rightmost ** in each path is expanded into a fixed number of glob patterns. For example: /foo/** expands to /foo, /foo/*, /foo/*/*, and so on. If enabled, it expands a single ** into an 8-level deep * pattern.
This feature is enabled by default. Set prospector.scanner.recursive_glob to false to disable it.
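For instance, the following sketch disables recursive glob expansion for an input:

{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.recursive_glob: false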
A list of regular expressions to match the files that you want {beatname_uc} to ignore. By default no files are excluded.
The following example configures {beatname_uc} to ignore all the files that have a gz extension:
{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.exclude_files: ['\.gz$']
See [regexp-support] for a list of supported regexp patterns.
A list of regular expressions to match the files that you want {beatname_uc} to include. If a list of regexes is provided, only the files that are allowed by the patterns are harvested.
By default no files are excluded. This option is the counterpart of prospector.scanner.exclude_files.
The following example configures {beatname_uc} to exclude files that are not under /var/log:
{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.include_files: ['^/var/log/.*']
Note: Patterns should start with ^ in case of absolute paths.
See [regexp-support] for a list of supported regexp patterns.
The symlinks option allows {beatname_uc} to harvest symlinks in addition to regular files. When harvesting symlinks, {beatname_uc} opens and reads the original file even though it reports the path of the symlink.
When you configure a symlink for harvesting, make sure the original path is excluded. If a single input is configured to harvest both the symlink and the original file, {beatname_uc} will detect the problem and only process the first file it finds. However, if two different inputs are configured (one to read the symlink and the other the original path), both paths will be harvested, causing {beatname_uc} to send duplicate data and the inputs to overwrite each other’s state.
The symlinks option can be useful if symlinks to the log files have additional metadata in the file name, and you want to process the metadata in Logstash. This is, for example, the case for Kubernetes log files.
Because this option may lead to data loss, it is disabled by default.
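A minimal sketch of enabling symlink harvesting, assuming the full scanner option name is prospector.scanner.symlinks:

{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.symlinks: true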
If this option is enabled, a file is resent if its size has not changed but its modification time has changed to a later time than before. It is disabled by default to avoid accidentally resending files.
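A sketch of enabling this behaviour, assuming the option name is prospector.scanner.resend_on_touch (verify against your {beatname_uc} version):

{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.resend_on_touch: true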
How often {beatname_uc} checks for new files in the paths that are specified for harvesting. For example, if you specify a glob like /var/log/*, the directory is scanned for files using the frequency specified by check_interval. Specify 1s to scan the directory as frequently as possible without causing {beatname_uc} to scan too frequently. We do not recommend setting this value to less than 1s.
If you require log lines to be sent in near real time, do not use a very low check_interval but adjust close.on_state_change.inactive so the file handler stays open and constantly polls your files.
The default setting is 10s.
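For example, to scan the configured paths every 15 seconds (an illustrative value):

{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.check_interval: 15s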
If this option is enabled, {beatname_uc} ignores any files that were modified before the specified timespan. Configuring ignore_older can be especially useful if you keep log files for a long time. For example, if you want to start {beatname_uc}, but only want to send the newest files and files from last week, you can configure this option.
You can use time strings like 2h (2 hours) and 5m (5 minutes). The default is 0, which disables the setting. Commenting out the config has the same effect as setting it to 0.
Important: You must set ignore_older to be greater than close.on_state_change.inactive.
The files affected by this setting fall into two categories:
- Files that were never harvested
- Files that were harvested but weren't updated for longer than ignore_older
For files which were never seen before, the offset state is set to the end of the file. If a state already exists, the offset is not changed. If a file is updated again later, reading continues at the set offset position.
The ignore_older setting relies on the modification time of the file to determine if a file is ignored. If the modification time of the file is not updated when lines are written to a file (which can happen on Windows), the ignore_older setting may cause {beatname_uc} to ignore files even though content was added at a later time.
To remove the state of previously harvested files from the registry file, use the clean_inactive configuration option.
Before a file can be ignored by {beatname_uc}, the file must be closed. To ensure a file is no longer being harvested when it is ignored, you must set ignore_older to a longer duration than close.on_state_change.inactive.
If a file that's currently being harvested falls under ignore_older, the harvester will first finish reading the file and close it after close.on_state_change.inactive is reached. After that, the file will be ignored.
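For example, the following sketch ignores files older than a week while keeping ignore_older greater than close.on_state_change.inactive (values are illustrative):

{beatname_lc}.inputs:
- type: {type}
  ...
  ignore_older: 168h
  close.on_state_change.inactive: 5m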
If this option is enabled, {beatname_uc} ignores every file that has not been updated since the selected time. Possible options are since_first_start and since_last_start. The first option ignores every file that has not been updated since the first start of {beatname_uc}. It is useful when the Beat might be restarted due to configuration changes or a failure. The second option tells the Beat to read from files that have been updated since its start.
The files affected by this setting fall into two categories:
- Files that were never harvested
- Files that were harvested but weren't updated since ignore_inactive
For files that were never seen before, the offset state is set to the end of the file. If a state already exists, the offset is not changed. If a file is updated again later, reading continues at the set offset position.
The setting relies on the modification time of the file to determine if a file is ignored. If the modification time of the file is not updated when lines are written to a file (which can happen on Windows), the setting may cause {beatname_uc} to ignore files even though content was added at a later time.
To remove the state of previously harvested files from the registry file, use the clean_inactive configuration option.
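For example, to pick up only files that have been updated since the last restart (a sketch):

{beatname_lc}.inputs:
- type: {type}
  ...
  ignore_inactive: since_last_start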
The close.* configuration options are used to close the harvester after certain criteria are met or after a certain time. Closing the harvester means closing the file handler. If a file is updated after the harvester is closed, the file will be picked up again after prospector.scanner.check_interval has elapsed. However, if the file is moved or deleted while the harvester is closed, {beatname_uc} will not be able to pick up the file again, and any data that the harvester hasn't read will be lost.
The close.on_state_change.* settings are applied asynchronously from reads on a file: even if {beatname_uc} is in a blocked state due to a blocked output, a full queue, or another issue, a file that meets the criteria will be closed regardless.
When this option is enabled, {beatname_uc} closes the file handle if a file has not been harvested for the specified duration. The counter for the defined period starts when the last log line was read by the harvester. It is not based on the modification time of the file. If the closed file changes again, a new harvester is started and the latest changes will be picked up after prospector.scanner.check_interval has elapsed.
We recommend that you set close.on_state_change.inactive to a value that is larger than the least frequent updates to your log files. For example, if your log files get updated every few seconds, you can safely set close.on_state_change.inactive to 1m. If there are log files with very different update rates, you can use multiple configurations with different values.
Setting close.on_state_change.inactive to a lower value means that file handles are closed sooner. However, this has the side effect that new log lines are not sent in near real time if the harvester is closed.
The timestamp for closing a file does not depend on the modification time of the file. Instead, {beatname_uc} uses an internal timestamp that reflects when the file was last harvested. For example, if close.on_state_change.inactive is set to 5 minutes, the countdown for the 5 minutes starts after the harvester reads the last line of the file.
You can use time strings like 2h (2 hours) and 5m (5 minutes). The default is 5m.
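For example, to close inactive file handles after one minute (an illustrative value):

{beatname_lc}.inputs:
- type: {type}
  ...
  close.on_state_change.inactive: 1m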
Warning: Only use this option if you understand that data loss is a potential side effect.
When this option is enabled, {beatname_uc} closes the file handler when a file is renamed. This happens, for example, when rotating files. By default, the harvester stays open and keeps reading the file because the file handler does not depend on the file name. If the close.on_state_change.renamed option is enabled and the file is renamed or moved in such a way that it's no longer matched by the file patterns specified for the input, the file will not be picked up again. {beatname_uc} will not finish reading the file.
Do not use this option when path based file_identity is configured. It does not make sense to enable the option, as {beatname_uc} cannot detect renames using path names as unique identifiers.
WINDOWS: If your Windows log rotation system shows errors because it can’t rotate the files, you should enable this option.
When this option is enabled, {beatname_uc} closes the harvester when a file is removed. Normally a file should only be removed after it's inactive for the duration specified by close.on_state_change.inactive. However, if a file is removed early and you don't enable close.on_state_change.removed, {beatname_uc} keeps the file open to make sure the harvester has completed. If this setting results in files that are not completely read because they are removed from disk too early, disable this option.
This option is enabled by default. If you disable this option, you must also disable clean_removed.
WINDOWS: If your Windows log rotation system shows errors because it can’t rotate files, make sure this option is enabled.
Warning: Only use this option if you understand that data loss is a potential side effect.
When this option is enabled, {beatname_uc} closes a file as soon as the end of a file is reached. This is useful when your files are only written once and not updated from time to time. For example, this happens when you are writing every single log event to a new file. This option is disabled by default.
Warning: Only use this option if you understand that data loss is a potential side effect. Another side effect is that multiline events might not be completely sent before the timeout expires.
When this option is enabled, {beatname_uc} gives every harvester a predefined lifetime. Regardless of where the reader is in the file, reading will stop after the close.reader.after_interval period has elapsed. This option can be useful for older log files when you want to spend only a predefined amount of time on the files.
While close.reader.after_interval will close the file after the predefined timeout, if the file is still being updated, {beatname_uc} will start a new harvester again per the defined prospector.scanner.check_interval, and the close.reader.after_interval countdown for the new harvester starts again.
This option is particularly useful in case the output is blocked, which makes {beatname_uc} keep open file handlers even for files that were deleted from the disk. Setting close.reader.after_interval to 5m ensures that the files are periodically closed so they can be freed up by the operating system.
If you set close.reader.after_interval to equal ignore_older, the file will not be picked up if it's modified while the harvester is closed. This combination of settings normally leads to data loss, and the complete file is not sent.
When you use close.reader.after_interval for logs that contain multiline events, the harvester might stop in the middle of a multiline event, which means that only parts of the event will be sent. If the harvester is started again and the file still exists, only the second part of the event will be sent.
This option is set to 0 by default, which means it is disabled.
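For example, to give every harvester a five-minute lifetime (an illustrative value):

{beatname_lc}.inputs:
- type: {type}
  ...
  close.reader.after_interval: 5m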
The clean_* options are used to clean up the state entries in the registry file. These settings help to reduce the size of the registry file and can prevent a potential inode reuse issue.
Warning: Only use this option if you understand that data loss is a potential side effect.
When this option is enabled, {beatname_uc} removes the state of a file after the specified period of inactivity has elapsed. The state can only be removed if the file is already ignored by {beatname_uc} (the file is older than ignore_older). The clean_inactive setting must be greater than ignore_older + prospector.scanner.check_interval to make sure that no states are removed while a file is still being harvested. Otherwise, the setting could result in {beatname_uc} resending the full content constantly because clean_inactive removes state for files that are still detected by {beatname_uc}. If a file is updated or appears again, the file is read from the beginning.
The clean_inactive configuration option is useful to reduce the size of the registry file, especially if a large number of new files are generated every day.
This config option is also useful to prevent {beatname_uc} problems resulting from inode reuse on Linux. For more information, see [inode-reuse-issue].
Note: Every time a file is renamed, the file state is updated and the counter for clean_inactive starts at 0 again.
Tip: During testing, you might notice that the registry contains state entries that should be removed based on the clean_inactive setting. This happens because {beatname_uc} doesn't remove the entries until it opens the registry again to read a different file. If you are testing the clean_inactive setting, make sure {beatname_uc} is configured to read from more than one file, or the file state will never be removed from the registry.
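Putting the constraints together, the following sketch satisfies clean_inactive > ignore_older + prospector.scanner.check_interval (values are illustrative):

{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.check_interval: 10s
  ignore_older: 48h
  clean_inactive: 72h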
When this option is enabled, {beatname_uc} cleans files from the registry if they cannot be found on disk anymore under the last known name. This also means that files which were renamed after the harvester finished will be removed. This option is enabled by default.
If a shared drive disappears for a short period and appears again, all files will be read again from the beginning because the states were removed from the registry file. In such cases, we recommend that you disable the clean_removed option.
You must disable this option if you also disable close.on_state_change.removed.
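For example, on an unreliable shared drive you might disable both options together (a sketch):

{beatname_lc}.inputs:
- type: {type}
  ...
  close.on_state_change.removed: false
  clean_removed: false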
The backoff options specify how aggressively {beatname_uc} crawls open files for updates. You can use the default values in most cases.
The backoff.init option defines how long {beatname_uc} waits for the first time before checking a file again after EOF is reached. The backoff intervals increase exponentially. The default is 2s. Thus, the file is checked after 2 seconds, then 4 seconds, then 8 seconds and so on until it reaches the limit defined in backoff.max. Every time a new line appears in the file, the backoff value is reset to backoff.init.
The maximum time for {beatname_uc} to wait before checking a file again after EOF is reached. After having backed off multiple times from checking the file, the wait time will never exceed backoff.max.
Because it then takes a maximum of 10s before the file is checked again, specifying 10s for backoff.max means that, at the worst, a new line added to the log file waits up to 10 seconds to be picked up if {beatname_uc} has backed off multiple times. The default is 10s.
Requirement: Set backoff.max to be greater than or equal to backoff.init and less than or equal to prospector.scanner.check_interval (backoff.init <= backoff.max <= prospector.scanner.check_interval).
If backoff.max needs to be higher, it is recommended to close the file handler instead and let {beatname_uc} pick up the file again.
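A sketch that tunes the backoff behaviour within the required bounds (values are illustrative):

{beatname_lc}.inputs:
- type: {type}
  ...
  prospector.scanner.check_interval: 10s
  backoff.init: 1s
  backoff.max: 10s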
Different file_identity methods can be configured to suit the environment where you are collecting log messages.
Warning: Changing file_identity methods between runs may result in duplicated events in the output.
native - The default behaviour of {beatname_uc} is to differentiate between files using their inodes and device ids.

file_identity.native: ~
path - To identify files based on their paths, use this strategy.
Warning: Only use this strategy if your log files are rotated to a folder outside of the scope of your input or not at all. Otherwise you end up with duplicated events.
Warning: This strategy does not support renaming files. If an input file is renamed, {beatname_uc} will read it again if the new path matches the settings of the input.
file_identity.path: ~
inode_marker - If the device id changes from time to time, you must use this method to distinguish files. This option is not supported on Windows.

Set the location of the marker file the following way:

file_identity.inode_marker.path: /logs/.filebeat-marker
As log files are constantly written, they must be rotated and purged to prevent the logger application from filling up the disk. Rotation is done by an external application, so {beatname_uc} needs information about how to cooperate with it.
When reading from rotating files make sure the paths configuration includes both the active file and all rotated files.
By default, {beatname_uc} is able to track files correctly with the following strategies:
- create: a new active file with a unique name is created on rotation
- rename: rotated files are renamed
However, in case of copytruncate strategy, you should provide additional configuration to {beatname_uc}.
This functionality is experimental and may be changed or removed completely in a future release.
If the log rotating application copies the contents of the active file and then truncates the original file, use these options to help {beatname_uc} to read files correctly.
Set the option suffix_regex so {beatname_uc} can tell active and rotated files apart. There are two supported suffix types in the input: numeric and date.
If your rotated files have an incrementing index appended to the end of the filename, e.g. the active file is apache.log and the rotated files are named apache.log.1, apache.log.2, etc., use the following configuration:
---
rotation.external.strategy.copytruncate:
  suffix_regex: \.\d$
---
If the rotation date is appended to the end of the filename, e.g. the active file is apache.log and the rotated files are named apache.log-20210526, apache.log-20210527, etc., use the following configuration:
---
rotation.external.strategy.copytruncate:
  suffix_regex: \-\d{8}$
  dateformat: -20060102
---