| Status | |
| --- | --- |
| Stability | alpha: logs |
| Distributions | [] |
| Issues | |
| Code Owners | @BinaryFissionGames, @MikeGoldsmith, @djaglowski |
This processor is used to deduplicate logs by detecting identical logs over a range of time and emitting a single log with the count of logs that were deduplicated.

- The user configures the log deduplication processor in the desired logs pipeline.
- If the processor does not provide `conditions`, all logs are considered eligible for aggregation. If the processor does have configured `conditions`, all log entries where at least one of the `conditions` evaluates `true` are considered eligible for aggregation. Eligible identical logs are aggregated over the configured `interval`. Logs are considered identical if they have the same body, resource attributes, severity, and log attributes. Logs that do not match any condition in `conditions` are passed onward in the pipeline without aggregating.
- After the interval, the processor emits a single log with the count of logs that were deduplicated. The emitted log will have the same body, resource attributes, severity, and log attributes as the original log. The emitted log will also have the following new attributes:
    - `log_count`: The count of logs that were deduplicated over the interval. The name of the attribute is configurable via the `log_count_attribute` parameter.
    - `first_observed_timestamp`: The timestamp of the first log that was observed during the aggregation interval.
    - `last_observed_timestamp`: The timestamp of the last log that was observed during the aggregation interval.
**Note**: The `ObservedTimestamp` and `Timestamp` of the emitted log will be the time that the aggregated log was emitted and will not be the same as the `ObservedTimestamp` and `Timestamp` of the original logs.
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| interval | duration | `10s` | The interval at which logs are aggregated. The counter will reset after each interval. |
| conditions | []string | `[]` | A slice of OTTL expressions used to evaluate which log records are deduped. All paths in the log context are available to reference. All converters are available to use. |
| log_count_attribute | string | `log_count` | The name of the count attribute of deduplicated logs that will be added to the emitted aggregated log. |
| timezone | string | `UTC` | The timezone of the `first_observed_timestamp` and `last_observed_timestamp` timestamps on the emitted aggregated log. The available locations depend on the local IANA Time Zone database. [This page](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) contains many examples, such as `America/New_York`. |
| exclude_fields | []string | `[]` | Fields to exclude from duplication matching. Fields can be excluded from the log `body` or `attributes`. These fields will not be present in the emitted aggregated log. Nested fields must be `.` delimited. If a field contains a `.`, it can be escaped by using a `\`; see the example config below. **Note**: The entire body cannot be excluded. If the body is a map, then fields within it can be excluded. |
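Since every field has a default, a minimal sketch that enables the processor as-is (10s interval, `log_count` attribute, UTC timestamps, no conditions or excluded fields) could look like this, assuming an otherwise complete pipeline:

```yaml
processors:
  # all fields are optional; the defaults from the table above apply
  logdedup:
```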
The following config is an example configuration for the log deduplication processor. It is configured with an aggregation `interval` of `60s`, a timezone of `America/Los_Angeles`, and a log count attribute of `dedup_count`. It has no fields being excluded.
```yaml
receivers:
  filelog:
    include: [./example/*.log]

processors:
  logdedup:
    interval: 60s
    log_count_attribute: dedup_count
    timezone: 'America/Los_Angeles'

exporters:
  googlecloud:

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [logdedup]
      exporters: [googlecloud]
```
The following config is an example configuration that excludes the following fields from being considered when searching for duplicate logs:

- the `timestamp` field from the body
- the `host.name` field from attributes
- the `ip` nested attribute inside a map attribute named `src`
```yaml
receivers:
  filelog:
    include: [./example/*.log]

processors:
  logdedup:
    exclude_fields:
      - body.timestamp
      - attributes.host\.name
      - attributes.src.ip

exporters:
  googlecloud:

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [logdedup]
      exporters: [googlecloud]
```
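To illustrate the matching semantics under this config: two records whose bodies differ only in the excluded `timestamp` field count as duplicates, and per the table above the excluded fields are not present in the emitted record. A sketch with hypothetical body values:

```yaml
# Hypothetical input record 1 body: {timestamp: "12:00:01", msg: "disk full"}
# Hypothetical input record 2 body: {timestamp: "12:00:02", msg: "disk full"}
#
# Sketch of the emitted record: the records match after the excluded field
# is ignored, and the excluded field is dropped from the output.
body:
  msg: "disk full"
attributes:
  log_count: 2
```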
The following config is an example configuration that only performs the deduplication process on telemetry where the attribute `ID` equals `1` OR where the resource attribute `service.name` equals `my-service`:
```yaml
receivers:
  filelog:
    include: [./example/*.log]

processors:
  logdedup:
    conditions:
      - attributes["ID"] == 1
      - resource.attributes["service.name"] == "my-service"
    interval: 60s
    log_count_attribute: dedup_count
    timezone: 'America/Los_Angeles'

exporters:
  googlecloud:

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [logdedup]
      exporters: [googlecloud]
```
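Because all OTTL converters are available in `conditions`, matching is not limited to simple equality. A hypothetical variant using the standard `IsMatch` converter to deduplicate only records whose body matches a pattern (the pattern here is illustrative):

```yaml
processors:
  logdedup:
    conditions:
      # dedupe only records whose body contains "error" (hypothetical pattern)
      - IsMatch(body, ".*error.*")
```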