docs/reference/_structured_logging_with_log4j2.md
@@ -28,7 +28,7 @@ Using either will merge the object at the top-level (not nested under `message`)
## Tips [_tips]

-We recommend using existing [ECS fields](ecs://docs/reference/ecs-field-reference.md).
+We recommend using existing [ECS fields](ecs://reference/ecs-field-reference.md).

If there is no appropriate ECS field, consider prefixing your fields with `labels.`, as in `labels.foo`, for simple key/value pairs. For nested structures, consider prefixing with `custom.`. This approach protects against conflicts in case ECS later adds the same fields but with a different mapping.
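For example, here is a minimal sketch of that advice (the class and field names are invented for illustration, and it assumes an ECS-logging layout such as log4j2's `EcsLayout`, which serializes MDC/`ThreadContext` entries as top-level fields):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class CheckoutService {
    private static final Logger logger = LogManager.getLogger(CheckoutService.class);

    void handleOrder(String orderId) {
        // ThreadContext (MDC) entries end up as top-level fields in the ECS JSON output,
        // so the "labels." prefix keeps this simple key/value pair clear of future ECS fields.
        ThreadContext.put("labels.order_id", orderId);
        try {
            logger.info("processing order");
        } finally {
            ThreadContext.remove("labels.order_id");
        }
    }
}
```

A nested structure would go under `custom.` instead, for example `custom.order.total_cents`.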
@@ -51,7 +51,7 @@ A common pitfall is how dots in field names are handled in Elasticsearch and how
}
```

-The property `foo` would be mapped to the [Object datatype](elasticsearch://docs/reference/elasticsearch/mapping-reference/object.md).
+The property `foo` would be mapped to the [Object datatype](elasticsearch://reference/elasticsearch/mapping-reference/object.md).
This means that you can’t index a document where `foo` would be a different datatype, as shown in the following example:
-If you are using the Elastic APM Java agent, the easiest way to transform your logs into ECS-compatible JSON format is through the [`log_ecs_reformatting`](apm-agent-java://docs/reference/config-logging.md#config-log-ecs-reformatting) configuration option. By only setting this option, the Java agent will automatically import the correct ECS-logging library and configure your logging framework to use it instead of (`OVERRIDE`/`REPLACE`), or in addition to (`SHADE`), your current configuration. No other changes required! Make sure to check out other [Logging configuration options](apm-agent-java://docs/reference/config-logging.md) to unlock the full potential of this option.
+If you are using the Elastic APM Java agent, the easiest way to transform your logs into ECS-compatible JSON format is through the [`log_ecs_reformatting`](apm-agent-java://reference/config-logging.md#config-log-ecs-reformatting) configuration option. By only setting this option, the Java agent will automatically import the correct ECS-logging library and configure your logging framework to use it instead of (`OVERRIDE`/`REPLACE`), or in addition to (`SHADE`), your current configuration. No other changes required! Make sure to check out other [Logging configuration options](apm-agent-java://reference/config-logging.md) to unlock the full potential of this option.
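The option is set through the agent's usual configuration mechanisms; for instance, as the JVM system property `-Delastic.apm.log_ecs_reformatting=OVERRIDE`, as the environment variable `ELASTIC_APM_LOG_ECS_REFORMATTING=OVERRIDE`, or as a `log_ecs_reformatting=OVERRIDE` line in `elasticapm.properties`, whichever mechanism you already use for the agent.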
Otherwise, follow the steps below to manually apply ECS-formatting through your logging framework configuration. The following logging frameworks are supported:
@@ -185,9 +185,9 @@ All you have to do is to use the `co.elastic.logging.logback.EcsEncoder` instead
|`serviceEnvironment`| String || Sets the `service.environment` field so you can filter your logs by a particular service environment |
|`serviceNodeName`| String || Sets the `service.node.name` field so you can filter your logs by a particular node of your clustered service |
|`eventDataset`| String |`${serviceName}`| Sets the `event.dataset` field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
-|`includeMarkers`| boolean |`false`| Log [Markers](https://logging.apache.org/log4j/2.0/manual/markers.md) as [`tags`](ecs://docs/reference/ecs-base.md) |
-|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://docs/reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
-|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://docs/reference/ecs-log.md), [`log.origin.file.line`](ecs://docs/reference/ecs-log.md) and [`log.origin.function`](ecs://docs/reference/ecs-log.md) fields. Note that you also have to set `<includeCallerData>true</includeCallerData>` on your appenders if you are using the async ones. |
+|`includeMarkers`| boolean |`false`| Log [Markers](https://logging.apache.org/log4j/2.0/manual/markers.md) as [`tags`](ecs://reference/ecs-base.md) |
+|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
+|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://reference/ecs-log.md), [`log.origin.file.line`](ecs://reference/ecs-log.md) and [`log.origin.function`](ecs://reference/ecs-log.md) fields. Note that you also have to set `<includeCallerData>true</includeCallerData>` on your appenders if you are using the async ones. |
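To make the `includeMarkers` row concrete, here is a small illustrative sketch (the class name and marker are invented for the example): with `<includeMarkers>true</includeMarkers>` on the `EcsEncoder`, an SLF4J marker attached to a log call ends up in the ECS `tags` array of the JSON output.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class PaymentService {
    private static final Logger logger = LoggerFactory.getLogger(PaymentService.class);
    private static final Marker AUDIT = MarkerFactory.getMarker("AUDIT");

    void charge(String accountId) {
        // With includeMarkers enabled on the EcsEncoder, the "AUDIT" marker
        // is serialized into the "tags" field of the ECS JSON line.
        logger.info(AUDIT, "charging account {}", accountId);
    }
}
```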
To include any custom field in the output, use the following syntax:
@@ -235,9 +235,9 @@ Instead of the usual `<PatternLayout/>`, use `<EcsLayout serviceName="my-app"/>`
|`serviceEnvironment`| String || Sets the `service.environment` field so you can filter your logs by a particular service environment |
|`serviceNodeName`| String || Sets the `service.node.name` field so you can filter your logs by a particular node of your clustered service |
|`eventDataset`| String |`${serviceName}`| Sets the `event.dataset` field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
-|`includeMarkers`| boolean |`false`| Log [Markers](https://logging.apache.org/log4j/2.0/manual/markers.md) as [`tags`](ecs://docs/reference/ecs-base.md) |
-|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://docs/reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
-|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://docs/reference/ecs-log.md) fields. Note that you also have to set `includeLocation="true"` on your loggers and appenders if you are using the async ones. |
+|`includeMarkers`| boolean |`false`| Log [Markers](https://logging.apache.org/log4j/2.0/manual/markers.md) as [`tags`](ecs://reference/ecs-base.md) |
+|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
+|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://reference/ecs-log.md) fields. Note that you also have to set `includeLocation="true"` on your loggers and appenders if you are using the async ones. |
To include any custom field in the output, use the following syntax:
@@ -300,8 +300,8 @@ Instead of the usual layout class `"org.apache.log4j.PatternLayout"`, use `"co.e
|`serviceEnvironment`| String || Sets the `service.environment` field so you can filter your logs by a particular service environment |
|`serviceNodeName`| String || Sets the `service.node.name` field so you can filter your logs by a particular node of your clustered service |
|`eventDataset`| String |`${serviceName}`| Sets the `event.dataset` field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
-|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://docs/reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
-|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://docs/reference/ecs-log.md) fields. Note that you also have to set `<param name="LocationInfo" value="true"/>` if you are using `AsyncAppender`. |
+|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
+|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://reference/ecs-log.md) fields. Note that you also have to set `<param name="LocationInfo" value="true"/>` if you are using `AsyncAppender`. |
To include any custom field in the output, use the following syntax:
|`serviceEnvironment`| String || Sets the `service.environment` field so you can filter your logs by a particular service environment |
|`serviceNodeName`| String || Sets the `service.node.name` field so you can filter your logs by a particular node of your clustered service |
|`eventDataset`| String |`${serviceName}`| Sets the `event.dataset` field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
-|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://docs/reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex Filebeat configuration. |
-|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://docs/reference/ecs-log.md) fields. Note that JUL does not store the line number, so `log.origin.file.line` will have the value *1*. |
+|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex Filebeat configuration. |
+|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://reference/ecs-log.md) fields. Note that JUL does not store the line number, so `log.origin.file.line` will have the value *1*. |
|`additionalFields`| String || Adds additional static fields to all log events. The fields are specified as comma-separated key-value pairs. Example: `co.elastic.logging.jul.EcsFormatter.additionalFields=key1=value1,key2=value2`. |
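To show how the JUL `EcsFormatter` from the table above gets attached, here is a minimal illustrative sketch (the bootstrap class name is invented, and it assumes the formatter's usual no-argument constructor that JUL requires for formatters configured by class name); the `logging.properties` keys shown in the table remain the usual configuration route.

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Logger;

import co.elastic.logging.jul.EcsFormatter;

public final class JulEcsBootstrap {
    public static void install() {
        // Attach an ECS formatter to a console handler on the root logger,
        // so every JUL record is written as an ECS JSON line.
        Handler handler = new ConsoleHandler();
        handler.setFormatter(new EcsFormatter());
        Logger.getLogger("").addHandler(handler);
    }
}
```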
|`serviceEnvironment`| String || Sets the `service.environment` field so you can filter your logs by a particular service environment |
|`serviceNodeName`| String || Sets the `service.node.name` field so you can filter your logs by a particular node of your clustered service |
|`eventDataset`| String |`${serviceName}`| Sets the `event.dataset` field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
-|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://docs/reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
-|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://docs/reference/ecs-log.md) fields. |
+|`stackTraceAsArray`| boolean |`false`| Serializes the [`error.stack_trace`](ecs://reference/ecs-error.md) as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex [Filebeat configuration](#setup-stack-trace-as-array). |
+|`includeOrigin`| boolean |`false`| If `true`, adds the [`log.origin.file.name`](ecs://reference/ecs-log.md) fields. |
|`additionalFields`| String || Adds additional static fields to all log events. The fields are specified as comma-separated key-value pairs. Example: `additionalFields=key1=value1,key2=value2`. |

::::::
@@ -386,7 +386,7 @@ If you’re using the Elastic APM Java agent, log correlation is enabled by defa
:::::::{tab-set}

::::::{tab-item} Log file
-1. Follow the [Filebeat quick start](beats://docs/reference/filebeat/filebeat-installation-configuration.md)
+1. Follow the [Filebeat quick start](beats://reference/filebeat/filebeat-installation-configuration.md)
2. Add the following configuration to your `filebeat.yaml` file.

For Filebeat 7.16+
@@ -412,7 +412,7 @@ processors: <5>
2. Values from the decoded JSON object overwrite the fields that {{filebeat}} normally adds (type, source, offset, etc.) in case of conflicts.
3. {{filebeat}} adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
4. {{filebeat}} will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
-5. Processors enhance your data. See [processors](beats://docs/reference/filebeat/filtering-enhancing-data.md) to learn more.
+5. Processors enhance your data. See [processors](beats://reference/filebeat/filtering-enhancing-data.md) to learn more.

For Filebeat < 7.16
@@ -436,8 +436,8 @@ processors:
::::::{tab-item} Kubernetes
1. Make sure your application logs to stdout/stderr.
-2. Follow the [Run Filebeat on Kubernetes](beats://docs/reference/filebeat/running-on-kubernetes.md) guide.
-3. Enable [hints-based autodiscover](beats://docs/reference/filebeat/configuration-autodiscover-hints.md) (uncomment the corresponding section in `filebeat-kubernetes.yaml`).
+2. Follow the [Run Filebeat on Kubernetes](beats://reference/filebeat/running-on-kubernetes.md) guide.
+3. Enable [hints-based autodiscover](beats://reference/filebeat/configuration-autodiscover-hints.md) (uncomment the corresponding section in `filebeat-kubernetes.yaml`).
4. Add these annotations to your pods that log using ECS loggers. This will make sure the logs are parsed appropriately.

```yaml
@@ -454,8 +454,8 @@ annotations:
::::::{tab-item} Docker
1. Make sure your application logs to stdout/stderr.
-2. Follow the [Run Filebeat on Docker](beats://docs/reference/filebeat/running-on-docker.md) guide.