Add docs for new central log source setting #4384

Merged · 2 commits · Oct 18, 2024
33 changes: 15 additions & 18 deletions docs/en/observability/categorize-logs.asciidoc
@@ -6,8 +6,8 @@ log messages are the same or very similar, so classifying them can reduce
millions of log lines into just a few categories.

Within the {logs-app}, the *Categories* page enables you to identify patterns in
your log events quickly. Instead of manually identifying similar logs, the logs
categorization view lists log events that have been grouped based on their
messages and formats so that you can take action quicker.

NOTE: This feature makes use of {ml} {anomaly-jobs}. To set up jobs, you must
@@ -25,47 +25,44 @@ more details, refer to {ml-docs}/setup.html[Set up {ml-features}].

Create a {ml} job to categorize log messages automatically. {ml-cap} observes
the static parts of the message, clusters similar messages, classifies them into
message categories, and detects unusually high message counts in the categories.

[role="screenshot"]
image::images/log-create-categorization-job.jpg[Configure log categorization job]
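
Under the hood, the job created here is a standard {ml} anomaly detection job that uses categorization. The sketch below is only illustrative: the job ID, host, credentials, and `message` field are placeholders, and the configuration the {logs-app} actually generates may differ.

[source,sh]
----
# Illustrative only: an anomaly detection job that categorizes log messages
# and counts events per category. Job ID, host, credentials, and field names
# are placeholders; the job created by the Logs UI may look different.
curl -X PUT "http://localhost:9200/_ml/anomaly_detectors/example-log-categorization" \
  -H 'Content-Type: application/json' -u elastic:changeme -d'
{
  "analysis_config": {
    "bucket_span": "15m",
    "categorization_field_name": "message",
    "detectors": [
      {
        "function": "count",
        "by_field_name": "mlcategory",
        "detector_description": "Unusual message counts per category"
      }
    ]
  },
  "data_description": { "time_field": "@timestamp" }
}'
----

A datafeed pointing at your log indices then supplies documents to the job; the *Categories* page sets up both pieces for you.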

// lint ignore ml
1. Select *Categories*, and you are prompted to use {ml} to create
log rate categorizations.
2. Choose a time range for the {ml} analysis. By default, the {ml} job analyzes
log messages no older than four weeks and continues indefinitely.
3. Add the indices that contain the logs you want to examine.
3. Add the indices that contain the logs you want to examine. By default, {ml-cap} analyzes messages in all log indices that match the patterns set in the *logs source* advanced setting. Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _logs source_, or use the {kib} settings API as sketched after this procedure.
4. Click *Create ML job*. The job is created, and it starts to run. It takes a few
minutes for the {ml} robots to collect the necessary data. After the job has
processed the data, you can view the results.
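
If you want to check this setting outside the UI, the {kib} advanced settings API can read its current value. A minimal sketch, assuming the setting key is `observability:logSources` and a local {kib} instance (confirm the key name for your version):

[source,sh]
----
# Read Kibana advanced settings and extract the assumed log sources key.
# The key name (observability:logSources), host, and credentials are
# assumptions; verify them before relying on this in scripts.
curl -s -u elastic:changeme "http://localhost:5601/api/kibana/settings" \
  | jq '.settings["observability:logSources"]'
----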

[discrete]
[[analyze-log-categories]]
== Analyze log categories

The *Categories* page lists all the log categories from the selected indices.
You can filter the categories by indices. The screenshot below shows the
categories from the `elastic.agent` log.

[role="screenshot"]
image::images/log-categories.jpg[Log categories]

The category row contains the following information:

* message count: shows how many messages belong to the given category.
* trend: indicates how the occurrence of the messages changes in time.
* category name: the name of the category, derived from the message
text.
* datasets: the name of the datasets where the categories are present.
* maximum anomaly score: the highest anomaly score in the category.

To view a log message under a particular category, click
the arrow at the end of the row. To further examine a message, you can view it in the
corresponding log event on the *Stream* page or display it in its context.

[role="screenshot"]
image::images/log-opened.png[Opened log category]

For more information about categorization, go to
{ml-docs}/ml-configuring-categories.html[Detecting anomalous categories of data].
37 changes: 13 additions & 24 deletions docs/en/observability/configure-logs-sources.asciidoc
@@ -4,9 +4,8 @@
Specify the source configuration for logs in the
{kibana-ref}/logs-ui-settings-kb.html[{logs-app} settings] in the
{kibana-ref}/settings.html[{kib} configuration file].
By default, the configuration uses the `filebeat-*` index pattern to query the data.
The configuration also defines field settings for things like timestamps
and container names, and the default columns displayed in the logs stream.
By default, the configuration uses the index patterns stored in the {kib} log sources advanced setting to query the data.
The configuration also defines the default columns displayed in the logs stream.

If your logs have custom index patterns, use non-default field settings, or contain
parsed fields that you want to expose as individual columns, you can override the
@@ -20,32 +19,22 @@ default configuration settings.
+
. Click *Settings*.
+
|===

| *Name* | Name of the source configuration.

| *{ipm-cap}* | {kib} index patterns or index name patterns in the {es} indices
to read log data from.

Each log source now integrates with {kib} index patterns which support creating and
querying {kibana-ref}/managing-data-views.html[runtime fields]. You can continue
to use log sources configured to use an index name pattern, such as `filebeat-*`,
instead of a {kib} index pattern. However, some features like those depending on
runtime fields may not be available.
| *{kib} log sources advanced setting* | Use index patterns stored in the {kib} *log sources* advanced setting, which provides a centralized place to store and query log index patterns.
Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _logs sources_. An example value is shown after this procedure.

Instead of entering an index pattern name,
click *Use {kib} index patterns* and select the `filebeat-*` log index pattern.

| *{data-source-cap}* | This is a new configuration option that can be used
instead of index pattern. The Logs UI can now integrate with {data-sources} to
| *{data-source-cap} (deprecated)* | The Logs UI integrates with {data-sources} to
configure the used indices by clicking *Use {data-sources}*.

| *Fields* | Configuring fields input has been deprecated. You should adjust your indexing using the
<<logs-app-fields,{logs-app} fields>>, which use the {ecs-ref}/index.html[Elastic Common Schema (ECS) specification].
| *Log indices (deprecated)* | {kib} index patterns or index name patterns in the {es} indices
to read log data from.

| *Log columns* | Columns that are displayed in the logs *Stream* page.

|===
+
. When you have completed your changes, click *Apply*.
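
For illustration, the *log sources* advanced setting holds a comma-separated list of index patterns along these lines (example values only; your environment may use different patterns):

[source,txt]
----
logs-*-*,logs-*,filebeat-*
----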

@@ -63,16 +52,16 @@ with other data source configurations.

By default, the *Stream* page within the {logs-app} displays the following columns.

|===

| *Timestamp* | The timestamp of the log entry from the `timestamp` field.

| *Message* | The message extracted from the document.
The content of this field depends on the type of log message.
If no special log message type is detected, the {ecs-ref}/ecs-base.html[Elastic Common Schema (ECS)]
base field, `message`, is used.

|===

1. To add a new column to the logs stream, select *Settings > Add column*.
2. In the list of available fields, select the field you want to add.
4 changes: 3 additions & 1 deletion docs/en/observability/explore-logs.asciidoc
@@ -22,7 +22,9 @@ Viewing data in Logs Explorer requires `read` privileges for *Discover* and *Int
[[find-your-logs]]
== Find your logs

By default, Logs Explorer shows all of your logs.
By default, Logs Explorer shows all of your logs, according to the index patterns set in the *logs source* advanced setting.
Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _logs source_.
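
You can also change this setting programmatically through the {kib} advanced settings API. A minimal sketch, assuming the setting key is `observability:logSources` and a comma-separated string value (verify the key name and expected value format for your {kib} version):

[source,sh]
----
# Update the assumed log sources advanced setting via the Kibana settings API.
# The key name, value format, host, and credentials are assumptions; confirm
# them in Advanced Settings before using this in automation.
curl -X POST "http://localhost:5601/api/kibana/settings" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -u elastic:changeme \
  -d '{"changes": {"observability:logSources": "logs-custom-*,filebeat-*"}}'
----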

If you need to focus on logs from a specific integration, select the integration from the logs menu:

[role="screenshot"]
2 changes: 1 addition & 1 deletion docs/en/observability/inspect-log-anomalies.asciidoc
@@ -35,7 +35,7 @@ Create a {ml} job to detect anomalous log entry rates automatically.

1. Select *Anomalies*, and you'll be prompted to create a {ml} job which will carry out the log rate analysis.
2. Choose a time range for the {ml} analysis.
3. Add the Indices that contain the logs you want to analyze.
3. Add the indices that contain the logs you want to examine. By default, {ml-cap} analyzes messages in all log indices that match the patterns set in the *logs source* advanced setting. Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _logs source_.
4. Click *Create {ml-init} job*.
5. You're now ready to explore your log partitions.

4 changes: 3 additions & 1 deletion docs/en/serverless/logging/view-and-monitor-logs.mdx
@@ -22,7 +22,9 @@ For more on assigning Kibana privileges, refer to the [((kib)) privileges](((kib

## Find your logs

By default, Logs Explorer shows all of your logs.
By default, Logs Explorer shows all of your logs according to the index patterns set in the **logs source** advanced setting.
Update this setting by going to **Management** → **Advanced Settings** and searching for _logs source_.

If you need to focus on logs from a specific integration, select the integration from the logs menu:

<DocImage size="l" url="../images/log-menu.png" alt="Screen capture of log menu" />