Merge branch 'main' into enable-execution-context
kibanamachine authored Nov 30, 2021
2 parents 46fe0b7 + 57134d4 commit 554b2fe
Showing 353 changed files with 6,631 additions and 1,807 deletions.
2 changes: 1 addition & 1 deletion config/kibana.yml
@@ -99,7 +99,7 @@

# Logs queries sent to Elasticsearch.
#logging.loggers:
# - name: elasticsearch.queries
# - name: elasticsearch.query
# level: debug

# Logs http responses.
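The hunk above renames the example logger from `elasticsearch.queries` to `elasticsearch.query`. Uncommented, the corrected snippet would look like this (a minimal sketch assembled only from the commented lines in this diff):

```yaml
# kibana.yml — log queries sent to Elasticsearch at debug level
logging:
  loggers:
    - name: elasticsearch.query
      level: debug
```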
6 changes: 6 additions & 0 deletions dev_docs/tutorials/data/search.mdx
@@ -129,6 +129,12 @@ setTimeout(() => {
}, 1000);
```

<DocCallOut color="danger" title="Cancel your searches if results are no longer needed">
Users might no longer be interested in search results. For example, they might start a new search
or leave your app without waiting for the results. You should handle such cases by using
`AbortController` with search API.
</DocCallOut>
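For illustration, a minimal TypeScript sketch of this pattern, assuming the `data` plugin's search service is in scope as `data` (the request shape and index name are illustrative, not from this commit):

```ts
const abortController = new AbortController();

// Tie the in-flight search to the controller so it can be cancelled.
const search$ = data.search.search(
  { params: { index: 'my-index', body: { query: { match_all: {} } } } },
  { abortSignal: abortController.signal }
);

const subscription = search$.subscribe({
  next: (response) => console.log('search response', response),
  error: (error) => console.error('search failed or was aborted', error),
});

// Call these when the user starts a new search or leaves your app:
abortController.abort();
subscription.unsubscribe();
```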

#### Search strategies

By default, the search service uses the DSL query and aggregation syntax and returns the response from Elasticsearch as is. It also provides several additional basic strategies, such as Async DSL (`x-pack` default) and EQL.
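To pick one of those non-default strategies, the same `search` call accepts a strategy name (hedged sketch — the `'eql'` strategy id and the request shape are assumptions for illustration):

```ts
// Request the EQL strategy instead of the default DSL strategy.
const eqlSearch$ = data.search.search(
  { params: { index: 'my-index', body: { query: 'process where true' } } },
  { strategy: 'eql' } // strategy id assumed from the EQL strategy named above
);
```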
12 changes: 7 additions & 5 deletions docs/api/saved-objects/import.asciidoc
@@ -11,11 +11,13 @@ Saved objects can only be imported into the same version, a newer minor on the s

|=======
| Exporting version | Importing version | Compatible?
| 6.7.0 | 6.8.1 | Yes
| 6.8.1 | 7.3.0 | Yes
| 7.3.0 | 7.11.1 | Yes
| 7.11.1 | 7.6.0 | No
| 6.8.1 | 8.0.0 | No
| 6.7.x | 6.8.x | Yes
| 6.x.x | 7.x.x | Yes
| 7.x.x | 8.x.x | Yes
| 7.1.x | 7.15.x | Yes
| 7.x.x | 6.x.x | No
| 7.15.x | 7.1.x | No
| 6.x.x | 8.x.x | No
|=======
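As a point of reference for the API this table documents, an import request looks roughly like the following (hedged sketch — the host, query flag, and file name are illustrative):

[source,sh]
--------------------------------------------------
$ curl -X POST "localhost:5601/api/saved_objects/_import?createNewCopies=true" \
  -H "kbn-xsrf: true" \
  --form file=@export.ndjson
--------------------------------------------------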

[[saved-objects-api-import-request]]
8 changes: 4 additions & 4 deletions docs/concepts/data-views.asciidoc
@@ -1,7 +1,7 @@
[[data-views]]
=== Create a data view

{kib} requires a data view to access the {es} data that you want to explore.
{kib} requires a data view to access the {es} data that you want to explore.
A data view selects the data to use and allows you to define properties of the fields.

A data view can point to one or more indices, {ref}/data-streams.html[data stream], or {ref}/alias.html[index aliases].
@@ -37,7 +37,7 @@ If you loaded your own data, follow these steps to create a data view.
. Click *Create data view*.

[role="screenshot"]
image:management/index-patterns/images/create-index-pattern.png["Create data view"]
image:management/index-patterns/images/create-data-view.png["Create data view"]

. Start typing in the *name* field, and {kib} looks for the names of
indices, data streams, and aliases that match your input.
@@ -87,11 +87,11 @@ For an example, refer to <<rollup-data-tutorial,Create and visualize rolled up d
==== Create a data view that searches across clusters

If your {es} clusters are configured for {ref}/modules-cross-cluster-search.html[{ccs}],
you can create an index pattern to search across the clusters of your choosing. Use the
you can create a {data-source} to search across the clusters of your choosing. Use the
same syntax that you use in a raw {ccs} request in {es}:

```ts
<cluster-names>:<pattern>
<cluster-names>:<data-view>
```

To query {ls} indices across two {es} clusters
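As a hedged illustration of this syntax, assuming two clusters named `cluster_one` and `cluster_two`, the pattern would be:

```ts
cluster_one:logstash-*,cluster_two:logstash-*
```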
5 changes: 1 addition & 4 deletions docs/concepts/index.asciidoc
@@ -40,8 +40,6 @@ image:concepts/images/global-search.png["Global search showing matches to apps a
{kib} requires a data view to tell it which {es} data you want to access,
and whether the data is time-based. A data view can point to one or more {es}
data streams, indices, or index aliases by name.
For example, `logs-elasticsearch-prod-*` is an index pattern,
and it is time-based with a time field of `@timestamp`. The time field is not editable.

Data views are typically created by an administrator when sending data to {es}.
You can <<data-views,create or update data views>> in *Stack Management*, or by using a script
@@ -129,8 +127,7 @@ Previously, {kib} used the {ref}/search-aggregations-bucket-terms-aggregation.ht
Structured filters are a more interactive way to create {es} queries,
and are commonly used when building dashboards that are shared by multiple analysts.
Each filter can be disabled, inverted, or pinned across all apps.
The structured filters are the only way to use the {es} Query DSL in JSON form,
or to target a specific index pattern for filtering. Each of the structured
Each of the structured
filters is combined with AND logic on the rest of the query.

[role="screenshot"]
2 changes: 1 addition & 1 deletion docs/concepts/save-query.asciidoc
@@ -17,7 +17,7 @@ image:concepts/images/saved-query.png["Example of the saved query management pop

Saved queries are different than <<save-open-search,saved searches>>,
which include the *Discover* configuration&mdash;selected columns in the document table, sort order, and
index pattern&mdash;in addition to the query.
{data-source}&mdash;in addition to the query.
Saved searches are primarily used for adding search results to a dashboard.

[role="xpack"]
2 changes: 1 addition & 1 deletion docs/concepts/set-time-filter.asciidoc
@@ -2,7 +2,7 @@
=== Set the time range
Display data within a
specified time range when your index contains time-based events, and a time-field is configured for the
selected <<data-views, data view>>.
selected <<data-views, {data-source}>>.
The default time range is 15 minutes, but you can customize
it in <<advanced-options,Advanced Settings>>.

@@ -24,6 +24,9 @@ readonly links: {
readonly canvas: {
readonly guide: string;
};
readonly cloud: {
readonly indexManagement: string;
};
readonly dashboard: {
readonly guide: string;
readonly drilldowns: string;
@@ -55,10 +58,64 @@ readonly links: {
readonly install: string;
readonly start: string;
};
readonly appSearch: {
readonly apiRef: string;
readonly apiClients: string;
readonly apiKeys: string;
readonly authentication: string;
readonly crawlRules: string;
readonly curations: string;
readonly duplicateDocuments: string;
readonly entryPoints: string;
readonly guide: string;
readonly indexingDocuments: string;
readonly indexingDocumentsSchema: string;
readonly logSettings: string;
readonly metaEngines: string;
readonly nativeAuth: string;
readonly precisionTuning: string;
readonly relevanceTuning: string;
readonly resultSettings: string;
readonly searchUI: string;
readonly security: string;
readonly standardAuth: string;
readonly synonyms: string;
readonly webCrawler: string;
readonly webCrawlerEventLogs: string;
};
readonly enterpriseSearch: {
readonly base: string;
readonly appSearchBase: string;
readonly workplaceSearchBase: string;
readonly configuration: string;
readonly licenseManagement: string;
readonly mailService: string;
readonly usersAccess: string;
};
readonly workplaceSearch: {
readonly box: string;
readonly confluenceCloud: string;
readonly confluenceServer: string;
readonly customSources: string;
readonly customSourcePermissions: string;
readonly documentPermissions: string;
readonly dropbox: string;
readonly externalIdentities: string;
readonly gitHub: string;
readonly gettingStarted: string;
readonly gmail: string;
readonly googleDrive: string;
readonly indexingSchedule: string;
readonly jiraCloud: string;
readonly jiraServer: string;
readonly nativeAuth: string;
readonly oneDrive: string;
readonly permissions: string;
readonly salesforce: string;
readonly security: string;
readonly serviceNow: string;
readonly sharePoint: string;
readonly slack: string;
readonly standardAuth: string;
readonly synch: string;
readonly zendesk: string;
};
readonly heartbeat: {
readonly base: string;
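These keys land on Kibana's documentation-links registry. A hedged sketch of how a plugin might read one of the newly added links at runtime (the `docLinks` start contract exists in core; the specific key and helper are illustrative):

```ts
import type { CoreStart } from 'src/core/public';

export function getWebCrawlerDocsUrl(core: CoreStart): string {
  // `appSearch.webCrawler` is one of the keys added in this hunk.
  return core.docLinks.links.appSearch.webCrawler;
}
```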

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions docs/getting-started/quick-start-guide.asciidoc
@@ -11,7 +11,7 @@ When you've finished, you'll know how to:

[float]
=== Required privileges
You must have `read`, `write`, and `manage` privileges on the `kibana_sample_data_*` indices.
You must have `read`, `write`, and `manage` privileges on the `kibana_sample_data_*` indices.
Learn how to <<tutorial-secure-access-to-kibana, secure access to {kib}>>, or refer to {ref}/security-privileges.html[Security privileges] for more information.

[float]
@@ -37,7 +37,7 @@ image::images/addData_sampleDataCards_7.15.0.png[Add data UI for the sample data
[[explore-the-data]]
== Explore the data

*Discover* displays the data in an interactive histogram that shows the distribution of data, or documents, over time, and a table that lists the fields for each document that matches the index pattern. To view a subset of the documents, you can apply filters to the data, and customize the table to display only the fields you want to explore.
*Discover* displays the data in an interactive histogram that shows the distribution of data, or documents, over time, and a table that lists the fields for each document that matches the {data-source}. To view a subset of the documents, you can apply filters to the data, and customize the table to display only the fields you want to explore.

. Open the main menu, then click *Discover*.

@@ -65,7 +65,7 @@ image::images/tutorial-discover-3.png[Discover table that displays only the prod

A dashboard is a collection of panels that you can use to view and analyze the data. Panels contain visualizations, interactive controls, text, and more.

. Open the main menu, then click *Dashboard*.
. Open the main menu, then click *Dashboard*.

. Click *[eCommerce] Revenue Dashboard*.
+
@@ -104,7 +104,7 @@ The treemap appears as the last visualization panel on the dashboard.
[[interact-with-the-data]]
=== Interact with the data

You can interact with the dashboard data using controls that allow you to apply dashboard-level filters. Interact with the *[eCommerce] Controls* panel to view the women's clothing data from the Gnomehouse manufacturer.
You can interact with the dashboard data using controls that allow you to apply dashboard-level filters. Interact with the *[eCommerce] Controls* panel to view the women's clothing data from the Gnomehouse manufacturer.

. From the *Manufacturer* dropdown, select *Gnomehouse*.

Binary file not shown.
8 changes: 4 additions & 4 deletions docs/migration/migrate_8_0.asciidoc
@@ -65,7 +65,7 @@ If you are currently using one of these settings in your Kibana config, please r
==== Default logging timezone is now the system's timezone
*Details:* In prior releases the timezone used in logs defaulted to UTC. We now use the host machine's timezone by default.

*Impact:* To restore the previous behavior, in kibana.yml use the pattern layout, with a date modifier:
*Impact:* To restore the previous behavior, in kibana.yml use the pattern layout, with a {kibana-ref}/logging-configuration.html#date-format[date modifier]:
[source,yaml]
-------------------
logging:
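  # The rest of this example is collapsed in the diff view. What follows is a
  # hedged reconstruction sketch — the appender name, pattern string, and
  # timezone are illustrative, not the verbatim committed example.
  appenders:
    console:
      type: console
      layout:
        type: pattern
        pattern: "[%date{ISO8601_TZ}{UTC}][%level][%logger] %message"
  root:
    appenders: [console]
-------------------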
@@ -100,7 +100,7 @@ See https://github.com/elastic/kibana/pull/87939 for more details.

[float]
==== Logging destination is specified by the appender
*Details:* Previously log destination would be `stdout` and could be changed to `file` using `logging.dest`. With the new logging configuration, you can specify the destination using appenders.
*Details:* Previously log destination would be `stdout` and could be changed to `file` using `logging.dest`. With the new logging configuration, you can specify the destination using {kibana-ref}/logging-configuration.html#logging-appenders[appenders].

*Impact:* To restore the previous behavior and log records to *stdout*, in `kibana.yml` use an appender with `type: console`.
[source,yaml]
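-------------------
logging:
  # Collapsed in the diff view — hedged sketch only: an appender with
  # type: console restores logging to stdout. The appender name is illustrative.
  appenders:
    console_out:
      type: console
      layout:
        type: pattern
  root:
    appenders: [console_out]
-------------------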
@@ -131,7 +131,7 @@ logging:

[float]
==== Set log verbosity with root
*Details:* Previously logging output would be specified by `logging.silent` (none), `logging.quiet` (error messages only) and `logging.verbose` (all). With the new logging configuration, set the minimum required log level.
*Details:* Previously logging output would be specified by `logging.silent` (none), `logging.quiet` (error messages only) and `logging.verbose` (all). With the new logging configuration, set the minimum required {kibana-ref}/logging-configuration.html#log-level[log level].

*Impact:* To restore the previous behavior, in `kibana.yml` specify `logging.root.level`:
[source,yaml]
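-------------------
logging:
  # Collapsed in the diff view — hedged sketch only: the old flags map onto
  # root.level ("all" ~ logging.verbose, "error" ~ logging.quiet,
  # "off" ~ logging.silent).
  root:
    level: all
-------------------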
@@ -188,7 +188,7 @@
==== Configure log rotation with the rolling-file appender
*Details:* Previously log rotation would be enabled when `logging.rotate.enabled` was true.

*Impact:* To restore the previous behavior, in `kibana.yml` use the `rolling-file` appender.
*Impact:* To restore the previous behavior, in `kibana.yml` use the {kibana-ref}/logging-configuration.html#rolling-file-appender[`rolling-file`] appender.

[source,yaml]
-------------------
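logging:
  # Collapsed in the diff view — hedged sketch only. A rolling-file appender
  # with a size-limit policy; the file name and limits are illustrative.
  appenders:
    rolling-file:
      type: rolling-file
      fileName: /var/log/kibana.log
      policy:
        type: size-limit   # rotate once the file reaches `size`
        size: 50mb
      strategy:
        type: numeric      # keep kibana.log.1 … kibana.log.N
        max: 5
      layout:
        type: pattern
  root:
    appenders: [rolling-file]
-------------------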
2 changes: 1 addition & 1 deletion docs/settings/logging-settings.asciidoc
@@ -30,7 +30,7 @@ The following table serves as a quick reference for different logging configurat
| Allows you to specify a fileName to write log records to disk. To write <<log-to-file-example,all log records to file>>, add the file appender to `root.appenders`. If configured, you also need to specify <<log-to-file-example, `logging.appenders.file.pathName`>>.

| `logging.appenders[].rolling-file:`
| Similar to Log4j's `RollingFileAppender`, this appender will log to a file and rotate if following a rolling strategy when the configured policy triggers. There are currently two policies supported: `size-limit` and `time-interval`.
| Similar to https://logging.apache.org/log4j/2.x/[Log4j's] `RollingFileAppender`, this appender will log to a file and rotate if following a rolling strategy when the configured policy triggers. There are currently two policies supported: <<size-limit-triggering-policy, `size-limit`>> and <<time-interval-triggering-policy, `time-interval`>>.

| `logging.appenders[].<appender-name>.type`
| The appender type determines where the log messages are sent. Options are `console`, `file`, `rewrite`, `rolling-file`. Required.
2 changes: 1 addition & 1 deletion docs/user/dashboard/lens.asciidoc
@@ -249,7 +249,7 @@ In the legend, click the field, then choose one of the following options:
[[configure-the-visualization-components]]
==== Configure the visualization components

Each visualiztion type comes with a set of components that you access from the editor toolbar.
Each visualization type comes with a set of components that you access from the editor toolbar.

The following component menus are available:
