
[DOCS] Clarify Recovery Settings for Shard Relocation #40329


Merged
merged 4 commits into from Apr 26, 2019
67 changes: 37 additions & 30 deletions docs/reference/modules/indices/recovery.asciidoc
@@ -1,33 +1,40 @@
[[recovery]]
=== Indices Recovery

<<cat-recovery,Peer recovery>> is the process used to build a new copy of a
shard on a node by copying data from the primary. {es} uses this peer recovery
process to rebuild shard copies that were lost if a node has failed, and uses
the same process when migrating a shard copy between nodes to rebalance the
cluster or to honor any changes to the <<modules-cluster,shard allocation
settings>>.

The following _expert_ settings can be set to manage the resources consumed by
peer recoveries:

`indices.recovery.max_bytes_per_sec`::
Limits the total inbound and outbound peer recovery traffic on each node.
Since this limit applies on each node, but there may be many nodes
performing peer recoveries concurrently, the total amount of peer recovery
traffic within a cluster may be much higher than this limit. If you set
this limit too high then there is a risk that ongoing peer recoveries will
consume an excess of bandwidth (or other resources) which could destabilize
the cluster. Defaults to `40mb`.

`indices.recovery.max_concurrent_file_chunks`::
Controls the number of file chunk requests that can be sent in parallel per recovery.
As multiple recoveries are already running in parallel (controlled by
`cluster.routing.allocation.node_concurrent_recoveries`), increasing this expert-level
setting might only help in situations where peer recovery of a single shard is not
reaching the total inbound and outbound peer recovery traffic limit configured by
`indices.recovery.max_bytes_per_sec`, but is CPU-bound instead, typically when using
transport-level security or compression. Defaults to `2`.

These settings can be dynamically updated on a live cluster with the
<<cluster-update-settings,cluster-update-settings>> API.
Peer recovery syncs data from a primary shard to a new or
existing shard copy.

Peer recovery automatically occurs when {es}:

* Recreates a shard lost during node failure
* Relocates a shard to another node due to a cluster rebalance or changes to the
<<modules-cluster, shard allocation settings>>

You can view a list of in-progress and completed recoveries using the
<<cat-recovery, cat recovery API>>.
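
For example, a minimal request that lists recovery information for all
indices; the `v` query parameter adds column headers to the response:

[source,console]
----
GET /_cat/recovery?v
----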

[float]
==== Peer recovery settings

`indices.recovery.max_bytes_per_sec` (<<cluster-update-settings,Dynamic>>)::
Limits total inbound and outbound recovery traffic for each node.
Defaults to `40mb`.
+
This limit applies to each node separately. If multiple nodes in a cluster perform
recoveries at the same time, the cluster's total recovery traffic may exceed
this limit.
+
If this limit is too high, ongoing recoveries may consume an excess
of bandwidth and other resources, which can destabilize the cluster.
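+
For example, a sketch of raising the limit on a live cluster through the
<<cluster-update-settings,cluster update settings>> API; the `50mb` value here
is illustrative, not a recommendation:
+
[source,console]
----
PUT /_cluster/settings
{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "50mb"
  }
}
----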

[float]
==== Expert peer recovery settings

You can use the following _expert_ setting to manage resources for peer
recoveries.

`indices.recovery.max_concurrent_file_chunks` (<<cluster-update-settings,Dynamic>>, Expert)::
Number of file chunk requests sent in parallel for each recovery. Defaults to
`2`.
+
You can increase the value of this setting when the recovery of a single shard
is not reaching the traffic limit set by `indices.recovery.max_bytes_per_sec`
but is CPU-bound instead, typically when using transport-level security or
compression.
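+
As a sketch, you could adjust this setting dynamically through the same
cluster update settings API; the value `4` below is purely illustrative:
+
[source,console]
----
PUT /_cluster/settings
{
  "transient": {
    "indices.recovery.max_concurrent_file_chunks": 4
  }
}
----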