
Commit

How-to docs for increasing the total number of shards per node (elastic#86214)

Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Leaf-Lin <39002973+Leaf-Lin@users.noreply.github.com>
3 people authored May 10, 2022
1 parent 8bbc7c2 commit 21785c9
Showing 11 changed files with 519 additions and 1 deletion.
2 changes: 2 additions & 0 deletions docs/reference/cluster.asciidoc
@@ -82,6 +82,8 @@ include::cluster/get-settings.asciidoc[]

include::cluster/health.asciidoc[]

include::health/health.asciidoc[]

include::cluster/reroute.asciidoc[]

include::cluster/state.asciidoc[]
2 changes: 1 addition & 1 deletion docs/reference/health/health.asciidoc
@@ -61,7 +61,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cluster-health-status]
`components`::
(object) Information about the health of the cluster components.

[[cluster-health-api-example]]
[[health-api-example]]
==== {api-examples-title}

[source,console]
1 change: 1 addition & 0 deletions docs/reference/how-to.asciidoc
@@ -30,3 +30,4 @@ include::how-to/fix-common-cluster-issues.asciidoc[]
include::how-to/size-your-shards.asciidoc[]

include::how-to/use-elasticsearch-for-time-series-data.asciidoc[]

2 changes: 2 additions & 0 deletions docs/reference/index.asciidoc
@@ -67,6 +67,8 @@ include::commands/index.asciidoc[]

include::how-to.asciidoc[]

include::troubleshooting.asciidoc[]

include::rest-api/index.asciidoc[]

include::migration/index.asciidoc[]
@@ -0,0 +1,40 @@
++++
<div class="tabs" data-tab-group="host">
<div role="tablist" aria-label="Cluster shards limit">
<button role="tab"
aria-selected="true"
aria-controls="cloud-tab-cluster-total-shards"
id="cloud-cluster-total-shards">
Elasticsearch Service
</button>
<button role="tab"
aria-selected="false"
aria-controls="self-managed-tab-cluster-total-shards"
id="self-managed-cluster-total-shards"
tabindex="-1">
Self-managed
</button>
</div>
<div tabindex="0"
role="tabpanel"
id="cloud-tab-cluster-total-shards"
aria-labelledby="cloud-cluster-total-shards">
++++

include::increase-cluster-shard-limit.asciidoc[tag=cloud]

++++
</div>
<div tabindex="0"
role="tabpanel"
id="self-managed-tab-cluster-total-shards"
aria-labelledby="self-managed-cluster-total-shards"
hidden="">
++++

include::increase-cluster-shard-limit.asciidoc[tag=self-managed]

++++
</div>
</div>
++++
@@ -0,0 +1,192 @@
//////////////////////////

[source,console]
--------------------------------------------------
PUT my-index-000001
--------------------------------------------------
// TESTSETUP

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
"persistent" : {
"cluster.routing.allocation.total_shards_per_node" : null
}
}
DELETE my-index-000001
--------------------------------------------------
// TEARDOWN

//////////////////////////

// tag::cloud[]
To get the shards assigned, we'll need to increase the number of shards
that can be collocated on a node in the cluster.
We'll achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node`
<<cluster-get-settings, cluster setting>> and increasing the configured value.
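
TIP: If `cluster.routing.allocation.total_shards_per_node` does not appear in the
response of the <<cluster-get-settings, cluster get settings API>>, the setting is
still at its default. As a quick sketch (output omitted here), you can ask the API to
include defaults to see the effective value:

[source,console]
----
GET /_cluster/settings?flat_settings=true&include_defaults=true
----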

**Use {kib}**

//tag::kibana-api-ex[]
. Log in to the {ess-console}[{ecloud} console].
+

. On the **Elasticsearch Service** panel, click the name of your deployment.
+

NOTE: If the name of your deployment is disabled, your {kib} instances might be
unhealthy; in that case, please contact https://support.elastic.co[Elastic Support].
If your deployment doesn't include {kib}, all you need to do is
{cloud}/ec-access-kibana.html[enable Kibana first].

. Open your deployment's side navigation menu (located under the Elastic logo in the upper left corner)
and go to **Dev Tools > Console**.
+
[role="screenshot"]
image::images/kibana-console.png[{kib} Console,align="center"]

. Inspect the `cluster.routing.allocation.total_shards_per_node` <<cluster-get-settings, cluster setting>>
for the index with unassigned shards:
+
[source,console]
----
GET /_cluster/settings?flat_settings
----
+
The response will look like this:
+
[source,console-result]
----
{
"persistent": {
"cluster.routing.allocation.total_shards_per_node": "300" <1>
},
"transient": {}
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only as don't want to change a cluster-wide setting]
+
<1> Represents the current configured value for the total number of shards
that can reside on one node in the system.
. <<cluster-update-settings,Increase>> the total number of shards that can be
assigned to one node:
+
[source,console]
----
PUT _cluster/settings
{
"persistent" : {
"cluster.routing.allocation.total_shards_per_node" : 400 <1>
}
}
----
// TEST[continued]
+
<1> The new value for the system-wide `total_shards_per_node` configuration
is increased from the previous value of `300` to `400`.
The `total_shards_per_node` configuration can also be set to `null`, which
removes the upper bound on how many shards can be collocated on one node in
the system.
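
To verify that the new limit took effect and the shards are being assigned, one option
(a sketch, run against the same cluster) is to re-read the setting and list the shards
together with their state and, for unassigned shards, the reason:

[source,console]
----
GET /_cluster/settings?flat_settings=true

GET /_cat/shards?v=true&h=index,shard,prirep,state,unassigned.reason
----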
//end::kibana-api-ex[]
// end::cloud[]
// tag::self-managed[]
To get the shards assigned, you can add more nodes to your {es} cluster
and assign the index's target tier <<assign-data-tier, node role>> to the new
nodes.
To inspect which tier an index is targeting for assignment, use the <<indices-get-settings, get index setting>>
API to retrieve the configured value for the `index.routing.allocation.include._tier_preference`
setting:
[source,console]
----
GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
----
// TEST[continued]
The response will look like this:
[source,console-result]
----
{
"my-index-000001": {
"settings": {
"index.routing.allocation.include._tier_preference": "data_warm,data_hot" <1>
}
}
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only]
<1> Represents a comma-separated list of data tier node roles this index is allowed
to be allocated on, with the first one in the list having the highest priority,
i.e. the tier the index is targeting.
In this example the tier preference is `data_warm,data_hot`, so the index is
targeting the `warm` tier and more nodes with the `data_warm` role are needed in
the {es} cluster.
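
If you're not sure how many nodes currently have the target tier's role, a quick check
(a sketch; the `node.role` column shows each node's roles in abbreviated form) is the
cat nodes API:

[source,console]
----
GET /_cat/nodes?v=true&h=name,node.role
----
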
Alternatively, if adding more nodes to the {es} cluster is not desired,
you can inspect the system-wide `cluster.routing.allocation.total_shards_per_node`
<<cluster-get-settings, cluster setting>> and increase the configured value:
. Inspect the `cluster.routing.allocation.total_shards_per_node` <<cluster-get-settings, cluster setting>>
for the index with unassigned shards:
+
[source,console]
----
GET /_cluster/settings?flat_settings
----
+
The response will look like this:
+
[source,console-result]
----
{
"persistent": {
"cluster.routing.allocation.total_shards_per_node": "300" <1>
},
"transient": {}
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only as don't want to change a cluster-wide setting]
+
<1> Represents the current configured value for the total number of shards
that can reside on one node in the system.
. <<cluster-update-settings,Increase>> the total number of shards that can be
assigned to one node:
+
[source,console]
----
PUT _cluster/settings
{
"persistent" : {
"cluster.routing.allocation.total_shards_per_node" : 400 <1>
}
}
----
// TEST[continued]
+
<1> The new value for the system-wide `total_shards_per_node` configuration
is increased from the previous value of `300` to `400`.
The `total_shards_per_node` configuration can also be set to `null`, which
removes the upper bound on how many shards can be collocated on one node in
the system.
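
After increasing the limit, one way to confirm that the previously unassigned shards can
now be allocated (a sketch, reusing the `my-index-000001` example index from above) is
the <<cluster-allocation-explain, cluster allocation explain API>>:

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": true
}
----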
// end::self-managed[]
@@ -0,0 +1,40 @@
++++
<div class="tabs" data-tab-group="host">
<div role="tablist" aria-label="Total shards per node">
<button role="tab"
aria-selected="true"
aria-controls="cloud-tab-total-shards"
id="cloud-total-shards">
Elasticsearch Service
</button>
<button role="tab"
aria-selected="false"
aria-controls="self-managed-tab-total-shards"
id="self-managed-total-shards"
tabindex="-1">
Self-managed
</button>
</div>
<div tabindex="0"
role="tabpanel"
id="cloud-tab-total-shards"
aria-labelledby="cloud-total-shards">
++++

include::total-shards-per-node.asciidoc[tag=cloud]

++++
</div>
<div tabindex="0"
role="tabpanel"
id="self-managed-tab-total-shards"
aria-labelledby="self-managed-total-shards"
hidden="">
++++

include::total-shards-per-node.asciidoc[tag=self-managed]

++++
</div>
</div>
++++
