content/rs/administering/database-operations/slave-ha.md
weight: $weight
alwaysopen: false
categories: ["RS"]
---
When you enable [database replication]({{< relref "/rs/concepts/high-availability/replication.md" >}}) for your database,
RS replicates your data to a slave node to make sure that your data is highly available.
If the slave node fails or if the master node fails and the slave is promoted to master,
the remaining master node is a single point of failure.

You can configure high availability for slave shards (slave HA) so that the cluster automatically migrates the slave shards to another available node.
In practice, slave migration creates a new slave shard and replicates the data from the master shard to the new slave shard.
For example:
1. Node:2 has a master shard and node:3 has the corresponding slave shard.
1. Either:

    - Node:2 fails and the slave shard on node:3 is promoted to master.
    - Node:3 fails and the master shard is no longer replicated to the slave shard on the failed node.

1. If slave HA is enabled, a new slave shard is created on an available node that does not hold the master shard.

    All of the constraints of shard migration apply, such as [rack-awareness]({{< relref "/rs/concepts/high-availability/rack-zone-awareness.md" >}}).

1. The data from the master shard is replicated to the new slave shard.
## Configuring High Availability for Slave Shards
Using rladmin or the REST API, slave HA is controlled at the database level and at the cluster level.
You can enable or disable slave HA for a database or for the entire cluster.
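
For the REST API path, here is a minimal sketch of toggling slave HA for a single database; the `/v1/bdbs/<uid>` endpoint, the `slave_ha` field name, and port 9443 are assumptions not confirmed by this diff:

```src
curl -k -u "<user>:<password>" \
  -X PUT -H "Content-Type: application/json" \
  -d '{"slave_ha": true}' \
  https://<cluster_address>:9443/v1/bdbs/<bdb_uid>
```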
When slave HA is enabled for both the cluster and a database,
slave shards for that database are automatically migrated to another node in the event of a master or slave shard failure.
If slave HA is disabled at the cluster level,
slave HA will not migrate slave shards even if slave HA is enabled for a database.
By default, slave HA is enabled for the cluster and disabled for each database,
so to enable slave HA for a database you only need to enable it for that database.
{{% note %}}
For Active-Active databases, slave HA is enabled for the database by default to make sure that slave shards are available for Active-Active replication.
{{% /note %}}
To enable slave HA for a cluster using rladmin, run:
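The command itself falls outside this diff hunk; a sketch of what it likely looks like, assuming the same `rladmin tune` syntax used for `slave_ha_priority` below (the `slave_ha` parameter name is an assumption):

```src
rladmin tune cluster slave_ha enabled
```

The database-level equivalent would presumably be `rladmin tune db <bdb_uid> slave_ha enabled`.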
You can see the current configuration options for slave HA with: `rladmin info cluster`.
### Grace Period
By default, slave HA has a 10-minute grace period after node failure and before new slave shards are created.
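
The hunk does not show how this value is changed; a sketch under the assumption that the grace period is exposed as a cluster-level `rladmin tune` parameter named `slave_ha_grace_period` and measured in seconds (600 seconds corresponds to the 10-minute default):

```src
rladmin tune cluster slave_ha_grace_period 600
```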
Slave shard migration is based on priority so that, in the case of limited memory resources,
the most important slave shards are migrated first.
Slave HA migrates slave shards for databases according to this order of priority:
1. slave_ha_priority - The slave shards of the database with the higher slave_ha_priority
    integer value are migrated first.
    To assign priority to a database, run:

    ```src
    rladmin tune db <bdb_uid> slave_ha_priority <positive integer>
    ```
1. CRDBs - The CRDB synchronization uses slave shards to synchronize between the replicas.
### Cooldown Periods
Both the cluster and the database have cooldown periods.
After node failure, the cluster cooldown period prevents another slave migration due to another node failure for any
database in the cluster until the cooldown period ends (Default: 1 hour).
After a database is migrated with slave HA,
it cannot go through another slave migration due to another node failure until the cooldown period for the database ends (Default: 2 hours).
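
Adjusting these windows is not shown in the diff; a sketch assuming they are exposed as cluster-level `rladmin tune` parameters (the names `slave_ha_cooldown_period` and `slave_ha_bdb_cooldown_period` and the second-based values are assumptions; 3600 and 7200 seconds match the 1-hour and 2-hour defaults mentioned above):

```src
rladmin tune cluster slave_ha_cooldown_period 3600
rladmin tune cluster slave_ha_bdb_cooldown_period 7200
```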