diff --git a/config-templates/geo-redundancy-deployment.yaml b/config-templates/geo-redundancy-deployment.yaml
index 6761451067570..74ad7ecddca3a 100644
--- a/config-templates/geo-redundancy-deployment.yaml
+++ b/config-templates/geo-redundancy-deployment.yaml
@@ -29,7 +29,7 @@ server_configs:
   pd:
     replication.location-labels: ["zone","dc","rack","host"]
     replication.max-replicas: 5
-    label-property:
+    label-property:  # Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the placement rules.
       reject-leader:
         - key: "dc"
           value: "sha"
diff --git a/geo-distributed-deployment-topology.md b/geo-distributed-deployment-topology.md
index 68e4484968f3a..858f6e48acc6f 100644
--- a/geo-distributed-deployment-topology.md
+++ b/geo-distributed-deployment-topology.md
@@ -84,9 +84,13 @@ This section describes the key parameter configuration of the TiDB geo-distribut
             value: "sha"
     ```
 
+> **Note:**
+>
+> Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the [placement rules](/configure-placement-rules.md).
+
+For further information about labels and the number of Raft Group replicas, see [Schedule Replicas by Topology Labels](/schedule-replicas-by-topology-labels.md).
+
 > **Note:**
 >
 > - You do not need to manually create the `tidb` user in the configuration file. The TiUP cluster component automatically creates the `tidb` user on the target machines. You can customize the user, or keep the user consistent with the control machine.
 > - If you configure the deployment directory as a relative path, the cluster will be deployed in the home directory of the user.
-
-[Schedule Replicas by Topology Labels](/schedule-replicas-by-topology-labels.md) further explains the use of labels and the number of Raft Group replicas.
diff --git a/multi-data-centers-in-one-city-deployment.md b/multi-data-centers-in-one-city-deployment.md
index ec19507d81de6..3681434d41e7f 100644
--- a/multi-data-centers-in-one-city-deployment.md
+++ b/multi-data-centers-in-one-city-deployment.md
@@ -71,6 +71,10 @@ member leader_priority pdName2 4
 member leader_priority pdName3 3
 ```
 
+> **Note:**
+>
+> Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the [placement rules](/configure-placement-rules.md).
+
 **Disadvantages:**
 
 - Write scenarios are still affected by network latency across DCs. This is because Raft follows the majority protocol and all written data must be replicated to at least two DCs.
diff --git a/three-data-centers-in-two-cities-deployment.md b/three-data-centers-in-two-cities-deployment.md
index 1c08734dc5bd1..148dff84a77e8 100644
--- a/three-data-centers-in-two-cities-deployment.md
+++ b/three-data-centers-in-two-cities-deployment.md
@@ -192,6 +192,10 @@ In the deployment of three DCs in two cities, to optimize performance, you need
     ```yaml
     config set label-property reject-leader dc 3
    ```
+
+    > **Note:**
+    >
+    > Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the [placement rules](/configure-placement-rules.md).
 
 - Configure the priority of PD. To avoid the situation where the PD leader is in another city (IDC3), you can increase the priority of local PD (in Seattle) and decrease the priority of PD in another city (San Francisco). The larger the number, the higher the priority.
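
As a point of reference for the notes added above, here is a minimal sketch of how the `reject-leader` policy from the geo-redundancy template (five replicas, leaders kept out of the `sha` DC) might be expressed with placement rules on TiDB 5.2 or later. The rule IDs, replica counts, and label values are illustrative assumptions derived from that template, not part of this change, and would need to be adapted to the real topology; the default `pd/default` rule would also need to be removed or adjusted so that the total replica count stays at five.

```bash
# Illustrative only: approximate the removed `label-property` reject-leader
# policy with PD placement rules. Counts, IDs, and labels are assumptions
# based on the geo-redundancy template (5 replicas, leaders rejected in "sha").

# Define two rules: leader-eligible voters outside the "sha" DC, and
# follower-only replicas inside it.
cat > rules.json <<'EOF'
[
  {
    "group_id": "pd",
    "id": "voters-outside-sha",
    "role": "voter",
    "count": 3,
    "label_constraints": [
      { "key": "dc", "op": "notIn", "values": ["sha"] }
    ],
    "location_labels": ["zone", "dc", "rack", "host"]
  },
  {
    "group_id": "pd",
    "id": "followers-in-sha",
    "role": "follower",
    "count": 2,
    "label_constraints": [
      { "key": "dc", "op": "in", "values": ["sha"] }
    ],
    "location_labels": ["zone", "dc", "rack", "host"]
  }
]
EOF

# Placement rules are enabled by default in recent versions; otherwise enable
# them first with: config set enable-placement-rules true
pd-ctl -u http://<pd-host>:2379 config placement-rules save --in=rules.json

# Verify the rules currently in effect. Remember to remove or adjust the
# default `pd/default` rule so the total replica count remains 5.
pd-ctl -u http://<pd-host>:2379 config placement-rules show
```

The `follower` role is used for the `sha` replicas because placement rules have no direct `reject-leader` switch: a `follower` replica still votes in Raft but is not scheduled as the Region leader, which should approximate the old behavior.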