diff --git a/experimental-features-4.0.md b/experimental-features-4.0.md
index 1967993ba9a97..3587a50faf977 100644
--- a/experimental-features-4.0.md
+++ b/experimental-features-4.0.md
@@ -10,7 +10,7 @@ This document introduces the experimental features of TiDB v4.0. It is **NOT** r
 
 ## Scheduling
 
 + Cascading Placement Rules is an experimental feature of the Placement Driver (PD) introduced in v4.0. It is a replica rule system that guides PD to generate corresponding schedules for different types of data. By combining different scheduling rules, you can finely control the attributes of any continuous data range, such as the number of replicas, the storage location, the host type, whether to participate in Raft election, and whether to act as the Raft leader. See [Cascading Placement Rules](/configure-placement-rules.md) for details.
-+ Elastic scheduling is an experimental feature based on Kubernetes, which enables TiDB to dynamically scale in and out nodes. This feature can effectively mitigate the high workload during peak hours of an application and saves unnecessary overhead. See [Enable TidbCluster Auto-scaling](https://docs.pingcap.com/tidb-in-kubernetes/stable/enable-tidb-cluster-auto-scaling) for details.
++ Elastic scheduling is an experimental feature based on Kubernetes, which enables TiDB to dynamically scale out and scale in clusters. This feature can effectively mitigate the high workload during peak hours of an application and saves unnecessary overhead. See [Enable TidbCluster Auto-scaling](https://docs.pingcap.com/tidb-in-kubernetes/stable/enable-tidb-cluster-auto-scaling) for details.
 
 ## SQL feature
diff --git a/scale-tidb-using-tiup.md b/scale-tidb-using-tiup.md
index c6dd624d314f1..db67137aafbb2 100644
--- a/scale-tidb-using-tiup.md
+++ b/scale-tidb-using-tiup.md
@@ -8,7 +8,7 @@ aliases: ['/docs/dev/scale-tidb-using-tiup/','/docs/dev/how-to/scale/with-tiup/'
 
 The capacity of a TiDB cluster can be increased or decreased without interrupting the online services.
 
-This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash nodes using TiUP. If you have not installed TiUP, refer to the steps in [Install TiUP on the control machine](/upgrade-tidb-using-tiup.md#install-tiup-on-the-control-machine) and import the cluster into TiUP before you use TiUP to scale the TiDB cluster.
+This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash cluster using TiUP. If you have not installed TiUP, refer to the steps in [Install TiUP on the control machine](/upgrade-tidb-using-tiup.md#install-tiup-on-the-control-machine) and import the cluster into TiUP before you use TiUP to scale the TiDB cluster.
 
 To view the current cluster name list, run `tiup cluster list`.
@@ -22,7 +22,7 @@ For example, if the original topology of the cluster is as follows:
 | 10.0.1.1 | TiKV |
 | 10.0.1.2 | TiKV |
 
-## Scale out a TiDB/PD/TiKV node
+## Scale out a TiDB/PD/TiKV cluster
 
 If you want to add a TiDB node to the `10.0.1.5` host, take the following steps.
@@ -131,7 +131,7 @@ After the scale-out, the cluster topology is as follows:
 | 10.0.1.1 | TiKV |
 | 10.0.1.2 | TiKV |
 
-## Scale out a TiFlash node
+## Scale out a TiFlash cluster
 
 If you want to add a TiFlash node to the `10.0.1.4` host, take the following steps.
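The scale-out sections renamed above all follow the same TiUP workflow. As a minimal sketch (an editorial illustration, not part of the patch), assuming the cluster is already managed by TiUP and that a topology file named `scale-out.yaml` has been prepared for the new host, the flow might look like this; `<cluster-name>` and the file name are placeholders:

```shell
# List the clusters managed by TiUP and inspect the current topology
# before adding any node (both commands appear in the patched document).
tiup cluster list
tiup cluster display <cluster-name>

# Apply the prepared topology file that declares the new host, for example
# the TiFlash node on 10.0.1.4; TiUP checks the topology before applying it.
tiup cluster scale-out <cluster-name> scale-out.yaml

# Confirm that the new instance is reported as Up.
tiup cluster display <cluster-name>
```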
@@ -183,7 +183,7 @@ After the scale-out, the cluster topology is as follows:
 | 10.0.1.1 | TiKV |
 | 10.0.1.2 | TiKV |
 
-## Scale out a TiCDC node
+## Scale out a TiCDC cluster
 
 If you want to add two TiCDC nodes to the `10.0.1.3` and `10.0.1.4` hosts, take the following steps.
@@ -227,7 +227,7 @@ After the scale-out, the cluster topology is as follows:
 | 10.0.1.1 | TiKV |
 | 10.0.1.2 | TiKV |
 
-## Scale in a TiDB/PD/TiKV node
+## Scale in a TiDB/PD/TiKV cluster
 
 If you want to remove a TiKV node from the `10.0.1.5` host, take the following steps.
@@ -301,7 +301,7 @@ The current topology is as follows:
 | 10.0.1.1 | TiKV |
 | 10.0.1.2 | TiKV |
 
-## Scale in a TiFlash node
+## Scale in a TiFlash cluster
 
 If you want to remove a TiFlash node from the `10.0.1.4` host, take the following steps.
@@ -319,11 +319,11 @@ Before the node goes down, make sure that the number of remaining nodes in the T
 
 2. Wait for the TiFlash replicas of the related tables to be deleted. [Check the table replication progress](/tiflash/use-tiflash.md#check-the-replication-progress) and the replicas are deleted if the replication information of the related tables is not found.
 
-### 2. Scale in the TiFlash node
+### 2. Perform the scale-in operation
 
 Next, perform the scale-in operation with one of the following solutions.
 
-#### Solution 1: Using TiUP to scale in the TiFlash node
+#### Solution 1: Use TiUP to remove a TiFlash node
 
 1. First, confirm the name of the node to be taken down:
@@ -333,7 +333,7 @@ Next, perform the scale-in operation with one of the following solutions.
     tiup cluster display <cluster-name>
     ```
 
-2. Scale in the TiFlash node (assume that the node name is `10.0.1.4:9000` from Step 1):
+2. Remove the TiFlash node (assume that the node name is `10.0.1.4:9000` from Step 1):
 
     {{< copyable "shell-regular" >}}
@@ -341,9 +341,9 @@ Next, perform the scale-in operation with one of the following solutions.
     tiup cluster scale-in <cluster-name> --node 10.0.1.4:9000
     ```
 
-#### Solution 2: Manually scale in the TiFlash node
+#### Solution 2: Manually remove a TiFlash node
 
-In special cases (such as when a node needs to be forcibly taken down), or if the TiUP scale-in operation fails, you can manually scale in a TiFlash node with the following steps.
+In special cases (such as when a node needs to be forcibly taken down), or if the TiUP scale-in operation fails, you can manually remove a TiFlash node with the following steps.
 
 1. Use the store command of pd-ctl to view the store ID corresponding to this TiFlash node.
@@ -357,7 +357,7 @@ In special cases (such as when a node needs to be forcibly taken down), or if th
     tiup ctl pd -u <pd_address> store
     ```
 
-2. Scale in the TiFlash node in pd-ctl:
+2. Remove the TiFlash node in pd-ctl:
 
     * Enter `store delete <store_id>` in pd-ctl (`<store_id>` is the store ID of the TiFlash node found in the previous step).
@@ -436,9 +436,9 @@ The steps to manually clean up the replication rules in PD are below:
     curl -v -X DELETE http://<pd_ip>:<pd_port>/pd/api/v1/config/rule/tiflash/table-45-r
     ```
 
-## Scale in a TiCDC node
+## Scale in a TiCDC cluster
 
-If you want to remove the TiCDC node from the `10.0.1.4` host, take the following steps.
+If you want to remove the TiCDC node from the `10.0.1.4` host, take the following steps:
 
 1. Take the node offline:
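To make the manual path of Solution 2 above concrete, here is a minimal sketch (an editorial illustration, not part of the patch) of the same commands run non-interactively; `<pd_ip>`, `<pd_port>`, and `<store_id>` are placeholders, and the interactive pd-ctl prompt described in the document works equally well:

```shell
# Look up the store ID that belongs to the TiFlash instance (match it by IP and port).
tiup ctl pd -u http://<pd_ip>:<pd_port> store

# Delete the store; <store_id> is the ID found in the previous output.
tiup ctl pd -u http://<pd_ip>:<pd_port> store delete <store_id>

# If a TiFlash placement rule is left behind (table 45 in the document's example),
# remove it manually through the PD API.
curl -v -X DELETE http://<pd_ip>:<pd_port>/pd/api/v1/config/rule/tiflash/table-45-r
```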
diff --git a/ticdc/manage-ticdc.md b/ticdc/manage-ticdc.md
index e74c27c307af2..b30b84f56f776 100644
--- a/ticdc/manage-ticdc.md
+++ b/ticdc/manage-ticdc.md
@@ -42,7 +42,7 @@ For details, refer to [Deploy a TiDB Cluster Using TiUP](/production-deployment-
 
 1. First, make sure that the current TiDB version supports TiCDC; otherwise, you need to upgrade the TiDB cluster to `v4.0.0-rc.1` or later versions.
 
-2. To deploy TiCDC, refer to [Scale out a TiCDC node](/scale-tidb-using-tiup.md#scale-out-a-ticdc-node).
+2. To deploy TiCDC, refer to [Scale out a TiCDC cluster](/scale-tidb-using-tiup.md#scale-out-a-ticdc-cluster).
 
 ### Use Binary
diff --git a/tiflash/tiflash-overview.md b/tiflash/tiflash-overview.md
index a4d4a52d2e8c4..dc4634305c309 100644
--- a/tiflash/tiflash-overview.md
+++ b/tiflash/tiflash-overview.md
@@ -72,7 +72,7 @@ TiFlash shares the computing workload in the same way as the TiKV Coprocessor do
 ## See also
 
 - To deploy a new cluster with TiFlash nodes, see [Deploy a TiDB cluster using TiUP](/production-deployment-using-tiup.md).
-- To add a TiFlash node in a deployed cluster, see [Scale out a TiFlash node](/scale-tidb-using-tiup.md#scale-out-a-tiflash-node).
+- To add a TiFlash node in a deployed cluster, see [Scale out a TiFlash cluster](/scale-tidb-using-tiup.md#scale-out-a-tiflash-cluster).
 - [Use TiFlash](/tiflash/use-tiflash.md).
 - [Maintain a TiFlash cluster](/tiflash/maintain-tiflash.md).
 - [Tune TiFlash performance](/tiflash/tune-tiflash-performance.md).
diff --git a/tiflash/troubleshoot-tiflash.md b/tiflash/troubleshoot-tiflash.md
index c9c3beea622ff..91920fd14dbe5 100644
--- a/tiflash/troubleshoot-tiflash.md
+++ b/tiflash/troubleshoot-tiflash.md
@@ -30,7 +30,7 @@ The issue might occur due to different reasons. It is recommended that you troub
     ulimit -n 1000000
     ```
 
-3. Use the PD Control tool to check whether there is any TiFlash instance that failed to go offline on the node (same IP and Port) and force the instance(s) to go offline. For detailed steps, refer to [Scale in a TiFlash node](/scale-tidb-using-tiup.md#scale-in-a-tiflash-node).
+3. Use the PD Control tool to check whether there is any TiFlash instance that failed to go offline on the node (same IP and Port) and force the instance(s) to go offline. For detailed steps, refer to [Scale in a TiFlash cluster](/scale-tidb-using-tiup.md#scale-in-a-tiflash-cluster).
 
 If the above methods cannot resolve your issue, save the TiFlash log files and email to [info@pingcap.com](mailto:info@pingcap.com) for more information.
@@ -94,6 +94,6 @@ In this case, you can balance the load pressure by adding more TiFlash nodes.
 
 Take the following steps to handle the data file corruption:
 
-1. Refer to [Take a TiFlash node down](/scale-tidb-using-tiup.md#scale-in-a-tiflash-node) to take the corresponding TiFlash node down.
+1. Refer to [Take a TiFlash node down](/scale-tidb-using-tiup.md#scale-in-a-tiflash-cluster) to take the corresponding TiFlash node down.
 2. Delete the related data of the TiFlash node.
 3. Redeploy the TiFlash node in the cluster.
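As a quick illustration of the troubleshooting checks in the hunks above (a sketch, not part of the patch), inspecting the file descriptor limit and looking for a TiFlash store that failed to go offline might look like this; `<pd_ip>` and `<pd_port>` are placeholders:

```shell
# Check the current open-file limit for the user that runs TiFlash.
ulimit -n

# Raise it to the value used in the patched document before restarting TiFlash.
ulimit -n 1000000

# List all stores through PD Control and look for a TiFlash instance on the same
# IP and port that has not finished going offline.
tiup ctl pd -u http://<pd_ip>:<pd_port> store
```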
diff --git a/tiup/tiup-cluster.md b/tiup/tiup-cluster.md
index 638df8fd8acb0..7e4db7111dbca 100644
--- a/tiup/tiup-cluster.md
+++ b/tiup/tiup-cluster.md
@@ -197,13 +197,13 @@ The `Status` column uses `Up` or `Down` to indicate whether the service is runni
 
 For the PD component, `|L` or `|UI` might be appended to `Up` or `Down`. `|L` indicates that the PD node is a Leader, and `|UI` indicates that [TiDB Dashboard](/dashboard/dashboard-intro.md) is running on the PD node.
 
-## Scale in a node
+## Scale in a cluster
 
 > **Note:**
 >
 > This section describes only the syntax of the scale-in command. For detailed steps of online scaling, refer to [Scale the TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md).
 
-Scaling in a node means taking the node offline. This operation removes the node from the cluster and deletes the remaining data files.
+Scaling in a cluster means taking some node(s) offline. This operation removes the specific node(s) from the cluster and deletes the remaining data files.
 
 Because the offline process of the TiKV and TiDB Binlog components is asynchronous (which requires removing the node through API), and the process takes a long time (which requires continuous observation on whether the node is successfully taken offline), special treatment is given to the TiKV and TiDB Binlog components.
@@ -229,7 +229,7 @@ tiup cluster scale-in <cluster-name> -N <node-id>
 
 To use this command, you need to specify at least two flags: the cluster name and the node ID. The node ID can be obtained by using the `tiup cluster display` command in the previous section.
 
-For example, to scale in the TiKV node on `172.16.5.140`, run the following command:
+For example, to make the TiKV node on `172.16.5.140` offline, run the following command:
 
 {{< copyable "shell-regular" >}}
@@ -266,7 +266,7 @@ ID Role Host Ports Status Data Dir
 
 After PD schedules the data on the node to other TiKV nodes, this node will be deleted automatically.
 
-## Scale out a node
+## Scale out a cluster
 
 > **Note:**
 >
@@ -278,7 +278,7 @@ When you scale out PD, the node is added to the cluster by `join`, and the confi
 
 All services conduct correctness validation when they are scaled out. The validation results show whether the scaling-out is successful.
 
-To scale out a TiKV node and a PD node in the `tidb-test` cluster, take the following steps:
+To add a TiKV node and a PD node in the `tidb-test` cluster, take the following steps:
 
 1. Create a `scale.yaml` file, and add IPs of the new TiKV and PD nodes:
diff --git a/troubleshoot-high-disk-io.md b/troubleshoot-high-disk-io.md
index 2d7e29d978ef2..6274abd261693 100644
--- a/troubleshoot-high-disk-io.md
+++ b/troubleshoot-high-disk-io.md
@@ -90,5 +90,5 @@ The cluster deployment tools (TiDB Ansible and TiUP) deploy the cluster with ale
 ## Handle I/O issues
 
 + When an I/O hotspot issue is confirmed to occur, you need to refer to Handle TiDB Hotspot Issues to eliminate the I/O hotspots.
-+ When it is confirmed that the overall I/O performance has become the bottleneck, and you can determine that the I/O performance will keep falling behind in the application side, then you can take advantage of the distributed database's capability of scaling and scale out the number of TiKV nodes to have greater overall I/O throughput.
++ When it is confirmed that the overall I/O performance has become the bottleneck, and you can determine that the I/O performance will keep falling behind on the application side, then you can take advantage of the distributed database's capability of scaling and increase the number of TiKV nodes to have greater overall I/O throughput.
 + Adjust some of the parameters as described above, and use computing/memory resources to make up for disk storage resources.
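To illustrate the `scale.yaml` step referenced in the tiup-cluster.md hunk above, here is a minimal sketch (not part of the patch); the host IPs and file contents are hypothetical, and your TiUP version may require additional per-host fields such as ports or data directories:

```shell
# Hypothetical topology fragment declaring one new TiKV host and one new PD host.
cat > scale.yaml <<EOF
tikv_servers:
  - host: 172.16.5.150
pd_servers:
  - host: 172.16.5.151
EOF

# Add the nodes to the tidb-test cluster; TiUP runs correctness validation
# during scale-out, as the patched text notes.
tiup cluster scale-out tidb-test scale.yaml

# Verify that the new TiKV and PD instances are reported as Up.
tiup cluster display tidb-test
```

The same pattern also covers the last hunk's advice for I/O bottlenecks: adding TiKV hosts to the topology file and scaling out is how the overall I/O throughput is increased.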