Fix an incorrect wording "scale out or in a node" (#3713)
* Fix an incorrect wording "scale out/in a node"

* Update experimental-features-4.0.md

* Update tiflash/troubleshoot-tiflash.md

* Update tiflash/troubleshoot-tiflash.md

* Update troubleshoot-tiflash.md

* Apply suggestions from code review

Co-authored-by: Lilian Lee <lilin@pingcap.com>

* update line 11

Co-authored-by: Lilian Lee <lilin@pingcap.com>
TomShawn and lilin90 authored Aug 27, 2020
1 parent 2dd251c commit 0255c5f
Showing 7 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion experimental-features-4.0.md
@@ -10,7 +10,7 @@ This document introduces the experimental features of TiDB v4.0. It is **NOT** r
## Scheduling

+ Cascading Placement Rules is an experimental feature of the Placement Driver (PD) introduced in v4.0. It is a replica rule system that guides PD to generate corresponding schedules for different types of data. By combining different scheduling rules, you can finely control the attributes of any continuous data range, such as the number of replicas, the storage location, the host type, whether to participate in Raft election, and whether to act as the Raft leader. See [Cascading Placement Rules](/configure-placement-rules.md) for details.
- + Elastic scheduling is an experimental feature based on Kubernetes, which enables TiDB to dynamically scale in and out nodes. This feature can effectively mitigate the high workload during peak hours of an application and saves unnecessary overhead. See [Enable TidbCluster Auto-scaling](https://docs.pingcap.com/tidb-in-kubernetes/stable/enable-tidb-cluster-auto-scaling) for details.
+ + Elastic scheduling is an experimental feature based on Kubernetes, which enables TiDB to dynamically scale out and scale in clusters. This feature can effectively mitigate the high workload during peak hours of an application and saves unnecessary overhead. See [Enable TidbCluster Auto-scaling](https://docs.pingcap.com/tidb-in-kubernetes/stable/enable-tidb-cluster-auto-scaling) for details.

## SQL feature

28 changes: 14 additions & 14 deletions scale-tidb-using-tiup.md
@@ -8,7 +8,7 @@ aliases: ['/docs/dev/scale-tidb-using-tiup/','/docs/dev/how-to/scale/with-tiup/'

The capacity of a TiDB cluster can be increased or decreased without interrupting the online services.

- This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash nodes using TiUP. If you have not installed TiUP, refer to the steps in [Install TiUP on the control machine](/upgrade-tidb-using-tiup.md#install-tiup-on-the-control-machine) and import the cluster into TiUP before you use TiUP to scale the TiDB cluster.
+ This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash cluster using TiUP. If you have not installed TiUP, refer to the steps in [Install TiUP on the control machine](/upgrade-tidb-using-tiup.md#install-tiup-on-the-control-machine) and import the cluster into TiUP before you use TiUP to scale the TiDB cluster.

To view the current cluster name list, run `tiup cluster list`.
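The inspection commands mentioned here can be sketched as a dry-run snippet; the cluster name `tidb-test` is a hypothetical example (use a name returned by `tiup cluster list`), and the commands are printed rather than executed:

```shell
# Dry-run sketch: the TiUP commands used throughout this page to inspect a cluster.
# "tidb-test" is an assumed cluster name; replace it with one from `tiup cluster list`.
CLUSTER="tidb-test"
echo "tiup cluster list"                # enumerate clusters managed by this TiUP control machine
echo "tiup cluster display ${CLUSTER}"  # show the topology and node status before scaling
```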

@@ -22,7 +22,7 @@ For example, if the original topology of the cluster is as follows:
| 10.0.1.1 | TiKV |
| 10.0.1.2 | TiKV |

- ## Scale out a TiDB/PD/TiKV node
+ ## Scale out a TiDB/PD/TiKV cluster

If you want to add a TiDB node to the `10.0.1.5` host, take the following steps.

@@ -131,7 +131,7 @@ After the scale-out, the cluster topology is as follows:
| 10.0.1.1 | TiKV |
| 10.0.1.2 | TiKV |

- ## Scale out a TiFlash node
+ ## Scale out a TiFlash cluster

If you want to add a TiFlash node to the `10.0.1.4` host, take the following steps.

@@ -183,7 +183,7 @@ After the scale-out, the cluster topology is as follows:
| 10.0.1.1 | TiKV |
| 10.0.1.2 | TiKV |
- ## Scale out a TiCDC node
+ ## Scale out a TiCDC cluster
If you want to add two TiCDC nodes to the `10.0.1.3` and `10.0.1.4` hosts, take the following steps.
@@ -227,7 +227,7 @@ After the scale-out, the cluster topology is as follows:
| 10.0.1.1 | TiKV |
| 10.0.1.2 | TiKV |
- ## Scale in a TiDB/PD/TiKV node
+ ## Scale in a TiDB/PD/TiKV cluster
If you want to remove a TiKV node from the `10.0.1.5` host, take the following steps.
@@ -301,7 +301,7 @@ The current topology is as follows:
| 10.0.1.1 | TiKV |
| 10.0.1.2 | TiKV |
- ## Scale in a TiFlash node
+ ## Scale in a TiFlash cluster
If you want to remove a TiFlash node from the `10.0.1.4` host, take the following steps.
@@ -319,11 +319,11 @@ Before the node goes down, make sure that the number of remaining nodes in the T
2. Wait for the TiFlash replicas of the related tables to be deleted. [Check the table replication progress](/tiflash/use-tiflash.md#check-the-replication-progress); the replicas are deleted if the replication information of the related tables is not found.
- ### 2. Scale in the TiFlash node
+ ### 2. Perform the scale-in operation
Next, perform the scale-in operation with one of the following solutions.
- #### Solution 1: Using TiUP to scale in the TiFlash node
+ #### Solution 1: Use TiUP to remove a TiFlash node
1. First, confirm the name of the node to be taken down:
@@ -333,17 +333,17 @@ Next, perform the scale-in operation with one of the following solutions.
tiup cluster display <cluster-name>
```
- 2. Scale in the TiFlash node (assume that the node name is `10.0.1.4:9000` from Step 1):
+ 2. Remove the TiFlash node (assume that the node name is `10.0.1.4:9000` from Step 1):
{{< copyable "shell-regular" >}}
```shell
tiup cluster scale-in <cluster-name> --node 10.0.1.4:9000
```
- #### Solution 2: Manually scale in the TiFlash node
+ #### Solution 2: Manually remove a TiFlash node
- In special cases (such as when a node needs to be forcibly taken down), or if the TiUP scale-in operation fails, you can manually scale in a TiFlash node with the following steps.
+ In special cases (such as when a node needs to be forcibly taken down), or if the TiUP scale-in operation fails, you can manually remove a TiFlash node with the following steps.
1. Use the store command of pd-ctl to view the store ID corresponding to this TiFlash node.
@@ -357,7 +357,7 @@ In special cases (such as when a node needs to be forcibly taken down), or if th
tiup ctl pd -u <pd-address> store
```
- 2. Scale in the TiFlash node in pd-ctl:
+ 2. Remove the TiFlash node in pd-ctl:
* Enter `store delete <store_id>` in pd-ctl (`<store_id>` is the store ID of the TiFlash node found in the previous step).
@@ -436,9 +436,9 @@ The steps to manually clean up the replication rules in PD are below:
curl -v -X DELETE http://<pd_ip>:<pd_port>/pd/api/v1/config/rule/tiflash/table-45-r
```
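The rule-cleanup request in the hunk above can be sketched as follows. The PD address is a hypothetical placeholder, the table ID `45` comes from the doc's own example, and the rule ID follows the `table-<table_id>-r` pattern shown in the diff; the `curl` command is printed rather than executed:

```shell
# Sketch: build the PD API URL for deleting a TiFlash placement rule.
# PD_ADDR is an assumed placeholder; 45 is the table ID used in the doc's example.
PD_ADDR="127.0.0.1:2379"
TABLE_ID=45
RULE_ID="table-${TABLE_ID}-r"
URL="http://${PD_ADDR}/pd/api/v1/config/rule/tiflash/${RULE_ID}"
echo "curl -v -X DELETE ${URL}"   # printed here instead of sent to a live PD
```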

- ## Scale in a TiCDC node
+ ## Scale in a TiCDC cluster

- If you want to remove the TiCDC node from the `10.0.1.4` host, take the following steps.
+ If you want to remove the TiCDC node from the `10.0.1.4` host, take the following steps:

1. Take the node offline:

2 changes: 1 addition & 1 deletion ticdc/manage-ticdc.md
@@ -42,7 +42,7 @@ For details, refer to [Deploy a TiDB Cluster Using TiUP](/production-deployment-

1. First, make sure that the current TiDB version supports TiCDC; otherwise, you need to upgrade the TiDB cluster to `v4.0.0-rc.1` or a later version.

- 2. To deploy TiCDC, refer to [Scale out a TiCDC node](/scale-tidb-using-tiup.md#scale-out-a-ticdc-node).
+ 2. To deploy TiCDC, refer to [Scale out a TiCDC cluster](/scale-tidb-using-tiup.md#scale-out-a-ticdc-cluster).

### Use Binary

2 changes: 1 addition & 1 deletion tiflash/tiflash-overview.md
@@ -72,7 +72,7 @@ TiFlash shares the computing workload in the same way as the TiKV Coprocessor do
## See also

- To deploy a new cluster with TiFlash nodes, see [Deploy a TiDB cluster using TiUP](/production-deployment-using-tiup.md).
- - To add a TiFlash node in a deployed cluster, see [Scale out a TiFlash node](/scale-tidb-using-tiup.md#scale-out-a-tiflash-node).
+ - To add a TiFlash node in a deployed cluster, see [Scale out a TiFlash cluster](/scale-tidb-using-tiup.md#scale-out-a-tiflash-cluster).
- [Use TiFlash](/tiflash/use-tiflash.md).
- [Maintain a TiFlash cluster](/tiflash/maintain-tiflash.md).
- [Tune TiFlash performance](/tiflash/tune-tiflash-performance.md).
4 changes: 2 additions & 2 deletions tiflash/troubleshoot-tiflash.md
@@ -30,7 +30,7 @@ The issue might occur due to different reasons. It is recommended that you troub
ulimit -n 1000000
```
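As a quick sanity check before raising the limit as shown above, you can print the current soft open-file limit of the shell; the value is system-dependent:

```shell
# Show the soft open-file limit for the current shell.
# TiFlash deployments typically raise this (e.g. `ulimit -n 1000000`) before starting the process.
ulimit -n
```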
- 3. Use the PD Control tool to check whether there is any TiFlash instance that failed to go offline on the node (same IP and Port) and force the instance(s) to go offline. For detailed steps, refer to [Scale in a TiFlash node](/scale-tidb-using-tiup.md#scale-in-a-tiflash-node).
+ 3. Use the PD Control tool to check whether there is any TiFlash instance that failed to go offline on the node (same IP and Port) and force the instance(s) to go offline. For detailed steps, refer to [Scale in a TiFlash cluster](/scale-tidb-using-tiup.md#scale-in-a-tiflash-cluster).
If the above methods cannot resolve your issue, save the TiFlash log files and email them to [info@pingcap.com](mailto:info@pingcap.com) for more help.
@@ -94,6 +94,6 @@ In this case, you can balance the load pressure by adding more TiFlash nodes.

Take the following steps to handle the data file corruption:

- 1. Refer to [Take a TiFlash node down](/scale-tidb-using-tiup.md#scale-in-a-tiflash-node) to take the corresponding TiFlash node down.
+ 1. Refer to [Take a TiFlash node down](/scale-tidb-using-tiup.md#scale-in-a-tiflash-cluster) to take the corresponding TiFlash node down.
2. Delete the related data of the TiFlash node.
3. Redeploy the TiFlash node in the cluster.
10 changes: 5 additions & 5 deletions tiup/tiup-cluster.md
@@ -197,13 +197,13 @@ The `Status` column uses `Up` or `Down` to indicate whether the service is runni

For the PD component, `|L` or `|UI` might be appended to `Up` or `Down`. `|L` indicates that the PD node is a Leader, and `|UI` indicates that [TiDB Dashboard](/dashboard/dashboard-intro.md) is running on the PD node.

- ## Scale in a node
+ ## Scale in a cluster

> **Note:**
>
> This section describes only the syntax of the scale-in command. For detailed steps of online scaling, refer to [Scale the TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md).
- Scaling in a node means taking the node offline. This operation removes the node from the cluster and deletes the remaining data files.
+ Scaling in a cluster means making some node(s) offline. This operation removes the specific node(s) from the cluster and deletes the remaining data files.

Because the offline process of the TiKV and TiDB Binlog components is asynchronous (the node is removed through the API) and takes a long time (you need to continuously observe whether the node has been taken offline successfully), special treatment is given to the TiKV and TiDB Binlog components.
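The asynchronous flow described above can be sketched as a dry-run snippet; the cluster name and node ID are hypothetical placeholders, and the commands are printed rather than executed:

```shell
# Dry-run sketch: the asynchronous offline flow for a TiKV node.
# CLUSTER and NODE are assumed placeholders; use values from `tiup cluster display`.
CLUSTER="tidb-test"
NODE="172.16.5.140:20160"
echo "tiup cluster scale-in ${CLUSTER} -N ${NODE}"   # marks the store as Offline asynchronously
echo "tiup cluster display ${CLUSTER}"               # re-run until the node's status changes and the row disappears
```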

@@ -229,7 +229,7 @@ tiup cluster scale-in <cluster-name> -N <node-id>

To use this command, you need to specify at least two flags: the cluster name and the node ID. The node ID can be obtained by using the `tiup cluster display` command in the previous section.

- For example, to scale in the TiKV node on `172.16.5.140`, run the following command:
+ For example, to make the TiKV node on `172.16.5.140` offline, run the following command:

{{< copyable "shell-regular" >}}

@@ -266,7 +266,7 @@ ID Role Host Ports Status Data Dir

After PD schedules the data on the node to other TiKV nodes, this node will be deleted automatically.

- ## Scale out a node
+ ## Scale out a cluster

> **Note:**
>
@@ -278,7 +278,7 @@ When you scale out PD, the node is added to the cluster by `join`, and the confi

All services conduct correctness validation when they are scaled out. The validation results show whether the scaling-out is successful.

- To scale out a TiKV node and a PD node in the `tidb-test` cluster, take the following steps:
+ To add a TiKV node and a PD node in the `tidb-test` cluster, take the following steps:

1. Create a `scale.yaml` file, and add IPs of the new TiKV and PD nodes:

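A minimal sketch of the `scale.yaml` mentioned in the step above, assuming hypothetical host IPs that follow the doc's `10.0.1.x` convention (only the new nodes are listed; TiUP merges them into the existing topology):

```yaml
# Hypothetical scale.yaml sketch -- host IPs are illustrative assumptions.
tikv_servers:
  - host: 10.0.1.5
pd_servers:
  - host: 10.0.1.6
```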
2 changes: 1 addition & 1 deletion troubleshoot-high-disk-io.md
@@ -90,5 +90,5 @@ The cluster deployment tools (TiDB Ansible and TiUP) deploy the cluster with ale
## Handle I/O issues

+ When an I/O hotspot issue is confirmed to occur, you need to refer to Handle TiDB Hotspot Issues to eliminate the I/O hotspots.
- + When it is confirmed that the overall I/O performance has become the bottleneck, and you can determine that the I/O performance will keep falling behind in the application side, then you can take advantage of the distributed database's capability of scaling and scale out the number of TiKV nodes to have greater overall I/O throughput.
+ + When it is confirmed that the overall I/O performance has become the bottleneck, and you can determine that the I/O performance will keep falling behind in the application side, then you can take advantage of the distributed database's capability of scaling and increase the number of TiKV nodes to have greater overall I/O throughput.
+ Adjust some of the parameters as described above, and use computing/memory resources to make up for disk storage resources.
