tiup: fix the scale-out yaml example #13327

Merged 1 commit on Apr 24, 2023
46 changes: 23 additions & 23 deletions scale-tidb-using-tiup.md
@@ -38,12 +38,12 @@ This section exemplifies how to add a TiDB node to the `10.0.1.5` host.
> * If multiple instances are deployed on a single machine, you need to allocate different ports and directories for them. If the ports or directories have conflicts, you will receive a notification during deployment or scaling.
> * Since TiUP v1.0.0, the scale-out configuration inherits the global configuration of the original cluster.

Add the scale-out topology configuration in the `scale-out.yaml` file:
Add the scale-out topology configuration in the `scale-out.yml` file:

{{< copyable "shell-regular" >}}

```shell
vi scale-out.yaml
vi scale-out.yml
```

{{< copyable "" >}}
@@ -54,8 +54,8 @@ This section exemplifies how to add a TiDB node to the `10.0.1.5` host.
ssh_port: 22
port: 4000
status_port: 10080
deploy_dir: /data/deploy/install/deploy/tidb-4000
log_dir: /data/deploy/install/log/tidb-4000
deploy_dir: /tidb-deploy/tidb-4000
log_dir: /tidb-deploy/tidb-4000/log
```

Here is a TiKV configuration file template:
@@ -68,9 +68,9 @@ This section exemplifies how to add a TiDB node to the `10.0.1.5` host.
ssh_port: 22
port: 20160
status_port: 20180
deploy_dir: /data/deploy/install/deploy/tikv-20160
data_dir: /data/deploy/install/data/tikv-20160
log_dir: /data/deploy/install/log/tikv-20160
deploy_dir: /tidb-deploy/tikv-20160
data_dir: /tidb-data/tikv-20160
log_dir: /tidb-deploy/tikv-20160/log
```

Here is a PD configuration file template:
@@ -84,12 +84,12 @@ This section exemplifies how to add a TiDB node to the `10.0.1.5` host.
name: pd-1
client_port: 2379
peer_port: 2380
deploy_dir: /data/deploy/install/deploy/pd-2379
data_dir: /data/deploy/install/data/pd-2379
log_dir: /data/deploy/install/log/pd-2379
deploy_dir: /tidb-deploy/pd-2379
data_dir: /tidb-data/pd-2379
log_dir: /tidb-deploy/pd-2379/log
```
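As a side note (not taken from the diff itself), the three templates above can be combined into a single file when several components are added at once. The following is a minimal sketch of such a combined `scale-out.yml`; the `10.0.1.5` host is reused from this section's example, `ssh_port: 22` is an assumption, and the ports and directories mirror the values introduced by this PR:

```shell
# Hedged, illustrative combined scale-out.yml: one TiDB, one TiKV, and one PD node.
# Ports and directories follow the templates above; the shared host is a placeholder.
cat > scale-out.yml <<'EOF'
tidb_servers:
  - host: 10.0.1.5
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /tidb-deploy/tidb-4000
    log_dir: /tidb-deploy/tidb-4000/log
tikv_servers:
  - host: 10.0.1.5
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /tidb-deploy/tikv-20160
    data_dir: /tidb-data/tikv-20160
    log_dir: /tidb-deploy/tikv-20160/log
pd_servers:
  - host: 10.0.1.5
    ssh_port: 22
    name: pd-1
    client_port: 2379
    peer_port: 2380
    deploy_dir: /tidb-deploy/pd-2379
    data_dir: /tidb-data/pd-2379
    log_dir: /tidb-deploy/pd-2379/log
EOF
```

In practice you would usually scale out one component at a time; the combined form only shows that the sections can coexist in one file.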

To view the configuration of the current cluster, run `tiup cluster edit-config <cluster-name>`. The parameter configuration of `global` and `server_configs` is inherited by `scale-out.yaml` and thus also takes effect in `scale-out.yaml`.
To view the configuration of the current cluster, run `tiup cluster edit-config <cluster-name>`. The parameter configuration of `global` and `server_configs` is inherited by `scale-out.yml` and thus also takes effect in `scale-out.yml`.

2. Run the scale-out command:

@@ -100,28 +100,28 @@ This section exemplifies how to add a TiDB node to the `10.0.1.5` host.
{{< copyable "shell-regular" >}}

```shell
tiup cluster check <cluster-name> scale-out.yaml --cluster --user root [-p] [-i /home/root/.ssh/gcp_rsa]
tiup cluster check <cluster-name> scale-out.yml --cluster --user root [-p] [-i /home/root/.ssh/gcp_rsa]
```

2. Enable automatic repair:

{{< copyable "shell-regular" >}}

```shell
tiup cluster check <cluster-name> scale-out.yaml --cluster --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
tiup cluster check <cluster-name> scale-out.yml --cluster --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
```

3. Run the `scale-out` command:

{{< copyable "shell-regular" >}}

```shell
tiup cluster scale-out <cluster-name> scale-out.yaml [-p] [-i /home/root/.ssh/gcp_rsa]
tiup cluster scale-out <cluster-name> scale-out.yml [-p] [-i /home/root/.ssh/gcp_rsa]
```

In the preceding commands:

- `scale-out.yaml` is the scale-out configuration file.
- `scale-out.yml` is the scale-out configuration file.
- `--user root` indicates logging in to the target machine as the `root` user to complete the cluster scale out. The `root` user is expected to have `ssh` and `sudo` privileges to the target machine. Alternatively, you can use other users with `ssh` and `sudo` privileges to complete the deployment.
- `[-i]` and `[-p]` are optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. `[-i]` is the private key of the root user (or other users specified by `--user`) that has access to the target machine. `[-p]` is used to input the user password interactively.
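Taken together, the three sub-steps could look like the following hedged sketch. `tidb-test` is a placeholder cluster name, password login via `-p` is assumed, and the closing `tiup cluster display` call (a command used elsewhere in this PR) is only a convenient way to confirm that the new node has joined:

```shell
# Illustrative sequence only; the cluster name is a placeholder.
tiup cluster check tidb-test scale-out.yml --cluster --user root -p
tiup cluster check tidb-test scale-out.yml --cluster --apply --user root -p
tiup cluster scale-out tidb-test scale-out.yml -p
# Confirm that the new node appears in the topology and reports an Up status.
tiup cluster display tidb-test
```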

@@ -158,9 +158,9 @@ This section exemplifies how to add a TiFlash node to the `10.0.1.4` host.
> - Confirm that the current TiDB version supports using TiFlash. Otherwise, upgrade your TiDB cluster to v5.0 or later versions.
> - Run the `tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd_ip>:<pd_port> config set enable-placement-rules true` command to enable the Placement Rules feature. Or run the corresponding command in [pd-ctl](/pd-control.md).

1. Add the node information to the `scale-out.yaml` file:
1. Add the node information to the `scale-out.yml` file:

Create the `scale-out.yaml` file to add the TiFlash node information.
Create the `scale-out.yml` file to add the TiFlash node information.

{{< copyable "" >}}

@@ -176,7 +176,7 @@ This section exemplifies how to add a TiFlash node to the `10.0.1.4` host.
{{< copyable "shell-regular" >}}

```shell
tiup cluster scale-out <cluster-name> scale-out.yaml
tiup cluster scale-out <cluster-name> scale-out.yml
```
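The TiFlash node entry itself is collapsed in the hunk above, so the following is purely illustrative: a minimal `scale-out.yml` for the `10.0.1.4` host that relies on the inherited global configuration for everything except the host address:

```shell
# Hypothetical minimal TiFlash scale-out file; fields that are not listed fall
# back to the defaults inherited from the original cluster's global configuration.
cat > scale-out.yml <<'EOF'
tiflash_servers:
  - host: 10.0.1.4
EOF
```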

> **Note:**
@@ -207,28 +207,28 @@ After the scale-out, the cluster topology is as follows:

This section exemplifies how to add two TiCDC nodes to the `10.0.1.3` and `10.0.1.4` hosts.

1. Add the node information to the `scale-out.yaml` file:
1. Add the node information to the `scale-out.yml` file:

Create the `scale-out.yaml` file to add the TiCDC node information.
Create the `scale-out.yml` file to add the TiCDC node information.

{{< copyable "" >}}

```ini
cdc_servers:
- host: 10.0.1.3
gc-ttl: 86400
data_dir: /data/deploy/install/data/cdc-8300
data_dir: /tidb-data/cdc-8300
- host: 10.0.1.4
gc-ttl: 86400
data_dir: /data/deploy/install/data/cdc-8300
data_dir: /tidb-data/cdc-8300
```

2. Run the scale-out command:

{{< copyable "shell-regular" >}}

```shell
tiup cluster scale-out <cluster-name> scale-out.yaml
tiup cluster scale-out <cluster-name> scale-out.yml
```
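Beyond the scale-out command, one optional way to confirm the two new TiCDC captures is the TiCDC command-line client. This sketch is not part of the diff; `<pd_ip>:<pd_port>` is a placeholder, and on newer TiCDC releases the flag may be `--server` rather than `--pd`:

```shell
# Hypothetical check: list the registered TiCDC captures through the cdc client.
# Replace <pd_ip>:<pd_port> with a real PD endpoint of the cluster.
tiup cdc cli capture list --pd=http://<pd_ip>:<pd_port>
```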

> **Note:**
10 changes: 5 additions & 5 deletions ticdc/deploy-ticdc.md
@@ -50,25 +50,25 @@ More references:

The method of scaling out a TiCDC cluster is similar to that of deploying one. It is recommended to use TiUP to perform the scale-out.

1. Create a `scale-out.yaml` file to add the TiCDC node information. The following is an example:
1. Create a `scale-out.yml` file to add the TiCDC node information. The following is an example:

```shell
cdc_servers:
- host: 10.1.1.1
gc-ttl: 86400
data_dir: /data/deploy/install/data/cdc-8300
data_dir: /tidb-data/cdc-8300
- host: 10.1.1.2
gc-ttl: 86400
data_dir: /data/deploy/install/data/cdc-8300
data_dir: /tidb-data/cdc-8300
- host: 10.0.1.4:8300
gc-ttl: 86400
data_dir: /data/deploy/install/data/cdc-8300
data_dir: /tidb-data/cdc-8300
```

2. Run the scale-out command on the TiUP control machine:

```shell
tiup cluster scale-out <cluster-name> scale-out.yaml
tiup cluster scale-out <cluster-name> scale-out.yml
```
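As a hedged follow-up (not part of the diff), the result can be confirmed from the TiUP control machine before creating any changefeeds; `<cluster-name>` is the same placeholder used above:

```shell
# Illustrative check: the new TiCDC instances should appear with the cdc role
# and an Up status in the topology listing.
tiup cluster display <cluster-name>
```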

For more use cases, see [Scale out a TiCDC cluster](/scale-tidb-using-tiup.md#scale-out-a-ticdc-cluster).
8 changes: 4 additions & 4 deletions tidb-cloud/migrate-from-op-tidb.md
@@ -81,22 +81,22 @@ You need to [deploy TiCDC](https://docs.pingcap.com/tidb/dev/deploy-ticdc) to re

1. Confirm whether the current TiDB version supports TiCDC. TiDB v4.0.8.rc.1 and later versions support TiCDC. You can check the TiDB version by executing `select tidb_version();` in the TiDB cluster. If you need to upgrade it, see [Upgrade TiDB Using TiUP](https://docs.pingcap.com/tidb/dev/deploy-ticdc#upgrade-ticdc-using-tiup).

2. Add the TiCDC component to the TiDB cluster. See [Add or scale out TiCDC to an existing TiDB cluster using TiUP](https://docs.pingcap.com/tidb/dev/deploy-ticdc#add-or-scale-out-ticdc-to-an-existing-tidb-cluster-using-tiup). Edit the `scale-out.yaml` file to add TiCDC:
2. Add the TiCDC component to the TiDB cluster. See [Add or scale out TiCDC to an existing TiDB cluster using TiUP](https://docs.pingcap.com/tidb/dev/deploy-ticdc#add-or-scale-out-ticdc-to-an-existing-tidb-cluster-using-tiup). Edit the `scale-out.yml` file to add TiCDC:

```yaml
cdc_servers:
- host: 10.0.1.3
gc-ttl: 86400
data_dir: /data/deploy/install/data/cdc-8300
data_dir: /tidb-data/cdc-8300
- host: 10.0.1.4
gc-ttl: 86400
data_dir: /data/deploy/install/data/cdc-8300
data_dir: /tidb-data/cdc-8300
```

3. Add the TiCDC component and check the status.

```shell
tiup cluster scale-out <cluster-name> scale-out.yaml
tiup cluster scale-out <cluster-name> scale-out.yml
tiup cluster display <cluster-name>
```
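Step 1 above relies on `select tidb_version();`; as an illustration only, that check could be run from any MySQL-compatible client, assuming the default TiDB port 4000 and a placeholder host:

```shell
# Hypothetical version check before adding TiCDC; adjust host, port, and user.
mysql -h <tidb_host> -P 4000 -u root -p -e "SELECT tidb_version();"
```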

8 changes: 4 additions & 4 deletions tiup/tiup-component-cluster-check.md
@@ -145,10 +145,10 @@ tiup cluster check <topology.yml | cluster-name> [flags]

> **Note:**
>
> `tiup cluster check` also supports repairing the `scale-out.yaml` file for an existing cluster with the following command format:
> `tiup cluster check` also supports repairing the `scale-out.yml` file for an existing cluster with the following command format:
>
>```shell
> tiup cluster check <cluster-name> scale-out.yaml --cluster --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
> tiup cluster check <cluster-name> scale-out.yml --cluster --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
>```

### --cluster
@@ -165,10 +165,10 @@ tiup cluster check <topology.yml | cluster-name> [flags]
> **Note:**
>
> - If the `tiup cluster check <cluster-name>` command is used, you must add the `--cluster` option: `tiup cluster check <cluster-name> --cluster`.
> - `tiup cluster check` also supports checking the `scale-out.yaml` file for an existing cluster with the following command format:
> - `tiup cluster check` also supports checking the `scale-out.yml` file for an existing cluster with the following command format:
>
> ```shell
> tiup cluster check <cluster-name> scale-out.yaml --cluster --user root [-p] [-i /home/root/.ssh/gcp_rsa]
> tiup cluster check <cluster-name> scale-out.yml --cluster --user root [-p] [-i /home/root/.ssh/gcp_rsa]
> ```
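To make the two modes concrete, here is a hedged pair of invocations; `topology.yml`, `tidb-test`, and `scale-out.yml` are placeholders:

```shell
# Checking a topology file before an initial deployment (no --cluster flag needed):
tiup cluster check topology.yml --user root -p
# Checking an existing cluster together with a scale-out file (requires --cluster):
tiup cluster check tidb-test scale-out.yml --cluster --user root -p
```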

### -N, --node