op-guide/ansible-deployment.md (97 additions, 76 deletions)
@@ -1,13 +1,13 @@
 ---
-title: Ansible Deployment
+title: Deploy TiDB Using Ansible
 category: operations
 ---

-# Ansible Deployment
+# Deploy TiDB Using Ansible

 ## Overview

-Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
+Ansible is an IT automation tool that can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.

 [TiDB-Ansible](https://github.com/pingcap/tidb-ansible) is a TiDB cluster deployment tool developed by PingCAP, based on Ansible playbook. TiDB-Ansible enables you to quickly deploy a new TiDB cluster which includes PD, TiDB, TiKV, and the cluster monitoring modules.
@@ -20,92 +20,105 @@ You can use the TiDB-Ansible configuration file to set up the cluster topology,
 - Cleaning environment
 - Configuring monitoring modules

-
 ## Prepare

 Before you start, make sure that you have:

-1. Several target machines with the following requirements:
-
-    - 4 or more machines. At least 3 instances for TiKV. Do not deploy TiKV together with TiDB or PD on the same machine. See [Software and Hardware Requirements](recommendation.md).
+1. Several target machines that meet the following requirements:

-    - Recommended Operating system:
+    - 4 or more machines
+
+        A standard TiDB cluster contains 6 machines. You can use 4 machines for testing.

-        - CentOS 7.3 or later Linux
-        - x86_64 architecture (AMD64)
-        - ext4 filesystem
+    - CentOS 7.3 (64 bit) or later with Python 2.7 installed, x86_64 architecture (AMD64), ext4 filesystem

-        Use ext4 filesystem for your data disks. Mount ext4 filesystem with the `nodelalloc` mount option. See [Mount the data disk ext4 filesystem with options](#mount-the-data-disk-ext4-filesystem-with-options).
+        Use ext4 filesystem for your data disks. Mount ext4 filesystem with the `nodelalloc` mount option. See [Mount the data disk ext4 filesystem with options](#mount-the-data-disk-ext4-filesystem-with-options).

-    - The network between machines. Turn off the firewalls and iptables when deploying and turn them on after the deployment.
+    - Network between machines.

-    - The same time and time zone for all machines with the NTP service on to synchronize the correct time. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal).
+    - Same time and time zone for all machines with the NTP service on to synchronize the correct time
+
+        See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal).
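For reference, the linked section checks NTP health with `ntpstat`; a minimal sketch, assuming the `ntp` package is installed on the target machines:

```bash
# A "synchronised to NTP server" line in the output indicates a healthy service.
ntpstat
```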

-    - Create a normal `tidb` user account as the user who runs the service. The `tidb` user can sudo to the root user without a password. See [How to configure SSH mutual trust and sudo without password](#how-to-configure-ssh-mutual-trust-and-sudo-without-password).
+    - Create a normal `tidb` user account as the user who runs the service
+
+        The `tidb` user can sudo to the root user without a password. See [How to configure SSH mutual trust and sudo without password](#how-to-configure-ssh-mutual-trust-and-sudo-without-password).
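A quick way to verify the trust relationship once it is configured (the target IP is illustrative; `sudo -i` is one standard way to test password-less sudo):

```bash
# From the Control Machine: log in to a target machine as the tidb user
# without a password, then switch to root without a password.
ssh 172.16.10.1
sudo -i
```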

 > **Note:** When you deploy TiDB using Ansible, use SSD disks for the data directory of TiKV and PD nodes.

 2. A Control Machine with the following requirements:

-    - The Control Machine can be one of the managed nodes.
-    - It is recommended to install CentOS 7.3 or later version of Linux operating system (Python 2.7 involved by default).
-    - The Control Machine must have access to the Internet in order to download TiDB and related packages.
-    - Configure mutual trust of `ssh authorized_key`. In the Control Machine, you can login to the deployment target machine using `tidb` user account without a password. See [How to configure SSH mutual trust and sudo without password](#how-to-configure-ssh-mutual-trust-and-sudo-without-password).
+    > **Note:** The Control Machine can be one of the target machines.
+
+    - CentOS 7.3 (64 bit) or later with Python 2.7 installed
+    - Access to the Internet
+    - Git installed
+    - SSH Mutual Trust configured
+
+        In the Control Machine, you can log in to the deployment target machine using the `tidb` user account without a password. See [How to configure SSH mutual trust and sudo without password](#how-to-configure-ssh-mutual-trust-and-sudo-without-password).

-## Download TiDB-Ansible to the Control Machine
+## Step 1: Download TiDB-Ansible to the Control Machine

-Login to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory. Use the following command to download the corresponding version of TiDB-Ansible from GitHub [TiDB-Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`. The following are examples of downloading various versions, and you can turn to the official team for advice on which version to choose.
+1. Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory.

-Download the 1.0 GA version:
+2. Download the corresponding TiDB-Ansible version. The default folder name is `tidb-ansible`.
+
+    If you have questions regarding which version to use, email to info@pingcap.com for more information or [file an issue](https://github.com/pingcap/tidb-ansible/issues/new).

[...]

-Download the master version:
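The download commands themselves sit in a collapsed region of this hunk. As a sketch of the commands from the tidb-ansible README (the `release-1.0` branch name is an assumption based on the project's release naming):

```bash
cd /home/tidb

# 1.0 GA version:
git clone -b release-1.0 https://github.com/pingcap/tidb-ansible.git

# master version:
git clone https://github.com/pingcap/tidb-ansible.git
```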
+## Step 2: Install Ansible and dependencies on the Control Machine
+
+1. Install Ansible and the dependencies on the Control Machine:

-## Install Ansible and dependencies in the Control Machine
+    ```bash
+    sudo yum -y install epel-release
+    sudo yum -y install python-pip curl
+    cd tidb-ansible
+    sudo pip install -r ./requirements.txt
+    ```

-Use `pip` to install Ansible and dependencies on the Control Machine of CentOS 7 system. After installation, you can use `ansible --version` to view the Ansible version. Currently releases-1.0 depends on Ansible 2.4, while release-2.0 and the master version are compatible with Ansible 2.4 and Ansible 2.5.
+    Ansible and related dependencies are in the `tidb-ansible/requirements.txt` file.

-Ansible and related dependencies are recorded in the `tidb-ansible/requirements.txt` file. Install Ansible and dependencies as follows, otherwise compatibility issue occurs.
+2. View the version of Ansible:

-```bash
-$ sudo yum -y install epel-release
-$ sudo yum -y install python-pip curl
-$ cd tidb-ansible
-$ sudo pip install -r ./requirements.txt
-$ ansible --version
-ansible 2.5.0
-```
+    ```bash
+    ansible --version
+    ```
+
+    Currently, the 1.0 GA version depends on Ansible 2.4, while the 2.0 GA version and the master version are compatible with Ansible 2.4 and Ansible 2.5.

 For other systems, see [Install Ansible](ansible-deployment.md#install-ansible).

-## Orchestrate the TiDB cluster
+## Step 3: Edit the `inventory.ini` file to orchestrate the TiDB cluster
+
+Edit the `tidb-ansible/inventory.ini` file to orchestrate the TiDB cluster. The standard TiDB cluster contains 6 machines: 2 TiDB nodes, 3 PD nodes and 3 TiKV nodes.
+
+- Deploy at least 3 instances for TiKV.
+- Do not deploy TiKV together with TiDB or PD on the same machine.
+- Use the first TiDB machine as the monitoring machine.
+
+> **Note:** It is required to use the internal IP address to deploy.

-The file path of `inventory.ini`: `tidb-ansible/inventory.ini`.
+You can choose one of the following two types of cluster topology according to your scenario:

-> **Note:** Use the internal IP address to deploy the cluster.
+- [The cluster topology of a single TiKV instance on each TiKV node](#option-1-use-the-cluster-topology-of-a-single-tikv-instance-on-each-tikv-node)

-The standard cluster has 6 machines:
+    In most cases, it is recommended to deploy one TiKV instance on each TiKV node for better performance. However, if the CPU and memory of your TiKV machines are much better than required in [Hardware and Software Requirements](../op-guide/recommendation.md), and you have more than two disks in one node or the capacity of one SSD is larger than 2 TB, you can deploy no more than 2 TiKV instances on a single TiKV node.

-- 2 TiDB nodes, the first TiDB machine is used as a monitor
-- 3 PD nodes
-- 3 TiKV nodes
+- [The cluster topology of multiple TiKV instances on each TiKV node](#option-2-use-the-cluster-topology-of-multiple-tikv-instances-on-each-tikv-node)

-### The cluster topology of single TiKV instance on a single machine
+### Option 1: Use the cluster topology of a single TiKV instance on each TiKV node

 | Name | Host IP | Services |
 |:------|:------------|:-----------|
@@ -146,10 +159,9 @@ The standard cluster has 6 machines:
 172.16.10.6
 ```

+### Option 2: Use the cluster topology of multiple TiKV instances on each TiKV node

-### The cluster topology of multiple TiKV instances on a single machine
-
-Take two TiKV instances as an example:
+Take two TiKV instances on each TiKV node as an example:

 | Name | Host IP | Services |
 |:------|:------------|:-----------|
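The multi-instance `inventory.ini` entries themselves are not visible in this hunk. A hypothetical sketch of the pattern (the aliases, `tikv_port`, and `labels` values are assumptions for illustration):

```
[tikv_servers]
TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv1"
TiKV1-2 ansible_host=172.16.10.4 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv1"
TiKV2-1 ansible_host=172.16.10.5 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv2"
TiKV2-2 ansible_host=172.16.10.5 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv2"
```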
@@ -203,39 +215,52 @@ location_labels = ["host"]

 **Edit the parameters in the service configuration file:**

-1. For multiple TiKV instances, edit the `end-point-concurrency` and `block-cache-size` parameters in `tidb-ansible/conf/tikv.yml`:
+1. For the cluster topology of multiple TiKV instances on each TiKV node, you need to edit the `block-cache-size` parameter in `tidb-ansible/conf/tikv.yml`:

-    - `end-point-concurrency`: keep the number lower than CPU Vcores
     - `rocksdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 30%
     - `rocksdb writecf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 45%
     - `rocksdb lockcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum)
     - `raftdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum)
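As a worked example of these formulas (the machine size is hypothetical), a TiKV node with 128 GB of memory running 2 TiKV instances would get roughly:

```
rocksdb defaultcf block-cache-size = 128 * 80% / 2 * 30%  ≈ 15 GB
rocksdb writecf   block-cache-size = 128 * 80% / 2 * 45%  ≈ 23 GB
rocksdb lockcf    block-cache-size = 128 * 80% / 2 * 2.5% ≈ 1.3 GB
raftdb defaultcf  block-cache-size = 128 * 80% / 2 * 2.5% ≈ 1.3 GB
```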

-2. If multiple TiKV instances are deployed on a same physical disk, edit the `capacity` parameter in `conf/tikv.yml`:
+2. For the cluster topology of multiple TiKV instances on each TiKV node, you need to edit the `high-concurrency`, `normal-concurrency` and `low-concurrency` parameters in the `tidb-ansible/conf/tikv.yml` file:

-    - `capacity`: (DISK - log space) / TiKV instance number (the unit is GB)
+    ```
+    readpool:
+      coprocessor:
+        # Notice: if CPU_NUM > 8, default thread pool size for coprocessors
+        # will be set to CPU_NUM * 0.8.
+        # high-concurrency: 8
+        # normal-concurrency: 8
+        # low-concurrency: 8
+    ```

-### Description of inventory.ini variables
+    Recommended configuration: `number of instances * parameter value = CPU_Vcores * 0.8`.
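For example, under assumed hardware, a node with 32 CPU vcores running 2 TiKV instances gives `32 * 0.8 / 2 = 12.8`, so each instance would be configured with about 12:

```
readpool:
  coprocessor:
    high-concurrency: 12
    normal-concurrency: 12
    low-concurrency: 12
```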

-#### Description of the deployment directory
+3. If multiple TiKV instances are deployed on the same physical disk, edit the `capacity` parameter in `conf/tikv.yml`:

-You can configure the deployment directory using the `deploy_dir` variable. The global variable is set to `/home/tidb/deploy` by default, and it applies to all services. If the data disk is mounted on the `/data1` directory, you can set it to `/data1/deploy`. For example:
+    - `capacity`: (total disk capacity - log space) / TiKV instance number (the unit is GB)
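A worked example under assumed numbers (a 1000 GB disk shared by 2 TiKV instances, reserving 20 GB for logs):

```
capacity = (1000 - 20) / 2 = 490 (GB)
```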

-```
+## Step 4: Edit variables in the `inventory.ini` file
+
+Edit the `deploy_dir` variable to configure the deployment directory.
+
+The global variable is set to `/home/tidb/deploy` by default, and it applies to all services. If the data disk is mounted on the `/data1` directory, you can set it to `/data1/deploy`. For example:
+
+```bash
 ## Global variables
 [all:vars]
 deploy_dir = /data1/deploy
 ```

-To set a deployment directory separately for a service, you can configure host variables when configuring the service host list. Take the TiKV node as an example and it is similar for other services. You must add the first column alias to avoid confusion when the services are mixedly deployed.
+**Note:** To separately set the deployment directory for a service, you can configure the host variable while configuring the service host list in the `inventory.ini` file. It is required to add the first column alias, to avoid confusion in scenarios of mixed services deployment; see the sketch below.
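A sketch of such a host-variable entry (the alias and IP are illustrative), mirroring the inventory examples above:

```
[tikv_servers]
TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy
```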
-> **Note:** To enable the following control variables, use the capitalized `True`. To disable the following control variables, use the capitalized `False`.
+To enable the following control variables, use the capitalized `True`. To disable the following control variables, use the capitalized `False`.

[...]

 | grafana_admin_user | the username of Grafana administrator; default `admin` |
 | grafana_admin_password | the password of Grafana administrator account; default `admin`; used to import Dashboard and create the API key using Ansible; update this variable after you modify it through Grafana web |

-## Deploy the TiDB cluster
+## Step 5: Deploy the TiDB cluster

 When `ansible-playbook` runs Playbook, the default concurrent number is 5. If many deployment target machines are deployed, you can add the `-f` parameter to specify the concurrency, such as `ansible-playbook deploy.yml -f 10`.

-The following example uses the `tidb` user account as the user who runs the service.
-
-To deploy TiDB using a normal user account, take the following steps:
+The following example uses `tidb` as the user who runs the service.

 1. Edit the `tidb-ansible/inventory.ini` file to make sure `ansible_user = tidb`.
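The individual deployment commands are collapsed in this diff. In tidb-ansible, the deployment is typically driven by the following playbooks in order (a sketch, not the full procedure):

```bash
ansible-playbook local_prepare.yml  # download the TiDB binary packages to the Control Machine
ansible-playbook bootstrap.yml      # initialize the system environment of the target machines
ansible-playbook deploy.yml         # deploy the cluster; add -f 10 to raise concurrency
ansible-playbook start.yml          # start the cluster
```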
@@ -318,9 +341,7 @@ To deploy TiDB using a normal user account, take the following steps:

 ## Test the cluster

-> **Note:** Because TiDB is compatible with MySQL, you must use MySQL client to connect to TiDB directly.
-
-It is recommended to configure load balancing to provide uniform SQL interface.
+Because TiDB is compatible with MySQL, you must use the MySQL client to connect to TiDB directly. It is recommended to configure load balancing to provide uniform SQL interface.

 1. Connect to the TiDB cluster using the MySQL client.
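A sketch of the connection command (the host is illustrative; 4000 is TiDB's default client port):

```bash
mysql -u root -h 172.16.10.1 -P 4000
```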
@@ -336,7 +357,7 @@ It is recommended to configure load balancing to provide uniform SQL interface.
 http://172.16.10.1:3000
 ```

-The default account and password: `admin`/`admin`.
+> **Note:** The default account and password: `admin`/`admin`.
tikv/deploy-tikv-using-ansible.md (5 additions, 3 deletions)
@@ -23,11 +23,11 @@ This guide describes how to install and deploy TiKV using Ansible. Ansible is an

 - Network between machines

-- Same time and time zone for all machines with the NTP service on to synchronize the correct time.
+- Same time and time zone for all machines with the NTP service on to synchronize the correct time

     See [How to check whether the NTP service is normal](../op-guide/ansible-deployment#how-to-check-whether-the-ntp-service-is-normal).

-- Create a normal `tidb` user account as the user who runs the service.
+- Create a normal `tidb` user account as the user who runs the service

     The `tidb` user can sudo to the root user without a password. See [How to configure SSH mutual trust and sudo without password](../op-guide/ansible-deployment#how-to-configure-ssh-mutual-trust-and-sudo-without-password).
@@ -42,7 +42,7 @@ This guide describes how to install and deploy TiKV using Ansible. Ansible is an
 - Git installed
 - SSH Mutual Trust configured

-    In the Control Machine, you can log in to the deployment target machine using `tidb` user account without a password. See [How to configure SSH mutual trust and sudo without password](../op-guide/ansible-deployment#how-to-configure-ssh-mutual-trust-and-sudo-without-password).
+    In the Control Machine, you can log in to the deployment target machine using the `tidb` user account without a password. See [How to configure SSH mutual trust and sudo without password](../op-guide/ansible-deployment#how-to-configure-ssh-mutual-trust-and-sudo-without-password).

 ## Step 1: Download TiDB-Ansible to the Control Machine
@@ -227,6 +227,8 @@ Edit the parameters in the service configuration file:

 1. Edit the `deploy_dir` variable to configure the deployment directory.

+    The global variable is set to `/home/tidb/deploy` by default, and it applies to all services. If the data disk is mounted on the `/data1` directory, you can set it to `/data1/deploy`. For example:
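The example itself lies beyond this hunk; it presumably mirrors the op-guide version shown earlier:

```bash
## Global variables
[all:vars]
deploy_dir = /data1/deploy
```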