Commit
Tiup 5.0: update deploy, upgrade, maintain docs (pingcap#5099)
TomShawn authored Mar 29, 2021
1 parent 343e916 commit 52b0926
Showing 16 changed files with 371 additions and 766 deletions.
TOC.md (12 changes: 6 additions & 6 deletions)
@@ -22,7 +22,7 @@
+ Deploy
  + [Software and Hardware Requirements](/hardware-and-software-requirements.md)
  + [Environment Configuration Checklist](/check-before-deployment.md)
-   + Topology Patterns
+   + Plan Cluster Topology
    + [Minimal Topology](/minimal-deployment-topology.md)
    + [TiFlash Topology](/tiflash-deployment-topology.md)
    + [TiCDC Topology](/ticdc-deployment-topology.md)
@@ -31,11 +31,12 @@
    + [Cross-DC Topology](/geo-distributed-deployment-topology.md)
    + [Hybrid Topology](/hybrid-deployment-topology.md)
  + Install and Start
-     + Linux OS
-       + [Use TiUP (Recommended)](/production-deployment-using-tiup.md)
-       + [Use TiUP Offline (Recommended)](/production-offline-deployment-using-tiup.md)
-     + [Deploy in Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/stable)
+     + [Use TiUP (Recommended)](/production-deployment-using-tiup.md)
+     + [Deploy in Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/stable)
  + [Verify Cluster Status](/post-installation-check.md)
+   + Test Cluster Performance
+     + [Test TiDB Using Sysbench](/benchmark/benchmark-tidb-using-sysbench.md)
+     + [Test TiDB Using TPC-C](/benchmark/benchmark-tidb-using-tpcc.md)
+ Migrate
  + [Overview](/migration-overview.md)
  + Migrate from MySQL
@@ -49,7 +50,6 @@
+ Maintain
  + Upgrade
    + [Use TiUP (Recommended)](/upgrade-tidb-using-tiup.md)
-     + [Use TiUP Offline (Recommended)](/upgrade-tidb-using-tiup-offline.md)
    + [Use TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/v1.1/upgrade-a-tidb-cluster)
  + Scale
    + [Use TiUP (Recommended)](/scale-tidb-using-tiup.md)
benchmark/benchmark-tidb-using-sysbench.md (124 changes: 25 additions & 99 deletions)
@@ -5,54 +5,19 @@ aliases: ['/docs/dev/benchmark/benchmark-tidb-using-sysbench/','/docs/dev/benchm…

# How to Test TiDB Using Sysbench

- In this test, Sysbench 1.0.14 and TiDB 3.0 Beta are used. It is recommended to use Sysbench 1.0 or later, which can be [downloaded here](https://github.com/akopytov/sysbench/releases/tag/1.0.14).

- ## Test environment

- - [Hardware recommendations](/hardware-and-software-requirements.md)

- - The TiDB cluster is deployed according to the [TiDB Deployment Guide](https://pingcap.com/docs/v3.0/online-deployment-using-ansible/). Suppose there are 3 servers in total. It is recommended to deploy 1 TiDB instance, 1 PD instance, and 1 TiKV instance on each server. As for disk space, supposing that there are 32 tables with 10M rows of data in each table, it is recommended that the disk space where TiKV's data directory resides is larger than 512 GB.

-   The number of concurrent connections to a single TiDB cluster is recommended to be under 500. If you need to increase the concurrency pressure on the entire system, you can add TiDB instances to the cluster; how many depends on the pressure of the test.

-   IDC machines:

-   | Type | Name |
-   |:---- |:---- |
-   | OS | Linux (CentOS 7.3.1611) |
-   | CPU | 40 vCPUs, Intel® Xeon® CPU E5-2630 v4 @ 2.20GHz |
-   | RAM | 128GB |
-   | DISK | Intel Optane SSD P4800X 375G * 1 |
-   | NIC | 10Gb Ethernet |
+ It is recommended to use Sysbench 1.0 or later, which can be [downloaded here](https://github.com/akopytov/sysbench/releases/tag/1.0.14).
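Besides downloading the release tarball linked above, you can install Sysbench from its upstream package repository; a minimal sketch, assuming a CentOS host with `yum` (adjust for your distribution):

```shell
# Add the upstream Sysbench package repository (assumes CentOS with yum)
curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
sudo yum -y install sysbench

# Confirm the version is 1.0 or later
sysbench --version
```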

## Test plan

- ### TiDB version information

- | Component | GitHash |
- |:---- |:---- |
- | TiDB | 7a240818d19ae96e4165af9ea35df92466f59ce6 |
- | TiKV | e26ceadcdfe94fb6ff83b5abb614ea3115394bcd |
- | PD | 5e81548c3c1a1adab056d977e7767307a39ecb70 |

- ### Cluster topology

- | Machine IP | Deployment instance |
- |:---- |:---- |
- | 172.16.30.31 | 3\*sysbench |
- | 172.16.30.33 | 1\*tidb 1\*pd 1\*tikv |
- | 172.16.30.34 | 1\*tidb 1\*pd 1\*tikv |
- | 172.16.30.35 | 1\*tidb 1\*pd 1\*tikv |

### TiDB configuration

- A higher log level means fewer logs are printed, which positively influences TiDB performance. Enable `prepared plan cache` in the TiDB configuration to lower the cost of optimizing the execution plan. Specifically, you can add the following settings to the TiDB configuration file:
+ A higher log level means fewer logs are printed, which positively influences TiDB performance. Enable `prepared plan cache` in the TiDB configuration to lower the cost of optimizing the execution plan. Specifically, you can add the following settings to the TiUP configuration file:

- ```toml
- [log]
- level = "error"
- [prepared-plan-cache]
- enabled = true
+ ```yaml
+ server_configs:
+   tidb:
+     log.level: "error"
+     prepared-plan-cache.enabled: true
```
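The `server_configs` block above belongs to the topology file that TiUP manages; a minimal sketch of applying such settings to a running cluster, assuming a cluster named `tidb-test`:

```shell
# Open the managed topology/configuration in an editor (cluster name is an assumption)
tiup cluster edit-config tidb-test

# Roll the changes out; -R tidb restarts only the TiDB instances
tiup cluster reload tidb-test -R tidb
```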
### TiKV configuration
@@ -63,22 +28,22 @@ There are multiple Column Families on TiKV cluster which are mainly used to stor…
Default CF : Write CF = 4 : 1
- Configuring the block cache of RocksDB on TiKV should be based on the machine’s memory size, in order to make full use of the memory. To deploy a TiKV cluster on a 40GB virtual machine, it is suggested to configure the block cache as follows:
+ Configuring the block cache of RocksDB on TiKV should be based on the machine’s memory size, in order to make full use of the memory. To deploy a TiKV cluster on a 40GB virtual machine, it is recommended to configure the block cache as follows:

- ```toml
- log-level = "error"
- [rocksdb.defaultcf]
- block-cache-size = "24GB"
- [rocksdb.writecf]
- block-cache-size = "6GB"
+ ```yaml
+ server_configs:
+   tikv:
+     log-level: "error"
+     rocksdb.defaultcf.block-cache-size: "24GB"
+     rocksdb.writecf.block-cache-size: "6GB"
```
- For TiDB 3.0 or later versions, you can also use the shared block cache to configure:
+ You can also configure TiKV to share the block cache:

- ```toml
- log-level = "error"
- [storage.block-cache]
- capacity = "30GB"
+ ```yaml
+ server_configs:
+   tikv:
+     storage.block-cache.capacity: "30GB"
```
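Note that the shared capacity of 30GB matches the sum of the per-CF sizes above (24GB + 6GB). To check that the setting took effect, one option is to query it through TiDB; a sketch assuming a MySQL client and placeholder host/port (`SHOW CONFIG` requires TiDB 4.0 or later):

```shell
# Query the effective TiKV configuration through TiDB (host/port are placeholders)
mysql -h 127.0.0.1 -P 4000 -u root \
  -e "SHOW CONFIG WHERE type='tikv' AND name='storage.block-cache.capacity';"
```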
For more detailed information on TiKV performance tuning, see [Tune TiKV Performance](/tune-tikv-memory-performance.md).
@@ -87,7 +52,7 @@ For more detailed information on TiKV performance tuning, see [Tune TiKV Perform…
> **Note:**
>
- > This test was performed without load balancing tools such as HAproxy. We run the Sysbench test on individual TiDB node and added the results up. The load balancing tools and the parameters of different versions might also impact the performance.
+ > The test in this document was performed without load balancing tools such as HAProxy. We ran the Sysbench test on each individual TiDB node and added the results up. The load balancing tools and the parameters of different versions might also impact the performance.

### Sysbench configuration
@@ -123,6 +88,10 @@ db-driver=mysql
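The `--config-file=config` option used by the Sysbench commands later in this document points to an option file whose last line, `db-driver=mysql`, appears in the hunk context above; a minimal sketch of such a file, in which every value is an assumption to adapt to your cluster:

```shell
# Write a minimal Sysbench option file; all values are assumptions
cat > config <<'EOF'
mysql-host=127.0.0.1
mysql-port=4000
mysql-user=root
mysql-db=sbtest
time=600
threads=16
report-interval=10
db-driver=mysql
EOF
```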

### Data import

+ > **Note:**
+ >
+ > If you enable the optimistic transaction model (TiDB uses the pessimistic transaction model by default), TiDB rolls back transactions when a concurrency conflict is found. Setting `tidb_disable_txn_auto_retry` to `off` turns on the automatic retry mechanism after meeting a transaction conflict, which can prevent Sysbench from quitting because of the transaction conflict error.

Before importing the data, it is necessary to configure some TiDB settings. Execute the following command in the MySQL client:

{{< copyable "sql" >}}
@@ -131,7 +100,7 @@ Before importing the data, it is necessary to make some settings to TiDB. Execut…
set global tidb_disable_txn_auto_retry = off;
```
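If you drive the setup from a script rather than an interactive session, a hedged one-liner equivalent (host, port, and user are placeholders):

```shell
# Placeholders: adjust host, port, and user for your cluster
mysql -h 127.0.0.1 -P 4000 -u root -e "set global tidb_disable_txn_auto_retry = off;"
```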

- Then exit the client. TiDB uses an optimistic transaction model that rolls back transactions when a concurrency conflict is found. Setting `tidb_disable_txn_auto_retry` to `off` turns on the automatic retry mechanism after meeting a transaction conflict, which can prevent Sysbench from quitting because of the transaction conflict error.
+ Then exit the client.

Restart the MySQL client and execute the following SQL statement to create a database `sbtest`:

@@ -204,49 +173,6 @@ sysbench --config-file=config oltp_update_index --tables=32 --table-size=1000000…
sysbench --config-file=config oltp_read_only --tables=32 --table-size=10000000 run
```
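The `run` commands above assume the `sbtest` tables already exist and are populated; a sketch of the matching preparation step, using the same table count and size as this document:

```shell
# Create and load 32 tables of 10,000,000 rows each before running the workloads
sysbench --config-file=config oltp_common --tables=32 --table-size=10000000 prepare
```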

- ## Test results

- 32 tables are tested, each with 10M rows of data.

- The Sysbench test was carried out on each of the tidb-servers, and the final result was the sum of all the results.

- ### oltp_point_select

- | Type | Thread | TPS | QPS | avg. latency (ms) | p95 latency (ms) | max latency (ms) |
- |:---- |:---- |:---- |:---- |:---- |:---- |:---- |
- | point_select | 3\*8 | 67502.55 | 67502.55 | 0.36 | 0.42 | 141.92 |
- | point_select | 3\*16 | 120141.84 | 120141.84 | 0.40 | 0.52 | 20.99 |
- | point_select | 3\*32 | 170142.92 | 170142.92 | 0.58 | 0.99 | 28.08 |
- | point_select | 3\*64 | 195218.54 | 195218.54 | 0.98 | 2.14 | 21.82 |
- | point_select | 3\*128 | 208189.53 | 208189.53 | 1.84 | 4.33 | 31.02 |

- ![oltp_point_select](/media/oltp_point_select.png)

- ### oltp_update_index

- | Type | Thread | TPS | QPS | avg. latency (ms) | p95 latency (ms) | max latency (ms) |
- |:---- |:---- |:---- |:---- |:---- |:---- |:---- |
- | oltp_update_index | 3\*8 | 9668.98 | 9668.98 | 2.51 | 3.19 | 103.88 |
- | oltp_update_index | 3\*16 | 12834.99 | 12834.99 | 3.79 | 5.47 | 176.90 |
- | oltp_update_index | 3\*32 | 15955.77 | 15955.77 | 6.07 | 9.39 | 4787.14 |
- | oltp_update_index | 3\*64 | 18697.17 | 18697.17 | 10.34 | 17.63 | 4539.04 |
- | oltp_update_index | 3\*128 | 20446.81 | 20446.81 | 18.98 | 40.37 | 5394.75 |
- | oltp_update_index | 3\*256 | 23563.03 | 23563.03 | 32.86 | 78.60 | 5530.69 |

- ![oltp_update_index](/media/oltp_update_index.png)

- ### oltp_read_only

- | Type | Thread | TPS | QPS | avg. latency (ms) | p95 latency (ms) | max latency (ms) |
- |:---- |:---- |:---- |:---- |:---- |:---- |:---- |
- | oltp_read_only | 3\*8 | 2411.00 | 38575.96 | 9.92 | 20.00 | 92.23 |
- | oltp_read_only | 3\*16 | 3873.53 | 61976.50 | 12.25 | 16.12 | 56.94 |
- | oltp_read_only | 3\*32 | 5066.88 | 81070.16 | 19.42 | 26.20 | 123.41 |
- | oltp_read_only | 3\*64 | 5466.36 | 87461.81 | 34.65 | 63.20 | 231.19 |
- | oltp_read_only | 3\*128 | 6684.16 | 106946.59 | 57.29 | 97.55 | 180.85 |

- ![oltp_read_only](/media/oltp_read_only.png)

## Common issues

### TiDB and TiKV are both properly configured under high concurrency. Why is the overall performance still low?