From e55416590f35f0e1f49041dfbd279e0774541f23 Mon Sep 17 00:00:00 2001
From: Lilian Lee
Date: Wed, 1 Jul 2020 14:15:28 +0800
Subject: [PATCH] *: add doc for tuning TiKV thread pool performance (#2989)

* *: add doc for tuning TiKV thread pool performance

* Update category

* Update a link

* Apply suggestions from code review

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>

Co-authored-by: yikeke

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>
Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>
---
 TOC.md                                        |  3 +-
 benchmark/benchmark-tidb-using-sysbench.md    |  2 +-
 benchmark/benchmark-tidb-using-tpcc.md        |  2 +-
 br/backup-and-restore-tool.md                 |  2 +-
 faq/tidb-faq.md                               |  4 +-
 production-deployment-from-binary-tarball.md  |  2 +-
 tidb-best-practices.md                        |  2 +-
 ...ance.md => tune-tikv-memory-performance.md |  8 +-
 tune-tikv-thread-performance.md               | 86 +++++++++++++++++++
 9 files changed, 99 insertions(+), 12 deletions(-)
 rename tune-tikv-performance.md => tune-tikv-memory-performance.md (98%)
 create mode 100644 tune-tikv-thread-performance.md

diff --git a/TOC.md b/TOC.md
index 339a2ef3de2d3..34c018dce56da 100644
--- a/TOC.md
+++ b/TOC.md
@@ -90,7 +90,8 @@
 + Performance Tuning
   + Software Tuning
     + Configuration
-      + [TiKV Tuning](/tune-tikv-performance.md)
+      + [Tune TiKV Threads](/tune-tikv-thread-performance.md)
+      + [Tune TiKV Memory](/tune-tikv-memory-performance.md)
       + [TiKV Follower Read](/follower-read.md)
       + [TiFlash Tuning](/tiflash/tune-tiflash-performance.md)
       + [Coprocessor Cache](/coprocessor-cache.md)

diff --git a/benchmark/benchmark-tidb-using-sysbench.md b/benchmark/benchmark-tidb-using-sysbench.md
index e72b362d2c67b..086bf1107c5a1 100644
--- a/benchmark/benchmark-tidb-using-sysbench.md
+++ b/benchmark/benchmark-tidb-using-sysbench.md
@@ -88,7 +88,7 @@ sync-log = false
 capacity = "30GB"
 ```
 
-For more detailed information on TiKV performance tuning, see [Tune TiKV Performance](/tune-tikv-performance.md).
+For more detailed information on TiKV performance tuning, see [Tune TiKV Memory Performance](/tune-tikv-memory-performance.md).
 
 ## Test process
 
diff --git a/benchmark/benchmark-tidb-using-tpcc.md b/benchmark/benchmark-tidb-using-tpcc.md
index 94aa57046a40f..ec75c1c500da6 100644
--- a/benchmark/benchmark-tidb-using-tpcc.md
+++ b/benchmark/benchmark-tidb-using-tpcc.md
@@ -105,7 +105,7 @@ enabled = true
 
 ### Configure TiKV
 
-You can use the basic configuration at the beginning. Then after the test is run, you can adjust it based on the metrics on Grafana and the [TiKV Tuning Instructions](/tune-tikv-performance.md).
+You can use the basic configuration at the beginning. Then after the test is run, you can adjust it based on the metrics on Grafana and the instructions in [Tune TiKV Thread Performance](/tune-tikv-thread-performance.md).
 
 ### Configure BenchmarkSQL
 
diff --git a/br/backup-and-restore-tool.md b/br/backup-and-restore-tool.md
index 327e4f45c24d2..9f8467ab135ef 100644
--- a/br/backup-and-restore-tool.md
+++ b/br/backup-and-restore-tool.md
@@ -82,7 +82,7 @@ The SST file is named in the format of `storeID_regionID_regionEpoch_keyHash_cf`
 - `regionID` is the Region ID;
 - `regionEpoch` is the version number of the Region;
 - `keyHash` is the Hash (sha256) value of the startKey of a range, which ensures the uniqueness of a key;
-- `cf` indicates the [Column Family](/tune-tikv-performance.md#tune-tikv-performance) of RocksDB (`default` or `write` by default).
+- `cf` indicates the [Column Family](/tune-tikv-memory-performance.md#tune-tikv-memory-parameter-performance) of RocksDB (`default` or `write` by default).
 
 ### Restoration principle
 
diff --git a/faq/tidb-faq.md b/faq/tidb-faq.md
index e6f986bf77c64..d75dcff5e6e30 100644
--- a/faq/tidb-faq.md
+++ b/faq/tidb-faq.md
@@ -1021,7 +1021,7 @@ Recommendations:
 1. Improve the hardware configuration. See [Software and Hardware Requirements](/hardware-and-software-requirements.md).
 2. Improve the concurrency. The default value is 10. You can improve it to 50 and have a try. But usually the improvement is 2-4 times of the default value.
 3. Test the `count` in the case of large amount of data.
-4. Optimize the TiKV configuration. See [Performance Tuning for TiKV](/tune-tikv-performance.md).
+4. Optimize the TiKV configuration. See [Tune TiKV Thread Performance](/tune-tikv-thread-performance.md) and [Tune TiKV Memory Performance](/tune-tikv-memory-performance.md).
 
 #### How to view the progress of the current DDL job?
 
@@ -1085,7 +1085,7 @@ In TiDB, data is divided into Regions for management. Generally, the TiDB hotspo
 #### Tune TiKV performance
 
-See [Tune TiKV Performance](/tune-tikv-performance.md).
+See [Tune TiKV Thread Performance](/tune-tikv-thread-performance.md) and [Tune TiKV Memory Performance](/tune-tikv-memory-performance.md).
 
 ## Monitor
 
diff --git a/production-deployment-from-binary-tarball.md b/production-deployment-from-binary-tarball.md
index 73eb0cefcaf28..9c28a5c0fc954 100644
--- a/production-deployment-from-binary-tarball.md
+++ b/production-deployment-from-binary-tarball.md
@@ -202,7 +202,7 @@ Follow the steps below to start PD, TiKV, and TiDB:
 > **Note:**
 >
 > - If you start TiKV or deploy PD in the production environment, it is highly recommended to specify the path for the configuration file using the `--config` parameter. If the parameter is not set, TiKV or PD does not read the configuration file.
-> - To tune TiKV, see [Performance Tuning for TiKV](/tune-tikv-performance.md).
+> - To tune TiKV, see [Tune TiKV Memory Performance](/tune-tikv-memory-performance.md).
 > - If you use `nohup` to start the cluster in the production environment, write the startup commands in a script and then run the script. If not, the `nohup` process might abort because it receives exceptions when the Shell command exits. For more information, see [The TiDB/TiKV/PD process aborts unexpectedly](/troubleshoot-tidb-cluster.md#the-tidbtikvpd-process-aborts-unexpectedly).
 
 For the deployment and use of TiDB monitoring services, see [Deploy Monitoring Services for the TiDB Cluster](/deploy-monitoring-services.md) and [TiDB Monitoring API](/tidb-monitoring-api.md).
 
diff --git a/tidb-best-practices.md b/tidb-best-practices.md
index 7bd3232e643ec..fbdf4cc1b4072 100644
--- a/tidb-best-practices.md
+++ b/tidb-best-practices.md
@@ -148,7 +148,7 @@ It is recommended to deploy the TiDB cluster using [TiUP](/production-deployment
 
 ### Data import
 
-To improve the write performance during the import process, you can tune TiKV's parameters as stated in [Tune TiKV Performance](/tune-tikv-performance.md).
+To improve the write performance during the import process, you can tune TiKV's parameters as stated in [Tune TiKV Memory Parameter Performance](/tune-tikv-memory-performance.md).
 ### Write
 
diff --git a/tune-tikv-performance.md b/tune-tikv-memory-performance.md
similarity index 98%
rename from tune-tikv-performance.md
rename to tune-tikv-memory-performance.md
index a1dbca5be37a6..6780720b1bd6c 100644
--- a/tune-tikv-performance.md
+++ b/tune-tikv-memory-performance.md
@@ -1,11 +1,11 @@
 ---
-title: TiKV Memory Parameters Performance Tuning
+title: Tune TiKV Memory Parameter Performance
 summary: Learn how to tune the TiKV parameters for optimal performance.
-category: reference
-aliases: ['/docs/dev/tune-tikv-performance/','/docs/dev/reference/performance/tune-tikv/']
+category: tuning
+aliases: ['/docs/dev/tune-tikv-performance/','/docs/dev/reference/performance/tune-tikv/','/tidb/dev/tune-tikv-performance']
 ---
 
-# TiKV Memory Parameters Performance Tuning
+# Tune TiKV Memory Parameter Performance
 
 This document describes how to tune the TiKV parameters for optimal performance.
 
diff --git a/tune-tikv-thread-performance.md b/tune-tikv-thread-performance.md
new file mode 100644
index 0000000000000..5744269393949
--- /dev/null
+++ b/tune-tikv-thread-performance.md
@@ -0,0 +1,86 @@
+---
+title: Tune TiKV Thread Pool Performance
+summary: Learn how to tune TiKV thread pools for optimal performance.
+category: tuning
+---
+
+# Tune TiKV Thread Pool Performance
+
+This document introduces TiKV internal thread pools and how to tune their performance.
+
+## Thread pool introduction
+
+In TiKV 4.0, the TiKV thread pool is mainly composed of gRPC, Scheduler, UnifyReadPool, Raftstore, Apply, RocksDB, and some scheduled tasks and detection components that do not consume much CPU. This document mainly introduces the CPU-intensive thread pools that affect the performance of read and write requests.
+
+* The gRPC thread pool: it handles all network requests and forwards requests of different task types to different thread pools.
+
+* The Scheduler thread pool: it detects write transaction conflicts, converts requests such as two-phase commits, pessimistic locking, and transaction rollbacks into key-value pair arrays, and then sends them to the Raftstore thread for Raft log replication.
+
+* The Raftstore thread pool: it processes all Raft messages and proposals to add new logs, and writes the logs to disk. When the logs in the majority of replicas are consistent, this thread pool sends the logs to the Apply thread.
+
+* The Apply thread pool: it receives the committed log sent from the Raftstore thread pool, parses it as key-value requests, writes them to RocksDB, calls the callback function to notify the gRPC thread pool that the write request is complete, and returns the result to the client.
+
+* The RocksDB thread pool: it is the thread pool that RocksDB uses for compaction and flush tasks. For RocksDB's architecture and the `Compact` operation, refer to [RocksDB: A Persistent Key-Value Store for Flash and RAM Storage](https://github.com/facebook/rocksdb).
+
+* The UnifyReadPool thread pool: it is a new feature introduced in TiKV 4.0 that combines the previous Coprocessor thread pool and Storage Read Pool. All read requests, such as kv get, kv batch get, raw kv get, and Coprocessor requests, are executed in this thread pool.
+
+## TiKV read-only requests
+
+TiKV's read requests are divided into the following types:
+
+- Simple queries that specify a certain row or several rows, running in the Storage Read Pool.
+- Complex aggregate calculations and range queries, running in the Coprocessor Read Pool.
+
+Starting from version 4.0, these two types of read requests can be configured to use the same thread pool, which reduces the number of threads and lowers usage costs. The unified thread pool is disabled by default (point queries and Coprocessor requests use separate thread pools by default). To enable it, set the `readpool.storage.use-unified-pool` configuration item to `true`, as shown in the sketch below.
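+
+The following minimal `tikv.toml` sketch shows where these items live in the configuration file. The `max-thread-count` value below is only an illustration, not a recommended default:
+
+```toml
+[readpool.storage]
+# Route point-query requests into the unified read pool
+# instead of a separate Storage Read Pool.
+use-unified-pool = true
+
+[readpool.unified]
+# Upper limit of threads in the unified read pool.
+# If unset, TiKV uses 80% of the machine's CPU cores.
+max-thread-count = 10
+```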
+
+## Performance tuning for TiKV thread pools
+
+* The gRPC thread pool.
+
+    The default size (configured by `server.grpc-concurrency`) of the gRPC thread pool is `4`. This thread pool has almost no computing overhead and is mainly responsible for network I/O and deserializing requests, so generally you do not need to adjust the default configuration.
+
+    - If the machine deployed with TiKV has a small number (less than or equal to 8) of CPU cores, consider setting the `server.grpc-concurrency` configuration item to `2`.
+    - If the machine deployed with TiKV has a high-end configuration, TiKV undertakes a large number of read and write requests, and the `gRPC poll CPU` metric in the Thread CPU panel on Grafana exceeds 80% of `server.grpc-concurrency`, then consider increasing the value of `server.grpc-concurrency` to keep the thread pool usage rate below 80% (that is, keep the metric on Grafana lower than `80% * server.grpc-concurrency`).
+
+* The Scheduler thread pool.
+
+    When TiKV detects that the machine has 16 or more CPU cores, the default size (configured by `storage.scheduler-worker-pool-size`) of the Scheduler thread pool is `8`; when TiKV detects fewer than 16 CPU cores, the default size is `4`.
+
+    This thread pool is mainly used to convert complex transaction requests into simple key-value read and write requests. However, **the Scheduler thread pool itself does not perform any write operation**.
+
+    - If it detects a transaction conflict, this thread pool returns the conflict result to the client in advance.
+    - If no conflict is detected, this thread pool merges the key-value requests that perform write operations into a Raft log and sends it to the Raftstore thread for Raft log replication.
+
+    Generally speaking, to avoid excessive thread switching, it is best to keep the utilization rate of the Scheduler thread pool between 50% and 75%. If the thread pool size is `8`, it is recommended to keep `TiKV-Details.Thread CPU.scheduler worker CPU` on Grafana between 400% and 600%.
+
+* The Raftstore thread pool.
+
+    The Raftstore thread pool is the most complex thread pool in TiKV. The default size (configured by `raftstore.store-pool-size`) is `2`. All write requests are written into RocksDB from the Raftstore thread using `fsync`, unless you manually set `raftstore.sync-log` to `false` (which improves write performance to a certain degree, but increases the risk of data loss in the case of machine failure).
+
+    Because of I/O, Raftstore threads theoretically cannot reach 100% CPU usage. To reduce disk writes as much as possible, multiple write requests are batched together and written to RocksDB. It is recommended to keep the overall CPU usage below 60% (if the default number of threads is `2`, keep `TiKV-Details.Thread CPU.Raft store CPU` on Grafana within 120%). Do not blindly increase the size of the Raftstore thread pool to improve write performance, because this might increase the disk burden and degrade performance. A combined configuration sketch for these write-path settings is provided at the end of this document.
+
+* The UnifyReadPool thread pool.
+
+    The UnifyReadPool is responsible for handling all read requests. The default size (configured by `readpool.unified.max-thread-count`) is 80% of the number of the machine's CPU cores. For example, if the machine CPU has 16 cores, the default thread pool size is 12. It is recommended to adjust the CPU usage rate according to the application workloads and keep it between 60% and 90% of the thread pool size.
+
+    If the peak value of the `TiKV-Details.Thread CPU.Unified read pool CPU` metric on Grafana does not exceed 800%, it is recommended to set `readpool.unified.max-thread-count` to `10`. Too many threads can cause more frequent thread switching and take up resources of other thread pools.
+
+* The RocksDB thread pool.
+
+    The RocksDB thread pool is the thread pool that RocksDB uses for compaction and flush tasks. Usually, you do not need to configure it. If you do, the items below map to `tikv.toml` as shown in the second sketch at the end of this document.
+
+    * If the machine has a small number of CPU cores, set both `rocksdb.max-background-jobs` and `raftdb.max-background-jobs` to `4`.
+    * If you encounter write stall, go to Write Stall Reason in **RocksDB-kv** on Grafana and check the metrics that are not `0`.
+
+        * If the write stall is caused by reasons related to pending compaction bytes, set `rocksdb.max-sub-compactions` to `2` or `3`. This configuration item indicates the number of sub-threads allowed for a single compaction job. Its default value is `3` in TiKV 4.0 and `1` in TiKV 3.0.
+        * If the reason is related to the memtable count, it is recommended to increase the `max-write-buffer-number` of all column families (`5` by default).
+        * If the reason is related to the level0 file limit, it is recommended to increase the values of the following parameters to `64` or a larger number:
+
+        ```
+        rocksdb.defaultcf.level0-slowdown-writes-trigger
+        rocksdb.writecf.level0-slowdown-writes-trigger
+        rocksdb.lockcf.level0-slowdown-writes-trigger
+        rocksdb.defaultcf.level0-stop-writes-trigger
+        rocksdb.writecf.level0-stop-writes-trigger
+        rocksdb.lockcf.level0-stop-writes-trigger
+        ```
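+
+As a worked example of the write-path settings above, the following `tikv.toml` sketch restates the values discussed for the gRPC, Scheduler, and Raftstore thread pools. The values assume a 16-core machine and should be validated against the Grafana metrics mentioned above, not applied blindly:
+
+```toml
+[server]
+# gRPC thread pool size (default 4); keep the `gRPC poll CPU` metric
+# below 80% * grpc-concurrency.
+grpc-concurrency = 4
+
+[storage]
+# Scheduler thread pool size; default is 8 on machines with 16 or more cores.
+# Keep `scheduler worker CPU` between 50% and 75% (400%-600% for 8 threads).
+scheduler-worker-pool-size = 8
+
+[raftstore]
+# Raftstore thread pool size (default 2); keep `Raft store CPU`
+# below 60% (within 120% for 2 threads).
+store-pool-size = 2
+# Keep the default `true`; `false` trades durability for write performance.
+sync-log = true
+```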
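+
+If you do need to adjust the RocksDB thread pool and the write stall parameters described above, they map to `tikv.toml` as in the sketch below. The values are the ones suggested above and are starting points, not universal recommendations:
+
+```toml
+[rocksdb]
+# Background compaction and flush threads; suggested for machines
+# with a small number of CPU cores.
+max-background-jobs = 4
+# Sub-threads allowed for a single compaction job (default 3 in TiKV 4.0).
+max-sub-compactions = 3
+
+[raftdb]
+max-background-jobs = 4
+
+[rocksdb.defaultcf]
+# Default is 5; raise it if write stall is caused by the memtable count.
+max-write-buffer-number = 5
+# Suggested values when write stall is caused by the level0 file limit.
+level0-slowdown-writes-trigger = 64
+level0-stop-writes-trigger = 64
+
+[rocksdb.writecf]
+level0-slowdown-writes-trigger = 64
+level0-stop-writes-trigger = 64
+
+[rocksdb.lockcf]
+level0-slowdown-writes-trigger = 64
+level0-stop-writes-trigger = 64
+```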