diff --git a/v1.0/.gitignore b/v1.0/.gitignore new file mode 100755 index 0000000000000..728e6000141c9 --- /dev/null +++ b/v1.0/.gitignore @@ -0,0 +1,9 @@
+# Created by .ignore support plugin (hsz.mobi)
+### Example user template template
+### Example user template
+
+# IntelliJ project files
+.idea/
+*.iml
+out
+gen
diff --git a/v1.0/FAQ.md b/v1.0/FAQ.md new file mode 100755 index 0000000000000..a5dbc1cdd22f5 --- /dev/null +++ b/v1.0/FAQ.md @@ -0,0 +1,800 @@
+---
+title: TiDB FAQ
+category: faq
+---
+
+# TiDB FAQ
+
+This document lists the most frequently asked questions about TiDB.
+
+## About TiDB
+
+### TiDB introduction and architecture
+
+#### What is TiDB?
+
+TiDB is a distributed SQL database that features horizontal scalability, high availability and consistent distributed transactions. It also enables you to use MySQL’s SQL syntax and protocol to manage and retrieve data.
+
+#### What is TiDB's architecture?
+
+The TiDB cluster has three components: the TiDB server, the PD (Placement Driver) server, and the TiKV server. For more details, see [TiDB architecture](overview/#tidb-architecture).
+
+#### Is TiDB based on MySQL?
+
+No. TiDB supports MySQL syntax and protocol, but it is a new open source database that is developed and maintained by PingCAP, Inc.
+
+#### What is the respective responsibility of TiDB, TiKV and PD (Placement Driver)?
+
+- TiDB works as the SQL computing layer, mainly responsible for parsing SQL, generating query plans, and producing executors.
+- TiKV works as a distributed Key-Value storage engine, used to store the actual data. In short, TiKV is the storage engine of TiDB.
+- PD works as the cluster manager of TiDB, which manages TiKV metadata, allocates timestamps, and makes decisions for data placement and load balancing.
+
+#### Is it easy to use TiDB?
+
+Yes, it is. When all the required services are started, you can use TiDB as easily as a MySQL server.
You can replace MySQL with TiDB to power your applications without changing a single line of code in most cases. You can also manage TiDB using the popular MySQL management tools.
+
+#### How is TiDB compatible with MySQL?
+
+Currently, TiDB supports the majority of MySQL 5.7 syntax, but does not support triggers, stored procedures, user-defined functions, or foreign keys. For more details, see [Compatibility with MySQL](sql/mysql-compatibility.md).
+
+#### How is TiDB highly available?
+
+TiDB is self-healing. All of the three components, TiDB, TiKV and PD, can tolerate failures of some of their instances. With its strong consistency guarantee, whether it is machine failures or even downtime of an entire data center, your data can be recovered automatically. For more information, see [High availability](overview.md#high-availability).
+
+#### How is TiDB strongly consistent?
+
+TiDB uses the [Raft consensus algorithm](https://raft.github.io/) to ensure consistency among multiple replicas. At the bottom layer, TiDB uses a model of replication log + State Machine to replicate data. For write requests, the data is written to a Leader, and the Leader then replicates the command to its Followers in the form of logs. When the majority of nodes in the cluster have received this log, the log is committed and can be applied to the State Machine. TiDB has the latest data even if a minority of the replicas are lost.
+
+#### Does TiDB support distributed transactions?
+
+Yes. The transaction model in TiDB is inspired by Google’s Percolator, a paper published in 2010. It’s mainly a two-phase commit protocol with some practical optimizations. This model relies on a timestamp allocator to assign monotonically increasing timestamps to each transaction, so that conflicts can be detected. PD works as the timestamp allocator in a TiDB cluster.
+
+#### What programming language can I use to work with TiDB?
+
+Any language supported by a MySQL client or driver.
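The timestamp-based conflict detection described above can be sketched as a toy model in Python (illustrative only — this is not TiDB's actual implementation, and all names are made up): an allocator hands out monotonically increasing timestamps, and a transaction rolls back at commit time if any key it wrote was committed by another transaction after its own start timestamp.

```python
import itertools

class Oracle:
    """Toy stand-in for PD's timestamp allocator: monotonically increasing."""
    def __init__(self):
        self._counter = itertools.count(1)

    def ts(self):
        return next(self._counter)

class Store:
    """Records the latest commit timestamp per key, for conflict detection."""
    def __init__(self, oracle):
        self.oracle = oracle
        self.last_commit_ts = {}

class Txn:
    def __init__(self, store):
        self.store = store
        self.start_ts = store.oracle.ts()
        self.writes = set()

    def write(self, key):
        self.writes.add(key)

    def commit(self):
        # If any written key was committed by another transaction after our
        # start_ts, this commit conflicts and must roll back.
        for key in self.writes:
            if self.store.last_commit_ts.get(key, 0) > self.start_ts:
                return False
        commit_ts = self.store.oracle.ts()
        for key in self.writes:
            self.store.last_commit_ts[key] = commit_ts
        return True

store = Store(Oracle())
t1, t2 = Txn(store), Txn(store)
t1.write("k")
t2.write("k")
ok1 = t1.commit()
ok2 = t2.commit()
print(ok1, ok2)  # True False: the second writer detects the conflict and rolls back
```

Real TiKV keeps far more state (locks, MVCC versions, a primary key for the two-phase commit), but this first-committer-wins check against the start timestamp is the core of the optimistic model.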
+ +#### Can I use other Key-Value storage engines with TiDB? + +Yes. Besides TiKV, TiDB supports many popular standalone storage engines, such as GolevelDB and BoltDB. If the storage engine is a KV engine that supports transactions and it provides a client that meets the interface requirement of TiDB, then it can connect to TiDB. + +#### What's the recommended solution for the deployment of three geo-distributed data centers? + +The architecture of TiDB guarantees that it fully supports geo-distribution and multi-activeness. Your data and applications are always-on. All the outages are transparent to your applications and your data can recover automatically. The operation depends on the network latency and stability. It is recommended to keep the latency within 5ms. Currently, we already have similar use cases. For details, contact info@pingcap.com. + +#### Does TiDB provide any other knowledge resource besides the documentation? + +Currently, [TiDB documentation](https://www.pingcap.com/docs/) is the most important and timely way to get knowledge of TiDB. In addition, we also have some technical communication groups. If you have any needs, contact info@pingcap.com. + +#### What are the MySQL variables that TiDB is compatible with? + +See [The System Variables](sql/variable.md). + +#### Does TiDB support `select for update`? + +Yes. But it differs from MySQL in syntax. As a distributed database, TiDB uses the optimistic lock. `select for update` does not lock data when the transaction is started, but checks conflicts when the transaction is committed. If the check reveals conflicts, the committing transaction rolls back. + +#### Can the codec of TiDB guarantee that the UTF-8 string is memcomparable? Is there any coding suggestion if our key needs to support UTF-8? + +The character sets of TiDB use UTF-8 by default and currently only support UTF-8. The string of TiDB uses the memcomparable format. 
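A useful property behind this design: UTF-8 was defined so that comparing encoded bytes gives the same order as comparing code points, which is what makes UTF-8 strings directly usable in a memcomparable key format. A quick illustrative check in Python:

```python
# Byte-wise order of UTF-8-encoded strings matches code-point order,
# so memcmp-style key comparison sorts UTF-8 text correctly.
words = ["a", "ab", "z", "é", "中", "😀"]
by_bytes = sorted(words, key=lambda s: s.encode("utf-8"))
by_codepoints = sorted(words)  # Python compares str by code point
same_order = by_bytes == by_codepoints
print(same_order)  # True
```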
+ +### TiDB techniques + +#### TiKV for data storage + +See [TiDB Internal (I) - Data Storage](https://www.pingcap.com/blog/2017-07-11-tidbinternal1/). + +#### TiDB for data computing + +See [TiDB Internal (II) - Computing](https://www.pingcap.com/blog/2017-07-11-tidbinternal2/). + +#### PD for scheduling + +See [TiDB Internal (III) - Scheduling](https://www.pingcap.com/blog/2017-07-20-tidbinternal3/). + +## Install, deploy and upgrade + +### Prepare + +#### Operating system version requirements + +| Linux OS Platform | Version | +| :-----------------------:| :----------: | +| Red Hat Enterprise Linux | 7.3 or later | +| CentOS | 7.3 or later | +| Oracle Enterprise Linux | 7.3 or later | + +##### Why it is recommended to deploy the TiDB cluster on CentOS 7? + +As an open source distributed NewSQL database with high performance, TiDB can be deployed in the Intel architecture server and major virtualization environments and runs well. TiDB supports most of the major hardware networks and Linux operating systems. For details, see [Software and Hardware Requirements](op-guide/recommendation.md) for deploying TiDB. + +#### Server requirements + +You can deploy and run TiDB on the 64-bit generic hardware server platform in the Intel x86-64 architecture. 
The requirements and recommendations about server hardware configuration for development, testing and production environments are as follows: + +##### Development and testing environments + +| Component | CPU | Memory | Local Storage | Network | Instance Number (Minimum Requirement) | +| :------: | :-----: | :-----: | :----------: | :------: | :----------------: | +| TiDB | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with PD) | +| PD | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB) | +| TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3 | +| | | | | Total Server Number | 4 | + +##### Production environment + +| Component | CPU | Memory | Hard Disk Type | Network | Instance Number (Minimum Requirement) | +| :-----: | :------: | :------: | :------: | :------: | :-----: | +| TiDB | 16 core+ | 48 GB+ | SAS | 10 Gigabit network card (2 preferred) | 2 | +| PD | 8 core+ | 16 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 | +| TiKV | 16 core+ | 48 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 | +| Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1 | +| | | | | Total Server Number | 9 | + +##### What's the purposes of 2 network cards of 10 gigabit? + +As a distributed cluster, TiDB has a high demand on time, especially for PD, because PD needs to distribute unique timestamps. If the time in the PD servers is not consistent, it takes longer waiting time when switching the PD server. The bond of two network cards guarantees the stability of data transmission, and 10 gigabit guarantees the transmission speed. Gigabit network cards are prone to meet bottlenecks, therefore it is strongly recommended to use 10 gigabit network cards. + +##### Is it feasible if we don't use RAID for SSD? + +If the resources are adequate, it is recommended to use RAID for SSD. If the resources are inadequate, it is acceptable not to use RAID for SSD. 
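The two-card bond mentioned above is configured at the OS level rather than in TiDB. As a rough sketch on CentOS 7 (the device name, bonding mode, and addresses below are illustrative placeholders, not values prescribed by this guide):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative values)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
```

Each physical 10-gigabit interface then references the bond with `MASTER=bond0` and `SLAVE=yes` in its own ifcfg file.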
+ +### Install and deploy + +#### Deploy TiDB using Ansible (recommended) + +See [Ansible Deployment](op-guide/ansible-deployment.md). + +##### Why the modified `toml` configuration for TiKV/PD does not take effect? + +You need to set the `--config` parameter in TiKV/PD to make the `toml` configuration effective. TiKV/PD does not read the configuration by default. Currently, this issue only occurs when deploying using Binary. For TiKV, edit the configuration and restart the service. For PD, the configuration file is only read when PD is started for the first time, after which you can modify the configuration using pd-ctl. For details, see [PD Control User Guide](tools/pd-control.md). + +##### Should I deploy the TiDB monitoring framework (Prometheus + Grafana) on a standalone machine or on multiple machines? What is the recommended CPU and memory? + +The monitoring machine is recommended to use standalone deployment. It is recommended to use a 8 core CPU with 16 GB+ memory and a 500 GB+ hard disk. + +##### Why the monitor cannot display all metrics? + +Check the time difference between the machine time of the monitor and the time within the cluster. If it is large, you can correct the time and the monitor will display all the metrics. + +##### What is the function of supervise/svc/svstat service? 
+
+- supervise: the daemon process, to manage the processes
+- svc: to start and stop the service
+- svstat: to check the process status
+
+##### Description of inventory.ini variables
+
+| Variable | Description |
+| ---- | ------- |
+| cluster_name | the name of a cluster, adjustable |
+| tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches |
+| deployment_method | the method of deployment, binary by default, Docker optional |
+| process_supervision | the process supervision method, systemd by default, supervise optional |
+| timezone | the timezone of the managed node, adjustable, `Asia/Shanghai` by default, used with the `set_timezone` variable |
+| set_timezone | to edit the timezone of the managed node, True by default; False means not to edit it |
+| enable_elk | currently not supported |
+| enable_firewalld | to enable the firewall, disabled by default |
+| enable_ntpd | to monitor the NTP service of the managed node, True by default; do not disable it |
+| machine_benchmark | to monitor the disk IOPS of the managed node, True by default; do not disable it |
+| set_hostname | to edit the hostname of the managed node based on the IP, False by default |
+| enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable |
+| zookeeper_addrs | the ZooKeeper address of the binlog Kafka cluster |
+| enable_slow_query_log | to record the slow query log of TiDB into a single file: ({{ deploy_dir }}/log/tidb_slow_query.log). False by default, to record it into the TiDB log |
+| deploy_without_tidb | the Key-Value mode, deploy only PD, TiKV and the monitoring service, not TiDB; set the IP of the tidb_servers host group to null in the `inventory.ini` file |
+
+#### Deploy TiDB offline using Ansible
+
+It is not recommended to deploy TiDB offline using Ansible unless the Control Machine has no access to an external network, in which case you can deploy TiDB offline using Ansible.
For details, see [Offline Deployment Using Ansible](op-guide/offline-ansible-deployment.md).
+
+### Upgrade
+
+#### How to perform rolling updates using Ansible?
+
+- Apply rolling updates to the TiKV node (only update the TiKV service).
+
+    ```
+    ansible-playbook rolling_update.yml --tags=tikv
+    ```
+
+- Apply rolling updates to all services.
+
+    ```
+    ansible-playbook rolling_update.yml
+    ```
+
+#### What is the effect of rolling updates?
+
+When you apply rolling updates to TiDB services, the running application is not affected. You need to configure the minimum cluster topology (TiDB * 2, PD * 3, TiKV * 3). If the Pump/Drainer service is involved in the cluster, it is recommended to stop Drainer before rolling updates. When you update TiDB, Pump is also updated.
+
+#### How to upgrade when I deploy TiDB using Binary?
+
+It is not recommended to deploy TiDB using Binary. The support for upgrading using Binary is not as friendly as using Ansible. It is recommended to deploy TiDB using Ansible.
+
+#### Should I upgrade TiKV or all components generally?
+
+Generally you should upgrade all components, because the whole version is tested together. Upgrade a single component only when an urgent issue occurs and you need to upgrade this component.
+
+#### What causes "Timeout when waiting for search string 200 OK" when starting or upgrading a cluster? How to deal with it?
+
+Possible reasons:
+
+- The process did not start normally.
+- The port is occupied.
+- The process did not stop normally.
+- You used `rolling_update.yml` to upgrade the cluster when the cluster was stopped (operation error).
+
+Solution:
+
+- Log into the node to check the status of the process or port.
+- Correct the incorrect operation procedure.
+
+## Manage the cluster
+
+### Daily management
+
+#### What are the common operations?
+
+| Job | Playbook |
+|:----------------------------------|:-----------------------------------------|
+| Start the cluster | `ansible-playbook start.yml` |
+| Stop the cluster | `ansible-playbook stop.yml` |
+| Destroy the cluster | `ansible-playbook unsafe_cleanup.yml` (If the deployment directory is a mount point, an error will be reported, but the cleanup result is unaffected) |
+| Clean data (for test) | `ansible-playbook unsafe_cleanup_data.yml` |
+| Apply rolling updates | `ansible-playbook rolling_update.yml` |
+| Apply rolling updates to TiKV | `ansible-playbook rolling_update.yml --tags=tikv` |
+| Apply rolling updates to components except PD | `ansible-playbook rolling_update.yml --skip-tags=pd` |
+| Apply rolling updates to the monitoring components | `ansible-playbook rolling_update_monitor.yml` |
+
+#### How to log into TiDB?
+
+You can log into TiDB like logging into MySQL. For example:
+
+```
+mysql -h 127.0.0.1 -uroot -P4000
+```
+
+#### How to modify the system variables in TiDB?
+
+Similar to MySQL, TiDB includes static parameters and system variables. You can modify system variables directly using `set global xxx = n`, but the new value is only effective within the life cycle of the current instance.
+
+#### Where and what are the data directories in TiDB (TiKV)?
+
+TiDB (TiKV) data directories are in `${data-dir}/data/` by default (see [data-dir](https://pingcap.com/docs-cn/op-guide/configuration/#data-dir-1)), which includes four subdirectories: backup, db, raft, and snap, used to store backup data, data, Raft data, and snapshot data respectively.
+
+#### What are the system tables in TiDB?
+
+Similar to MySQL, TiDB includes system tables as well, used to store the information required by the server when it runs.
+
+#### Where are the TiDB/PD/TiKV logs?
+
+By default, TiDB/PD/TiKV outputs its logs to standard error. If a log file is specified by `--log-file` during startup, the log is output to the specified file and rotated daily.
+
+#### How to safely stop TiDB?
+
+If the cluster is deployed using Ansible, you can use the `ansible-playbook stop.yml` command to stop the TiDB cluster. If the cluster is not deployed using Ansible, `kill` all the services directly. The components of TiDB will do a `graceful shutdown`.
+
+#### Can `kill` be executed in TiDB?
+
+- You can `kill` DML statements. First use `show processlist` to find the ID corresponding to the session, and then run `kill id`.
+- You can `kill` DDL statements. First use `admin show ddl jobs` to find the ID of the DDL job you need to kill, and then run `admin cancel ddl jobs 'job_id' [, 'job_id'] ...`. For more details, see the [`ADMIN` statement](sql/admin.md#admin-statement).
+
+#### Does TiDB support session timeout?
+
+Currently, TiDB does not support session timeout at the database level. If you want to implement session timeout and no LB (Load Balancing) is used, have the side that starts the session record the session ID, and implement the timeout in the application. After the timeout, kill the SQL statement using `kill id` on the node that started the query. It is currently recommended to implement session timeout in the application: when the timeout is reached, the application layer reports an exception and continues to execute the subsequent program segments.
+
+#### What is the TiDB version management strategy for production environment? How to avoid frequent upgrade?
+
+Currently, TiDB has a standard management of various versions. Each release contains a detailed change log and [release notes](https://github.com/pingcap/TiDB/releases). Whether it is necessary to upgrade in the production environment depends on the application system. It is recommended to learn the details about the functional differences between the previous and later versions before upgrading.
+
+Take `Release Version: v1.0.3-1-ga80e796` as an example of version number description:
+
+- `v1.0.3` indicates the standard GA version.
+- `-1` indicates the current version has one commit. +- `ga80e796` indicates the version `git-hash`. + +#### What's the difference between various TiDB master versions? How to avoid using the wrong TiDB-Ansible version? + +The TiDB community is highly active. After the GA release, the engineers have been keeping optimizing and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see [TiDB Weekly update](https://pingcap.com/weekly/). + +It is recommended to deploy the TiDB cluster using the latest version of TiDB-Ansible, which will also be updated along with the TiDB version. Besides, TiDB has a unified management of the version number after GA release. You can view the version number using the following two methods: + +- `select tidb_version()` +- `tidb-server -V` + +#### Is there a graphical deployment tool for TiDB? + +Currently no. + +#### How to scale TiDB horizontally? + +As your business grows, your database might face the following three bottlenecks: + +- Lack of storage resources which means that the disk space is not enough. + +- Lack of computing resources such as high CPU occupancy. + +- Not enough write and read capacity. + +You can scale TiDB as your business grows. + +- If the disk space is not enough, you can increase the capacity simply by adding more TiKV nodes. When the new node is started, PD will migrate the data from other nodes to the new node automatically. + +- If the computing resources are not enough, check the CPU consumption situation first before adding more TiDB nodes or TiKV nodes. When a TiDB node is added, you can configure it in the Load Balancer. + +- If the capacity is not enough, you can add both TiDB nodes and TiKV nodes. + +#### Why does TiDB use gRPC instead of Thrift? Is it because Google uses it? + +Not really. We need some good features of gRPC, such as flow control, encryption and streaming. 
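The `Release Version` string described earlier follows the `git describe` convention (tag, commits since the tag, abbreviated hash), so it can be unpacked mechanically. A small illustrative parser, not an official tool:

```python
import re

def parse_tidb_version(v):
    """Split 'v1.0.3-1-ga80e796' into (release, commits since tag, git hash)."""
    m = re.fullmatch(r"(v\d+\.\d+\.\d+)(?:-(\d+)-g([0-9a-f]+))?", v)
    if not m:
        raise ValueError("unrecognized version string: %r" % v)
    release, commits, git_hash = m.groups()
    return release, int(commits or 0), git_hash

print(parse_tidb_version("v1.0.3-1-ga80e796"))  # ('v1.0.3', 1, 'a80e796')
print(parse_tidb_version("v1.0.3"))             # ('v1.0.3', 0, None)
```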
+
+#### What does the 92 indicate in `like(bindo.customers.name, jason%, 92)`?
+
+The 92 indicates the escape character, which is ASCII 92 (the backslash `\`) by default.
+
+### Manage the PD server
+
+#### The `TiKV cluster is not bootstrapped` message is displayed when I access PD.
+
+Most of the APIs of PD are available only after the TiKV cluster is initialized. When a new cluster is deployed, this message is displayed if you access PD after PD is started but before TiKV is started. If this message is displayed, start the TiKV cluster. When TiKV is initialized, PD becomes accessible.
+
+#### The `etcd cluster ID mismatch` message is displayed when starting PD.
+
+This is because the `--initial-cluster` in the PD startup parameter contains a member that doesn't belong to this cluster. To solve this problem, check the corresponding cluster of each member, remove the wrong member, and then restart PD.
+
+#### What's the maximum tolerance for time synchronization error of PD?
+
+Theoretically, the smaller the tolerance, the better. During leader changes, if the clock goes back, the process won't proceed until it catches up with the previous leader. PD can tolerate any synchronization error, but a larger error value means a longer period of service stop during the leader change.
+
+#### How does the client connection find PD?
+
+The client connection can only access the cluster through TiDB. TiDB connects PD and TiKV. PD and TiKV are transparent to the client. When TiDB connects to any PD, the PD tells TiDB who the current leader is. If this PD is not the leader, TiDB reconnects to the leader PD.
+
+#### What is the difference between the `leader-schedule-limit` and `region-schedule-limit` scheduling parameters in PD?
+
+- The `leader-schedule-limit` scheduling parameter is used to balance the Leader number of different TiKV servers, affecting the load of query processing.
+- The `region-schedule-limit` scheduling parameter is used to balance the replica number of different TiKV servers, affecting the data amount of different nodes. + +#### Is the number of replicas in each region configurable? If yes, how to configure it? + +Yes. Currently, you can only update the global number of replicas. When started for the first time, PD reads the configuration file (conf/pd.yml) and uses the max-replicas configuration in it. If you want to update the number later, use the pd-ctl configuration command `config set max-replicas $num` and view the enabled configuration using `config show all`. The updating does not affect the applications and is configured in the background. + +Make sure that the total number of TiKV instances is always greater than or equal to the number of replicas you set. For example, 3 replicas need 3 TiKV instances at least. Additional storage requirements need to be estimated before increasing the number of replicas. For more information about pd-ctl, see [PD Control User Guide](tools/pd-control.md). + +#### How to check the health status of the whole cluster when lacking command line cluster management tools? + +You can determine the general status of the cluster using the pd-ctl tool. For detailed cluster status, you need to use the monitor to determine. + +#### How to delete the monitoring data of a cluster node that is offline? + +The offline node usually indicates the TiKV node. You can determine whether the offline process is finished by the pd-ctl or the monitor. After the node is offline, perform the following steps: + +1. Manually stop the relevant services on the offline node. +2. Delete the `node_exporter` data of the corresponding node from the Prometheus configuration file. +3. Delete the data of the corresponding node from Ansible `inventory.ini`. + +### Manage the TiDB server + +#### How to set the `lease` parameter in TiDB? 
+
+The lease parameter (`--lease=60`) is set from the command line when starting a TiDB server. The value of the lease parameter impacts the Database Schema Changes (DDL) speed of the current session. In testing environments, you can set the value to 1s to speed up the testing cycle. But in production environments, it is recommended to set the value to minutes (for example, 60) to ensure DDL safety.
+
+#### Why is it sometimes very slow to run DDL statements?
+
+Possible reasons:
+
+- If you run multiple DDL statements together, the last few DDL statements might run slowly. This is because the DDL statements are executed serially in the TiDB cluster.
+- After you start the cluster successfully, the first DDL operation may take a longer time to run, usually around 30s. This is because the TiDB cluster is electing the leader that processes DDL statements.
+- In rolling updates or shutdown updates, the processing time of DDL statements in the first ten minutes after starting TiDB is affected by the server stop sequence (stopping PD -> TiDB), and by the condition where TiDB does not clean up the registration data in time because TiDB is stopped using the `kill -9` command. When you run DDL statements during this period, for the state change of each DDL, you need to wait for 2 * lease (lease = 10s).
+- If a communication issue occurs between a TiDB server and a PD server in the cluster, the TiDB server cannot get or update the version information from the PD server in time. In this case, you need to wait for 2 * lease for the state processing of each DDL.
+
+#### Can I use S3 as the backend storage engine in TiDB?
+
+No. Currently, TiDB only supports the distributed storage engine and the GoLevelDB/RocksDB/BoltDB engine.
+
+#### Can the `Information_schema` support more real information?
+
+The tables in `Information_schema` exist mainly for compatibility with MySQL, and some third-party software queries information in the tables.
Currently, most of those tables are null. More information will be added to these tables as TiDB is updated.
+
+For the `Information_schema` tables that TiDB currently supports, see [The TiDB System Database](sql/system-database.md).
+
+#### In what scenarios does TiDB report Backoff type messages?
+
+In the communication process between the TiDB server and the TiKV server, the `Server is busy` or `backoff.maxsleep 20000ms` log message is displayed when a large volume of data is being processed. This is because the system is busy while the TiKV server processes data. At this time, you can usually observe that the resource usage of the TiKV host is high. If this occurs, you can increase the server capacity according to the resource usage.
+
+#### What's the maximum number of concurrent connections that TiDB supports?
+
+The current TiDB version has no limit for the maximum number of concurrent connections. If high concurrency leads to an increase in response time, you can increase the capacity by adding TiDB nodes.
+
+### Manage the TiKV server
+
+#### What is the recommended number of replicas in the TiKV cluster? Is it better to keep the minimum number for high availability?
+
+Use 3 replicas for testing. If you increase the number of replicas, the performance declines but the data is more secure. Whether to configure more replicas depends on the specific business needs.
+
+#### The `cluster ID mismatch` message is displayed when starting TiKV.
+
+This is because the cluster ID stored in local TiKV is different from the cluster ID specified by PD. When a new PD cluster is deployed, PD generates random cluster IDs. TiKV gets the cluster ID from PD and stores the cluster ID locally when it is initialized. The next time TiKV is started, it checks the local cluster ID against the cluster ID in PD. If the cluster IDs don't match, the `cluster ID mismatch` message is displayed and TiKV exits.
+ +If you previously deploy a PD cluster, but then you remove the PD data and deploy a new PD cluster, this error occurs because TiKV uses the old data to connect to the new PD cluster. + +#### The `duplicated store address` message is displayed when starting TiKV. + +This is because the address in the startup parameter has been registered in the PD cluster by other TiKVs. This error occurs when there is no data folder under the directory that TiKV `--store` specifies, but you use the previous parameter to restart the TiKV. + +To solve this problem, use the [store delete](https://github.com/pingcap/pd/tree/master/pdctl#store-delete-) function to delete the previous store and then restart TiKV. + +#### TiKV master and slave use the same compression algorithm, why the results are different? + +Currently, some files of TiKV master have a higher compression rate, which depends on the underlying data distribution and RocksDB implementation. It is normal that the data size fluctuates occasionally. The underlying storage engine adjusts data as needed. + +#### What are the features of TiKV block cache? + +TiKV implements the Column Family (CF) feature of RocksDB. By default, the KV data is eventually stored in the 3 CFs (default, write and lock) within RocksDB. + +- The default CF stores real data and the corresponding parameter is in [rocksdb.defaultcf]. The write CF stores the data version information (MVCC) and index-related data, and the corresponding parameter is in `[rocksdb.writecf]`. The lock CF stores the lock information and the system uses the default parameter. +- The Raft RocksDB instance stores Raft logs. The default CF mainly stores Raft logs and the corresponding parameter is in `[raftdb.defaultcf]`. +- Each CF has an individual block-cache to cache data blocks and improve RocksDB read speed. The size of block-cache is controlled by the `block-cache-size` parameter. 
A larger value of the parameter means more hot data can be cached and is more favorable to read operations. At the same time, it consumes more system memory.
+- Each CF has an individual write-buffer and the size is controlled by the `write-buffer-size` parameter.
+
+#### What causes the "TiKV channel full" error?
+
+- The Raftstore thread is too slow. You can view the CPU usage status of Raftstore.
+- TiKV is too busy (read, write, disk I/O, etc.) and cannot handle requests in time.
+
+#### Why does TiKV frequently switch Region leader?
+
+- A network problem causes communication failures between nodes. You can view the monitoring information of Report failures.
+- The original leader node fails and cannot send information to the follower in time.
+- The Raftstore thread fails.
+
+#### If the leader node is down, will the service be affected? How long?
+
+TiDB uses Raft to synchronize data among multiple replicas and guarantees the strong consistency of data. If one replica goes wrong, the other replicas can guarantee data security. The default number of replicas in each Region is 3. Based on the Raft protocol, a leader is elected in each Region, and if a single Region leader fails, a new Region leader is soon elected after a maximum of 2 * lease time (lease time is 10 seconds).
+
+#### What are the TiKV scenarios that take up high I/O, memory, CPU, and exceed the parameter configuration?
+
+Writing or reading a large volume of data in TiKV takes up high I/O, memory and CPU. Executing very complex queries costs a lot of memory and CPU resources, such as the scenario that generates large intermediate result sets.
+
+#### Does TiKV support SAS/SATA disks or mixed deployment of SSD/SAS disks?
+
+No. For OLTP scenarios, TiDB requires high I/O disks for data access and operation. As a distributed database with strong consistency, TiDB has some write amplification such as replica replication and bottom layer storage compaction.
Therefore, it is recommended to use NVMe SSD as the storage disks in TiDB best practices. Besides, the mixed deployment of TiKV and PD is not supported. + +#### Is the Range of the Key data table divided before data access? + +No. It differs from the table splitting rules of MySQL. In TiKV, the table Range is dynamically split based on the size of Region. + +#### How does Region split? + +Region is not divided in advance, but it follows a Region split mechanism. When the Region size exceeds the value of the `region_split_size` parameter, split is triggered. After the split, the information is reported to PD. + +#### Does TiKV have the `innodb_flush_log_trx_commit` parameter like MySQL, to guarantee the security of data? + +Yes. Currently, the standalone storage engine uses two RocksDB instances. One instance is used to store the raft-log. When the `sync-log` parameter in TiKV is set to true, each commit is mandatorily flushed to the raft-log. If a crash occurs, you can restore the KV data using the raft-log. + +#### What is the recommended server configuration for WAL storage, such as SSD, RAID level, cache strategy of RAID card, NUMA configuration, file system, I/O scheduling strategy of the operating system? + +WAL belongs to ordered writing, and currently, we do not apply a unique configuration to it. Recommended configuration is as follows: + +- SSD +- RAID 10 preferred +- Cache strategy of RAID card and I/O scheduling strategy of the operating system: currently no specific best practices; you can use the default configuration in Linux 7 or later +- NUMA: no specific suggestion; for memory allocation strategy, you can use `interleave = all` +- File system: ext4 + +#### How is the write performance in the most strict data available mode of `sync-log = true`? + +Generally, enabling `sync-log` reduces about 30% of the performance. For the test about `sync-log = false`, see [Performance test result for TiDB using Sysbench](benchmark/sysbench.md). 
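To get an intuition for why `sync-log = true` costs throughput, the toy Python snippet below appends log entries with and without an `fsync` per write. This is a rough illustration on whatever disk runs it, not a TiKV benchmark; absolute numbers vary widely by hardware.

```python
import os
import tempfile
import time

def append_entries(n, sync):
    """Append n small entries to a fresh file; fsync each one if sync is True."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for i in range(n):
            os.write(fd, b"log entry %d\n" % i)
            if sync:
                os.fsync(fd)  # force the entry to stable storage before continuing
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

buffered = append_entries(500, sync=False)
synced = append_entries(500, sync=True)
print("buffered: %.4fs, synced: %.4fs" % (buffered, synced))
```

On typical hardware the synced run is markedly slower, which mirrors the cost of flushing the raft-log to disk on every commit.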
+ +#### Can the Raft + multiple replicas in the upper layer implement complete data security? Is it required to apply the strictest mode to standalone storage? + +Raft guarantees strong consistency: the application receives an ACK only after the data has been written to the majority of the nodes (two out of three). In this case, data consistency is guaranteed. However, theoretically, two nodes might crash. Therefore, for scenarios that have a strict requirement on data security, such as scenarios in the financial industry, you need to enable `sync-log`. + +#### In data writing using the Raft protocol, multiple network roundtrips occur. What is the actual write delay? + +Theoretically, TiDB has 4 more network roundtrips than standalone databases. + +#### Does TiDB have an InnoDB memcached plugin like MySQL, which can directly use the KV interface without an independent cache? + +TiKV supports calling the interface separately. Theoretically, you can take an instance as the cache. Because TiDB is a distributed relational database, we do not support using TiKV separately. + +#### What is the Coprocessor component used for? + +- Reduce the data transmission between TiDB and TiKV +- Make full use of the distributed computing resources of TiKV to execute computing pushdown + +### TiDB test + +#### What is the performance test result for TiDB using Sysbench? + +At the beginning, many users tend to do a benchmark test or a comparison test between TiDB and MySQL. We have also done a similar official test and find that the test results are largely consistent, although the test data has some bias. Because the architecture of TiDB differs greatly from MySQL, it is hard to find a benchmark point. The suggestions are as follows: + +- Do not spend too much time on the benchmark test. Pay more attention to the difference of scenarios using TiDB. +- See the official test.
For the Sysbench test and the comparison test between TiDB and MySQL, see [Performance test result for TiDB using Sysbench](benchmark/sysbench.md). + +#### What's the relationship between the TiDB cluster capacity (QPS) and the number of nodes? How does TiDB compare to MySQL? + +- Within 10 nodes, the TiDB write capacity (Insert TPS) increases roughly linearly with the number of nodes, at a rate of about 40%. Because MySQL uses single-node write, its write capacity cannot be scaled. +- In MySQL, the read capacity can be increased by adding slaves, but the write capacity cannot be increased except by using sharding, which has many problems. +- In TiDB, both the read and write capacity can be easily increased by adding more nodes. + +#### The performance test of MySQL and TiDB by our DBA shows that the performance of a standalone TiDB is not as good as that of MySQL. + +TiDB is designed for scenarios where sharding is used because the capacity of a standalone MySQL is limited, and where strong consistency and complete distributed transactions are required. One of the advantages of TiDB is pushing down computing to the storage nodes to execute concurrent computing. + +TiDB is not suitable for tables of small size (such as below the ten million level), because its strength in concurrency cannot be shown with a small data size and limited Regions. A typical example is the counter table, in which a few rows are updated very frequently. In TiDB, these rows become several Key-Value pairs in the storage engine, and then settle into a Region located on a single node. The overhead of background replication to guarantee strong consistency and the operations from TiDB to TiKV lead to a poorer performance than a standalone MySQL. + +### Backup and restore + +#### How to back up data in TiDB? + +Currently, the major way of backing up data in TiDB is using `mydumper`. For details, see the [mydumper repository](https://github.com/maxbube/mydumper).
Although the official MySQL tool `mysqldump` is also supported in TiDB to back up and restore data, its performance is poorer than that of `mydumper`/`loader`, and it needs much more time to back up and restore large volumes of data. Therefore, it is not recommended to use `mysqldump`. + +Keep the size of the data files exported from `mydumper` as small as possible. It is recommended to keep the size within 64M. You can set the value of the `-F` parameter to 64. + +You can adjust the `-t` parameter of `loader` based on the number of TiKV instances and the load status. For example, in a scenario of three TiKV instances, you can set its value to `3 * (1 ~ n)`. When the TiKV load is very high and `backoffer.maxSleep 15000ms is exceeded` is displayed a lot in the `loader` and TiDB logs, you can adjust the parameter to a smaller value. When the TiKV load is not very high, you can adjust the parameter to a larger value accordingly. + +## Migrate the data and traffic + +### Full data export and import + +#### Mydumper + +See the [mydumper repository](https://github.com/maxbube/mydumper). + +#### Loader + +See [Loader Instructions](tools/loader.md). + +#### How to migrate an application running on MySQL to TiDB? + +Because TiDB supports most MySQL syntax, you can migrate your applications to TiDB without changing a single line of code in most cases. You can use [checker](https://github.com/pingcap/tidb-tools/tree/master/checker) to check whether the schema in MySQL is compatible with TiDB. + +#### If I accidentally import the MySQL user table into TiDB, or forget the password and cannot log in, how to deal with it? + +Restart the TiDB service, and add the `-skip-grant-table=true` parameter in the configuration file.
Log in to the cluster without a password and recreate the user, or recreate the `mysql.user` table using the following statement: + +```sql +DROP TABLE IF EXISTS mysql.user; + +CREATE TABLE IF NOT EXISTS mysql.user ( + Host CHAR(64), + User CHAR(16), + Password CHAR(41), + Select_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Insert_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Update_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Delete_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Drop_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Process_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Grant_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + References_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Alter_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Show_db_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Super_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_tmp_table_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Lock_tables_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Execute_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Show_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Alter_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Index_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_user_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Event_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Trigger_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + PRIMARY KEY (Host, User)); + +INSERT INTO mysql.user VALUES ("%", "root", "", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y"); +``` + +#### How to export the data in TiDB? + +Currently, TiDB does not support `select into outfile`.
You can use the following methods to export the data in TiDB: + +- See [MySQL uses mysqldump to export part of the table data](http://blog.csdn.net/xin_yu_xin/article/details/7574662) in Chinese and export data using mysqldump and the `WHERE` condition. +- Use the MySQL client to export the results of `select` to a file. + +#### How to migrate from DB2 or Oracle to TiDB? + +To migrate all the data or to migrate incrementally from DB2 or Oracle to TiDB, see the following solutions: + +- Use the official migration tools of Oracle, such as OGG, Gateway, and CDC (Change Data Capture). +- Develop a program for importing and exporting data. +- Export the data to a text file using Spool, and import it using `LOAD DATA INFILE`. +- Use a third-party data migration tool. + +Currently, it is recommended to use OGG. + +### Migrate the data incrementally + +#### Syncer + +##### Syncer user guide + +See [Syncer User Guide](docs/tools/syncer.md). + +##### How to configure the monitoring of Syncer status? + +Download and import [Syncer Json](https://github.com/pingcap/docs/blob/master/etc/Syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content: + +``` +- job_name: "syncer_ops"  # task name +  static_configs: +  - targets: ["10.10.1.1:10096"]  # Syncer monitoring address and port, informing Prometheus to pull the data of Syncer +``` + +Then restart Prometheus. + +##### Is there a current solution for synchronizing data from TiDB to other databases like HBase and Elasticsearch? + +No. Currently, the data synchronization depends on the application itself. + +#### Wormhole + +Wormhole is a data synchronization service, which enables the user to easily synchronize all the data or synchronize it incrementally using a Web console. It supports multiple types of data migration, such as from MySQL to TiDB, and from MongoDB to TiDB. + +### Migrate the traffic + +#### How to migrate the traffic quickly?
+ + It is recommended to build a multi-source MySQL, MongoDB -> TiDB real-time synchronization environment using Syncer or Wormhole. You can migrate the read and write traffic in batches by editing the network configuration as needed. Deploy a stable network LB (HAProxy, LVS, F5, DNS, etc.) on the upper layer, in order to implement seamless migration by directly editing the network configuration. + +#### Is there a limit for the total write and read capacity in TiDB? + +The total read capacity has no limit. You can increase the read capacity by adding more TiDB servers. Generally, the write capacity has no limit as well. You can increase the write capacity by adding more TiKV nodes. + +#### The error message `transaction too large` is displayed. + +As distributed transactions need to conduct a two-phase commit and the bottom layer performs Raft replication, if a transaction is very large, the commit process would be quite slow, and the subsequent Raft replication flow is thus stuck. To avoid this problem, we limit the transaction size: + +- Each Key-Value entry is no more than 6MB +- The total number of Key-Value entries is no more than 300,000 rows +- The total size of Key-Value entries is no more than 100MB + +There are [similar limits](https://cloud.google.com/spanner/docs/limits) on Google Cloud Spanner. + +#### How to import data in batches? + +1. When you import data, insert in batches and keep the number of rows within 10,000 for each batch. + +2. As for `insert` and `select`, you can enable the hidden parameter with `set @@session.tidb_batch_insert=1;`, and `insert` will execute large transactions in batches. In this way, you can avoid the timeout caused by large transactions, but this may lead to the loss of atomicity: an error in the process of execution leads to a partly inserted transaction. Therefore, use this parameter only when necessary, and use it in a session to avoid affecting other statements.
When the transaction is finished, use `set @@session.tidb_batch_insert=0` to disable it. + +3. As for `delete` and `update`, you can use `limit` in a loop to operate. + +#### Does TiDB release space immediately after deleting data? + +`DELETE`, `TRUNCATE` and `DROP` do not release space immediately. For `TRUNCATE` and `DROP` operations, TiDB deletes the data and releases the space after reaching the GC (garbage collection) time (10 minutes by default). For the `DELETE` operation, TiDB deletes the data based on the GC mechanism but does not release the space; instead, the space is reused when subsequent data is committed to RocksDB and compacted. + +#### Can I execute DDL operations on the target table when loading data? + +No. None of the DDL operations can be executed on the target table when you load data; otherwise, the data fails to be loaded. + +#### Does TiDB support the `replace into` syntax? + +Yes. But `load data` does not support the `replace into` syntax. + +#### How long does it take to reclaim disk space after deleting data? + +None of the `Delete`, `Truncate` and `Drop` operations releases the space immediately. For the `Truncate` and `Drop` operations, after the TiDB GC (Garbage Collection) time (10 minutes by default), the data is deleted and the space is released. For the `Delete` operation, the data is deleted but the space is not released according to TiDB GC; the space is reused when subsequent data is written into RocksDB and compacted. + +#### Why does the query speed get slower after deleting data? + +Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently, the Region Merge feature is in development, which is expected to solve this problem. For details, see the [deleting data section in TiDB Best Practices](https://pingcap.com/blog/2017-07-24-tidbbestpractice/#write). + +#### What is the most efficient way of deleting data?
+ + When deleting a large amount of data, it is recommended to use `DELETE FROM t WHERE xx LIMIT 5000;`. It deletes in a loop and uses `Affected Rows == 0` as the condition to end the loop, so as not to exceed the limit of the transaction size. With the prerequisite of meeting the business filtering logic, it is recommended to add a strong filter index column or directly use the primary key to select the range, such as `id >= 5000*n+m and id < 5000*(n+1)+m`. + +If the amount of data that needs to be deleted at a time is very large, this loop method gets slower and slower, because each deletion has to scan over the previously deleted data before reaching the target rows. After deleting the previous data, lots of deleted flags remain for a short period (then all of them will be processed by Garbage Collection) and influence the following `DELETE` statements. If possible, it is recommended to refine the `WHERE` condition. See [details in TiDB Best Practices](https://pingcap.com/blog/2017-07-24-tidbbestpractice/#write). + +#### How to improve the data loading speed in TiDB? + +- Currently, Lightning is in development for distributed data import. Note that, for performance reasons, the data import process does not perform a complete transaction process, so the ACID constraint of the data being imported cannot be guaranteed during the import process. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or full backup and restore (truncate the original table and then import the data). +- Data loading in TiDB is related to the status of the disks and of the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU, and so on. You can analyze the bottlenecks using these metrics.
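
The bounded-range deletion strategy described above can be sketched as follows; the table `t`, its integer primary key `id`, and the concrete values of `n` and `m` are hypothetical stand-ins chosen by the application:

```sql
-- Hypothetical table t with an integer primary key id.
-- Delete one 5000-row primary-key range per statement; the application
-- repeats this with an increasing n until Affected Rows == 0.
DELETE FROM t
WHERE id >= 5000 * 0 + 0   -- 5000*n + m with n = 0, m = 0
  AND id <  5000 * 1 + 0   -- 5000*(n+1) + m
LIMIT 5000;
```

Bounding each statement by the primary key keeps every transaction small and avoids rescanning the tombstones left by earlier deletions.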
+ + ## SQL optimization + +### TiDB execution plan description + +See [Understand the Query Execution Plan](sql/understanding-the-query-execution-plan.md). + +### Statistics collection + +See [Introduction to Statistics](sql/statistics.md). + +#### How to optimize `select count(1)`? + +The `count(1)` statement counts the total number of rows in a table. Improving the degree of concurrency can significantly improve the speed. To modify the concurrency, refer to the [document](sql/tidb-specific.md#tidb_distsql_scan_concurrency). But it also depends on the CPU and I/O resources. TiDB accesses TiKV in every query. When the amount of data is small, all the data in MySQL is in memory, while TiDB still needs to conduct a network access. + +Recommendations: + +1. Improve the hardware configuration. See [Software and Hardware Requirements](op-guide/recommendation.md). +2. Improve the concurrency. The default value is 10. You can try improving it to 50, but usually the improvement is seen at 2-4 times the default value. +3. Test `count` in the case of a large amount of data. +4. Optimize the TiKV configuration. See [Performance Tuning for TiKV](op-guide/tune-TiKV.md). + +#### How to view the progress of adding an index? + +Use `admin show ddl` to view the current job of adding an index. + +#### How to view the DDL job? + +- `admin show ddl`: to view the running DDL job +- `admin show ddl jobs`: to view all the results in the current DDL job queue (including tasks that are running and waiting to run) and the last ten results in the completed DDL job queue + +#### Does TiDB support CBO (Cost-Based Optimization)? If yes, to what extent? + +Yes. TiDB uses a cost-based optimizer. The cost model and statistics are constantly optimized. Besides, TiDB also supports join algorithms like hash join and sort merge join. + +## Database optimization + +### TiDB + +#### Edit TiDB options + +See [The TiDB Command Options](sql/server-command-option.md).
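
As an example of the session-level concurrency tuning mentioned in the `select count(1)` section above — the table name `t` is hypothetical, and the chosen value of 50 is only one point in the suggested 2-4x-of-default range:

```sql
-- Raise the scan concurrency for this session only; the default is 10.
SET @@session.tidb_distsql_scan_concurrency = 50;

-- The full-table count can now scan more Regions concurrently.
SELECT COUNT(1) FROM t;
```

Because the variable is set at session scope, other connections keep the default and are not affected by the more aggressive scan setting.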
+ +### TiKV + +#### Tune TiKV performance + +See [Tune TiKV Performance](op-guide/tune-tikv.md). + +## Monitor + +### Prometheus monitoring framework + +See [Overview of the Monitoring Framework](op-guide/monitor-overview.md). + +### Key metrics of monitoring + +See [Key Metrics](op-guide/dashboard-overview-info.md). + +#### Is there a better way of monitoring the key metrics? + +The monitoring system of TiDB consists of Prometheus and Grafana. From the dashboard in Grafana, you can monitor various running metrics of TiDB which include the monitoring metrics of system resources, of client connection and SQL operation, of internal communication and Region scheduling. With these metrics, the database administrator can better understand the system running status, running bottlenecks and so on. In the practice of monitoring these metrics, we list the key metrics of each TiDB component. Generally you only need to pay attention to these common metrics. For details, see [Key Metrics](op-guide/dashboard-overview-info.md). + +#### The Prometheus monitoring data is deleted each month by default. Could I set it to two months or delete the monitoring data manually? + +Yes. Find the startup script on the machine where Prometheus is started, edit the startup parameter and restart Prometheus. + +## Troubleshoot + +### TiDB custom error messages + +#### ERROR 9001 (HY000): PD Server Timeout + +A PD request timeout. Check the status, monitoring data and log of the PD server, and the network between the TiDB server and the PD server. + +#### ERROR 9002 (HY000): TiKV Server Timeout + +A TiKV request timeout. Check the status, monitoring data and log of the TiKV server, and the network between the TiDB server and the TiKV server. + +#### ERROR 9003 (HY000): TiKV Server is Busy + +The TiKV server is busy. This usually occurs when the database load is very high. Check the status, monitoring data and log of the TiKV server. 
+ + #### ERROR 9004 (HY000): Resolve Lock Timeout + +A lock resolving timeout. This usually occurs when a large number of transaction conflicts exist. Check the application code to see whether lock contention exists in the database. + +#### ERROR 9005 (HY000): Region is unavailable + +The accessed Region is not available. A Raft Group is not available, with possible reasons like an inadequate number of replicas. This usually occurs when the TiKV server is busy or the TiKV node is shut down. Check the status, monitoring data and log of the TiKV server. + +#### ERROR 9006 (HY000): GC Too Early + +The interval of `GC Life Time` is too short. The data that should have been read by long transactions might be deleted. You can increase the `GC Life Time`. + +### MySQL native error messages + +#### ERROR 2013 (HY000): Lost connection to MySQL server during query + +- Check whether panic is in the log. +- Check whether OOM exists in dmesg using `dmesg -T | grep -i oom`. +- A long period with no access might also lead to this error. It is usually caused by a TCP timeout: if a TCP connection is not used for a long time, the operating system kills it. + +#### ERROR 1105 (HY000): other error: unknown error Wire Error(InvalidEnumValue(4004)) + +This error usually occurs when the version of TiDB does not match the version of TiKV. To avoid version mismatch, upgrade all components when you upgrade the version. diff --git a/v1.0/LICENSE b/v1.0/LICENSE new file mode 100755 index 0000000000000..8dada3edaf50d --- /dev/null +++ b/v1.0/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License.
+ + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of 
the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/v1.0/QUICKSTART.md b/v1.0/QUICKSTART.md new file mode 100755 index 0000000000000..2e387be92f128 --- /dev/null +++ b/v1.0/QUICKSTART.md @@ -0,0 +1,724 @@ +--- +title: TiDB Quick Start Guide +category: quick start +--- + +# TiDB Quick Start Guide + +## About TiDB + +TiDB (pronounced /ˈtaɪdiːbiː/, "tai-D-B"; etymology: titanium) is a Hybrid Transactional/Analytical Processing (HTAP) database. Inspired by the design of Google F1 and Google Spanner, TiDB features infinite horizontal scalability, strong consistency, and high availability. The goal of TiDB is to serve as a one-stop solution for online transactions and analyses.
+ + ## About this guide + +This guide outlines how to perform a quick deployment of a TiDB cluster using TiDB-Ansible and walks you through the basic TiDB operations and administration. + +## Deploy the TiDB cluster + +This section describes how to deploy a TiDB cluster. A TiDB cluster consists of different components: TiDB servers, TiKV servers, and Placement Driver (PD) servers. + +The architecture is as follows: + +![TiDB Architecture](media/tidb-architecture.png) + +For details of deploying a TiDB cluster, see [Ansible Deployment](op-guide/ansible-deployment.md). + +## Try TiDB + +This section describes some basic CRUD operations in TiDB. + +### Create, show, and drop a database + +You can use the `CREATE DATABASE` statement to create a database. + +The syntax is as follows: + +```sql +CREATE DATABASE db_name [options]; +``` + +For example, the following statement creates a database with the name `samp_db`: + +```sql +CREATE DATABASE IF NOT EXISTS samp_db; +``` + +You can use the `SHOW DATABASES` statement to show the databases: + +```sql +SHOW DATABASES; +``` + +You can use the `DROP DATABASE` statement to delete a database, for example: + +```sql +DROP DATABASE samp_db; +``` + +### Create, show, and drop a table + +Use the `CREATE TABLE` statement to create a table. The syntax is as follows: + +```sql +CREATE TABLE table_name (column_name data_type constraint); +``` + +For example: + +```sql +CREATE TABLE person ( + number INT(11), + name VARCHAR(255), + birthday DATE +); +``` + +Add `IF NOT EXISTS` to prevent an error if the table exists: + +```sql +CREATE TABLE IF NOT EXISTS person ( + number INT(11), + name VARCHAR(255), + birthday DATE +); +``` + +Use the `SHOW CREATE` statement to see the statement that creates the table. For example: + +```sql +SHOW CREATE TABLE person; +``` + +Use the `SHOW FULL COLUMNS` statement to display the information about all the columns in a table.
For example: + +```sql +SHOW FULL COLUMNS FROM person; +``` + +Use the `DROP TABLE` statement to delete a table. For example: + +```sql +DROP TABLE person; +``` + +or + +```sql +DROP TABLE IF EXISTS person; +``` + +Use the `SHOW TABLES` statement to show all the tables in a database. For example: + +```sql +SHOW TABLES FROM samp_db; +``` + +### Create, show, and drop an index + +To create an index for a column whose values are not unique, use the `CREATE INDEX` or `ALTER TABLE` statement. For example: + +```sql +CREATE INDEX person_num ON person (number); +``` + +or + +```sql +ALTER TABLE person ADD INDEX person_num (number); +``` + +You can also create unique indexes for columns whose values are unique. For example: + +```sql +CREATE UNIQUE INDEX person_num ON person (number); +``` + +or + +```sql +ALTER TABLE person ADD UNIQUE person_num (number); +``` + +Use the `SHOW INDEX` statement to display all the indexes in a table: + +```sql +SHOW INDEX FROM person; +``` + +Use the `ALTER TABLE` or `DROP INDEX` statement to delete an index. Like the `CREATE INDEX` statement, `DROP INDEX` can also be embedded in the `ALTER TABLE` statement. For example: + +```sql +DROP INDEX person_num ON person; +ALTER TABLE person DROP INDEX person_num; +``` + +### Insert, select, update, and delete data + +Use the `INSERT` statement to insert data into a table. For example: + +```sql +INSERT INTO person VALUES('1','tom','20170912'); +``` + +Use the `SELECT` statement to see the data in a table. For example: + +```sql +SELECT * FROM person; ++--------+------+------------+ +| number | name | birthday | ++--------+------+------------+ +| 1 | tom | 2017-09-12 | ++--------+------+------------+ +``` + +Use the `UPDATE` statement to update the data in a table.
For example: + +```sql +UPDATE person SET birthday='20171010' WHERE name='tom'; + +SELECT * FROM person; ++--------+------+------------+ +| number | name | birthday | ++--------+------+------------+ +| 1 | tom | 2017-10-10 | ++--------+------+------------+ +``` + +Use the `DELETE` statement to delete the data in a table. For example: + +```sql +DELETE FROM person WHERE number=1; +SELECT * FROM person; +Empty set (0.00 sec) +``` + +### Create, authorize, and delete a user + +Use the `CREATE USER` statement to create a user named `tiuser` with the password `123456`: + +```sql +CREATE USER 'tiuser'@'localhost' IDENTIFIED BY '123456'; +``` + +Grant `tiuser` the privilege to retrieve the tables in the `samp_db` database: + +```sql +GRANT SELECT ON samp_db.* TO 'tiuser'@'localhost'; +``` + +Check the privileges of `tiuser`: + +```sql +SHOW GRANTS FOR tiuser@localhost; +``` + +Delete `tiuser`: + +```sql +DROP USER 'tiuser'@'localhost'; +``` + +## Monitor the TiDB cluster + +Open a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +The default account and password are: `admin`/`admin`. + +### About the key metrics + +Service | Panel Name | Description | Normal Range +---- | ---------------- | ---------------------------------- | -------------- +PD | Storage Capacity | the total storage capacity of the TiDB cluster | +PD | Current Storage Size | the occupied storage capacity of the TiDB cluster | +PD | Store Status -- up store | the number of TiKV nodes that are up | +PD | Store Status -- down store | the number of TiKV nodes that are down | `0`. If the number is bigger than `0`, it means some node(s) are down. +PD | Store Status -- offline store | the number of TiKV nodes that are manually offline | +PD | Store Status -- Tombstone store | the number of TiKV nodes that are Tombstone | +PD | Current storage usage | the storage occupancy rate of the TiKV cluster | If it exceeds 80%, you need to consider adding more TiKV nodes.
+PD | 99% completed cmds duration seconds | the 99th percentile duration to complete a pd-server request | less than 5ms +PD | average completed cmds duration seconds | the average duration to complete a pd-server request | less than 50ms +PD | leader balance ratio | the leader ratio difference of the nodes with the biggest leader ratio and the smallest leader ratio | It is less than 5% for a balanced situation. It becomes bigger when a node is restarting. +PD | region balance ratio | the region ratio difference of the nodes with the biggest region ratio and the smallest region ratio | It is less than 5% for a balanced situation. It becomes bigger when adding or removing a node. +TiDB | handle requests duration seconds | the response time to get TSO from PD | less than 100ms +TiDB | tidb server QPS | the QPS of the cluster | application specific +TiDB | connection count | the number of connections from application servers to the database | Application specific. If the number of connections hops, you need to find out the reasons. If it drops to 0, you can check if the network is broken; if it surges, you need to check the application. +TiDB | statement count | the number of different types of statements within a given time | application specific +TiDB | Query Duration 99th percentile | the 99th percentile query time | +TiKV | 99% & 99.99% scheduler command duration | the 99th percentile and 99.99th percentile scheduler command duration | For 99%, it is less than 50ms; for 99.99%, it is less than 100ms. +TiKV | 95% & 99.99% storage async_request duration | the 95th percentile and 99.99th percentile Raft command duration | For 95%, it is less than 50ms; for 99.99%, it is less than 100ms. +TiKV | server report failure message | There might be an issue with the network or the message might not come from this cluster. | If there is a large amount of messages containing `unreachable`, there might be an issue with the network.
If the message contains `store not match`, the message does not come from this cluster. +TiKV | Vote | the frequency of the Raft vote | Usually, the value only changes when there is a split. If the value of Vote remains high for a long time, the system might have a severe issue and some nodes are not working. +TiKV | 95% and 99% coprocessor request duration | the 95th percentile and the 99th percentile coprocessor request duration | Application specific. Usually, the value does not remain high. +TiKV | Pending task | the number of pending tasks | Except for PD worker, it is not normal if the value is too high. +TiKV | stall | RocksDB stall time | If the value is bigger than 0, it means that RocksDB is too busy, and you need to pay attention to IO and CPU usage. +TiKV | channel full | The channel is full and the threads are too busy. | If the value is bigger than 0, the threads are too busy. +TiKV | 95% send message duration seconds | the 95th percentile message sending time | less than 50ms +TiKV | leader/region | the number of leaders/regions per TiKV server | application specific + +## Scale the TiDB cluster + +The capacity of a TiDB cluster can be increased or decreased without affecting the online services. + +> **Warning:** When decreasing the capacity, if your cluster has a mixed deployment of other services, do not perform the following procedures. The following examples assume that the removed nodes have no mixed deployment of other services.
+ +Assume that the topology is as follows: + +| Name | Host IP | Services | +| ---- | ------- | -------- | +| node1 | 172.16.10.1 | PD1 | +| node2 | 172.16.10.2 | PD2 | +| node3 | 172.16.10.3 | PD3, Monitor | +| node4 | 172.16.10.4 | TiDB1 | +| node5 | 172.16.10.5 | TiDB2 | +| node6 | 172.16.10.6 | TiKV1 | +| node7 | 172.16.10.7 | TiKV2 | +| node8 | 172.16.10.8 | TiKV3 | +| node9 | 172.16.10.9 | TiKV4 | + +### Increase the capacity of a TiDB/TiKV node + +For example, if you want to add two TiDB nodes (node101, node102) with the IP address `172.16.10.101` and `172.16.10.102`, you can use the following procedures: + +1. Edit the `inventory.ini` file and append the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.101 + 172.16.10.102 + + [pd_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitored_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.4 + 172.16.10.5 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + 172.16.10.101 + 172.16.10.102 + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | node2 | 172.16.10.2 | PD2 | + | node3 | 172.16.10.3 | PD3, Monitor | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | **node101** | **172.16.10.101**|**TiDB3** | + | **node102** | **172.16.10.102**|**TiDB4** | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | node9 | 172.16.10.9 | TiKV4 | + +2. Initialize the newly added node: + + ``` + ansible-playbook bootstrap.yml -l 172.16.10.101,172.16.10.102 + ``` + +3. Deploy the newly added node: + + ``` + ansible-playbook deploy.yml -l 172.16.10.101,172.16.10.102 + ``` + +4. 
Start the newly added node: + + ``` + ansible-playbook start.yml -l 172.16.10.101,172.16.10.102 + ``` + +5. Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +6. Monitor the status of the entire cluster and the newly added node by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +You can use the same procedure to add a TiKV node. But to add a PD node, some configuration files need to be manually updated. + +### Increase the capacity of a PD node + +For example, if you want to add a PD node (node103) with the IP address `172.16.10.103`, you can use the following procedures: + +1. Edit the `inventory.ini` file and append the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + + [pd_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.103 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitored_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.103 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | node2 | 172.16.10.2 | PD2 | + | node3 | 172.16.10.3 | PD3, Monitor | + | **node103** | **172.16.10.103** | **PD4** | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | node9 | 172.16.10.9 | TiKV4 | + +2. Initialize the newly added node: + + ``` + ansible-playbook bootstrap.yml -l 172.16.10.103 + ``` + +3. Deploy the newly added node: + + ``` + ansible-playbook deploy.yml -l 172.16.10.103 + ``` + +4. 
Log in to the newly added PD node and edit the starting script: + + ``` + {deploy_dir}/scripts/run_pd.sh + ``` + + 1. Remove the `--initial-cluster="xxxx" \` configuration. + 2. Add `--join="http://172.16.10.1:2379" \`. The IP address (`172.16.10.1`) can be any of the existing PD IP addresses in the cluster. + 3. Manually start the PD service on the newly added PD node: + + ``` + {deploy_dir}/scripts/start_pd.sh + ``` + + 4. Use `pd-ctl` to check whether the new node is added successfully: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" + ``` + + > **Note:** `pd-ctl` is a command-line tool for managing the PD cluster; here it is used to check the number of PD nodes. + +5. Perform a rolling update of the entire cluster: + + ``` + ansible-playbook rolling_update.yml + ``` + +6. Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +7. Monitor the status of the entire cluster and the newly added node by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +### Decrease the capacity of a TiDB node + +For example, if you want to remove a TiDB node (node5) with the IP address `172.16.10.5`, you can use the following procedures: + +1. Stop all services on node5: + + ``` + ansible-playbook stop.yml -l 172.16.10.5 + ``` + +2.
Edit the `inventory.ini` file and remove the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + #172.16.10.5 # the removed node + + [pd_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitored_servers] + 172.16.10.4 + #172.16.10.5 # the removed node + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | node2 | 172.16.10.2 | PD2 | + | node3 | 172.16.10.3 | PD3, Monitor | + | node4 | 172.16.10.4 | TiDB1 | + | **node5** | **172.16.10.5** | **TiDB2 removed** | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | node9 | 172.16.10.9 | TiKV4 | + +3. Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +4. Monitor the status of the entire cluster by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +### Decrease the capacity of a TiKV node + +For example, if you want to remove a TiKV node (node9) with the IP address `172.16.10.9`, you can use the following procedures: + +1. Remove the node from the cluster using `pd-ctl`: + + 1. View the store id of node9: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d store + ``` + + 2. Remove node9 from the cluster, assuming that the store id is 10: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d store delete 10 + ``` + +2. Use Grafana or `pd-ctl` to check whether the node is successfully removed: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d store 10 + ``` + + > **Note:** It takes some time to remove the node. If node9 does not show in the result, the node is successfully removed. + +3. 
After the node is successfully removed, stop the services on node9: + + ``` + ansible-playbook stop.yml -l 172.16.10.9 + ``` + +4. Edit the `inventory.ini` file and remove the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + + [pd_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + #172.16.10.9 # the removed node + + [monitored_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + #172.16.10.9 # the removed node + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | node2 | 172.16.10.2 | PD2 | + | node3 | 172.16.10.3 | PD3, Monitor | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | **node9** | **172.16.10.9** | **TiKV4 removed** | + +5. Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +6. Monitor the status of the entire cluster by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +### Decrease the capacity of a PD node + +For example, if you want to remove a PD node (node2) with the IP address `172.16.10.2`, you can use the following procedures: + +1. Remove the node from the cluster using `pd-ctl`: + + 1. View the name of node2: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d member + ``` + + 2. Remove node2 from the cluster, assuming that the name is pd2: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d member delete name pd2 + ``` + +2. Use Grafana or `pd-ctl` to check whether the node is successfully removed: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d member + ``` + +3. 
After the node is successfully removed, stop the services on node2: + + ``` + ansible-playbook stop.yml -l 172.16.10.2 + ``` + +4. Edit the `inventory.ini` file and remove the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + + [pd_servers] + 172.16.10.1 + #172.16.10.2 # the removed node + 172.16.10.3 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitored_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.1 + #172.16.10.2 # the removed node + 172.16.10.3 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | **node2** | **172.16.10.2** | **PD2 removed** | + | node3 | 172.16.10.3 | PD3, Monitor | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | node9 | 172.16.10.9 | TiKV4 | + +5. Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +6. Monitor the status of the entire cluster by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`. 
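The `pd-ctl` checks in the scale-in procedures above can also be scripted instead of inspected by eye. The following is a minimal sketch that parses the member list and confirms a PD node is gone; it assumes `pd-ctl -d member` prints a JSON object with a `members` array, and the sample output below is illustrative, not captured from a real cluster:

```python
import json

def member_removed(pd_ctl_member_output, name):
    """Return True if no PD member with the given name remains in the output."""
    members = json.loads(pd_ctl_member_output).get("members", [])
    return all(m.get("name") != name for m in members)

# Illustrative sample of what `./pd-ctl -u "http://172.16.10.1:2379" -d member`
# might print after pd2 has been deleted (hypothetical, not real cluster output):
sample = '''
{
  "members": [
    {"name": "pd1", "client_urls": ["http://172.16.10.1:2379"]},
    {"name": "pd3", "client_urls": ["http://172.16.10.3:2379"]}
  ]
}
'''

print(member_removed(sample, "pd2"))  # pd2 was deleted in the example above
print(member_removed(sample, "pd1"))  # pd1 is still a member
```

The same pattern applies to the TiKV scale-in check: parse the `store` output and wait until the removed store id no longer appears.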
+ +## Destroy the TiDB cluster + +Stop the cluster: + +``` +ansible-playbook stop.yml +``` + +Destroy the cluster: + +``` +ansible-playbook unsafe_cleanup.yml +``` diff --git a/v1.0/README.md b/v1.0/README.md new file mode 100755 index 0000000000000..abe231bca0447 --- /dev/null +++ b/v1.0/README.md @@ -0,0 +1,213 @@ +# TiDB Documentation + +## Documentation List + ++ About TiDB + - [TiDB Introduction](overview.md#tidb-introduction) + - [TiDB Architecture](overview.md#tidb-architecture) +- [TiDB Quick Start Guide](QUICKSTART.md) ++ TiDB User Guide + + TiDB Server Administration + - [The TiDB Server](sql/tidb-server.md) + - [The TiDB Command Options](sql/server-command-option.md) + - [The TiDB Data Directory](sql/tidb-server.md#tidb-data-directory) + - [The TiDB System Database](sql/system-database.md) + - [The TiDB System Variables](sql/variable.md) + - [The Proprietary System Variables and Syntax in TiDB](sql/tidb-specific.md) + - [The TiDB Server Logs](sql/tidb-server.md#tidb-server-logs) + - [The TiDB Access Privilege System](sql/privilege.md) + - [TiDB User Account Management](sql/user-account-management.md) + - [Use Encrypted Connections](sql/encrypted-connections.md) + + SQL Optimization + - [Understand the Query Execution Plan](sql/understanding-the-query-execution-plan.md) + - [Introduction to Statistics](sql/statistics.md) + + Language Structure + - [Literal Values](sql/literal-values.md) + - [Schema Object Names](sql/schema-object-names.md) + - [Keywords and Reserved Words](sql/keywords-and-reserved-words.md) + - [User-Defined Variables](sql/user-defined-variables.md) + - [Expression Syntax](sql/expression-syntax.md) + - [Comment Syntax](sql/comment-syntax.md) + + Globalization + - [Character Set Support](sql/character-set-support.md) + - [Character Set Configuration](sql/character-set-configuration.md) + - [Time Zone](sql/time-zone.md) + + Data Types + - [Numeric Types](sql/datatype.md#numeric-types) + - [Date and Time 
Types](sql/datatype.md#date-and-time-types) + - [String Types](sql/datatype.md#string-types) + - [JSON Types](sql/datatype.md#json-types) + - [The ENUM data type](sql/datatype.md#the-enum-data-type) + - [The SET Type](sql/datatype.md#the-set-type) + - [Data Type Default Values](sql/datatype.md#data-type-default-values) + + Functions and Operators + - [Function and Operator Reference](sql/functions-and-operators-reference.md) + - [Type Conversion in Expression Evaluation](sql/type-conversion-in-expression-evaluation.md) + - [Operators](sql/operators.md) + - [Control Flow Functions](sql/control-flow-functions.md) + - [String Functions](sql/string-functions.md) + - [Numeric Functions and Operators](sql/numeric-functions-and-operators.md) + - [Date and Time Functions](sql/date-and-time-functions.md) + - [Bit Functions and Operators](sql/bit-functions-and-operators.md) + - [Cast Functions and Operators](sql/cast-functions-and-operators.md) + - [Encryption and Compression Functions](sql/encryption-and-compression-functions.md) + - [Information Functions](sql/information-functions.md) + - [JSON Functions](sql/json-functions.md) + - [Aggregate (GROUP BY) Functions](sql/aggregate-group-by-functions.md) + - [Miscellaneous Functions](sql/miscellaneous-functions.md) + - [Precision Math](sql/precision-math.md) + + SQL Statement Syntax + - [Data Definition Statements](sql/ddl.md) + - [Data Manipulation Statements](sql/dml.md) + - [Transactions](sql/transaction.md) + - [Database Administration Statements](sql/admin.md) + - [Prepared SQL Statement Syntax](sql/prepare.md) + - [Utility Statements](sql/util.md) + - [TiDB SQL Syntax Diagram](https://pingcap.github.io/sqlgram/) + - [JSON Functions and Generated Column](sql/json-functions-generated-column.md) + - [Connectors and APIs](sql/connection-and-APIs.md) + - [TiDB Transaction Isolation Levels](sql/transaction-isolation.md) + - [Error Codes and Troubleshooting](sql/error.md) + - [Compatibility with 
MySQL](sql/mysql-compatibility.md) + + Advanced Usage + - [Read Data From History Versions](op-guide/history-read.md) ++ TiDB Operations Guide + - [Hardware and Software Requirements](op-guide/recommendation.md) + + Deploy + - [Ansible Deployment (Recommended)](op-guide/ansible-deployment.md) + - [Offline Deployment Using Ansible](op-guide/offline-ansible-deployment.md) + - [Docker Deployment](op-guide/docker-deployment.md) + - [Docker Compose Deployment](op-guide/docker-compose.md) + - [Cross-Region Deployment](op-guide/location-awareness.md) + + Configure + - [Configuration Flags](op-guide/configuration.md) + - [Enable TLS Authentication](op-guide/security.md) + - [Generate Self-signed Certificates](op-guide/generate-self-signed-certificates.md) + + Monitor + - [Overview of the Monitoring Framework](op-guide/monitor-overview.md) + - [Key Metrics](op-guide/dashboard-overview-info.md) + - [Monitor a TiDB Cluster](op-guide/monitor.md) + + Scale + - [Scale a TiDB Cluster](op-guide/horizontal-scale.md) + - [Use Ansible to Scale](QUICKSTART.md#scale-the-tidb-cluster) + - [Upgrade](op-guide/ansible-deployment.md#perform-rolling-update) + - [Tune Performance](op-guide/tune-tikv.md) + + Backup and Migrate + - [Backup and Restore](op-guide/backup-restore.md) + + Migrate + - [Migration Overview](op-guide/migration-overview.md) + - [Migrate All the Data](op-guide/migration.md#use-the-mydumper--loader-tool-to-export-and-import-all-the-data) + - [Migrate the Data Incrementally](op-guide/migration.md#use-the-syncer-tool-to-import-data-incrementally-optional) + - [Deploy TiDB Using the Binary](op-guide/binary-deployment.md) + - [Troubleshoot](trouble-shooting.md) ++ TiDB Utilities + - [Syncer User Guide](tools/syncer.md) + - [Loader User Guide](tools/loader.md) + - [TiDB-Binlog User Guide](tools/tidb-binlog-kafka.md) + - [PD Control User Guide](tools/pd-control.md) ++ The TiDB Connector for Spark + - [Quick Start Guide](tispark/tispark-quick-start-guide.md) + - [User 
Guide](tispark/tispark-user-guide.md) +- [Frequently Asked Questions (FAQ)](FAQ.md) +- [TiDB Best Practices](https://pingcap.github.io/blog/2017/07/24/tidbbestpractice/) +- [Releases](releases/rn.md) +- [TiDB Adopters](adopters.md) +- [TiDB Roadmap](https://github.com/pingcap/docs/blob/master/ROADMAP.md) +- [Connect with us](community.md) ++ More Resources + - [Frequently Used Tools](https://github.com/pingcap/tidb-tools) + - [PingCAP Blog](https://pingcap.com/blog/) + - [Weekly Update](https://pingcap.com/weekly/) + +## TiDB Introduction + +TiDB (The pronunciation is: /'taɪdiːbi:/ tai-D-B, etymology: titanium) is a Hybrid Transactional/Analytical Processing (HTAP) database. Inspired by the design of Google F1 and Google Spanner, TiDB features infinite horizontal scalability, strong consistency, and high availability. The goal of TiDB is to serve as a one-stop solution for online transactions and analyses. + +- __Horizontal and linear scalability__ +- __Compatible with MySQL protocol__ +- __Automatic failover and high availability__ +- __Consistent distributed transactions__ +- __Online DDL__ +- __Multiple storage engine support__ +- __Highly concurrent and real-time writing and query of large volume of data (HTAP)__ + +TiDB is designed to support both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) scenarios. For complex OLAP scenarios, use [TiSpark](tispark/tispark-user-guide.md). + +Read the following three articles to understand TiDB techniques: + +- [Data Storage](https://pingcap.github.io/blog/2017/07/11/tidbinternal1/) +- [Computing](https://pingcap.github.io/blog/2017/07/11/tidbinternal2/) +- [Scheduling](https://pingcap.github.io/blog/2017/07/20/tidbinternal3/) + +## Roadmap + +Read the [Roadmap](https://github.com/pingcap/docs/blob/master/ROADMAP.md). 
+ +## Connect with us + +- **Twitter**: [@PingCAP](https://twitter.com/PingCAP) +- **Reddit**: https://www.reddit.com/r/TiDB/ +- **Stack Overflow**: https://stackoverflow.com/questions/tagged/tidb +- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user) + +## TiDB architecture + +To better understand TiDB’s features, you need to understand the TiDB architecture. + +![image alt text](media/tidb-architecture.png) + +The TiDB cluster has three components: the TiDB server, the PD server, and the TiKV server. + +### TiDB server + +The TiDB server is in charge of the following operations: + +1. Receiving the SQL requests + +2. Processing the SQL-related logic + +3. Locating the TiKV address for storing and computing data through Placement Driver (PD) + +4. Exchanging data with TiKV + +5. Returning the result + +The TiDB server is stateless. It does not store data and it is for computing only. TiDB is horizontally scalable and provides a unified interface to the outside through load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5. + +### Placement Driver server + +The Placement Driver (PD) server is the managing component of the entire cluster and is in charge of the following three operations: + +1. Storing the metadata of the cluster such as the Region location of a specific key. + +2. Scheduling and load balancing Regions in the TiKV cluster, including but not limited to data migration and Raft group leader transfer. + +3. Allocating the transaction ID that is globally unique and monotonically increasing. + +As a cluster, PD needs to be deployed to an odd number of nodes; it is recommended to deploy at least 3 online nodes. + +### TiKV server + +The TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data.
Each Region stores the data for a particular Key Range, which is a left-closed and right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes is scheduled by PD. Region is also the basic unit for scheduling load balancing. + +## Features + +### Horizontal Scalability + +Horizontal scalability is the most important feature of TiDB. The scalability includes two aspects: the computing capability and the storage capacity. The TiDB server processes the SQL requests. As the business grows, you can achieve higher overall processing capability and throughput by simply adding more TiDB server nodes. Data is stored in TiKV. As the size of the data grows, you can scale the storage by adding more TiKV server nodes. PD schedules data in Regions among the TiKV nodes and migrates part of the data to a newly added node. So in the early stage, you can deploy only a few service instances. For example, it is recommended to deploy at least 3 TiKV nodes, 3 PD nodes and 2 TiDB nodes. As business grows, more TiDB and TiKV instances can be added on demand. + +### High availability + +High availability is another important feature of TiDB. All of the three components, TiDB, TiKV and PD, can tolerate the failure of some instances without impacting the availability of the entire cluster. See the following for details about the availability of each component, the consequence of a single instance failure, and how to recover. + +#### TiDB + +TiDB is stateless and it is recommended to deploy at least two instances. The front end provides services to the outside through load balancing components. If one of the instances is down, the sessions on that instance are impacted.
From the application’s point of view, it is a single request failure but the service can be regained by reconnecting to the TiDB server. If a single instance is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### PD + +PD is a cluster and the data consistency is ensured using the Raft protocol. If an instance is down but the instance is not a Raft Leader, there is no impact on the service at all. If the instance is a Raft Leader, a new Leader is elected to recover the service. During the election, which takes approximately 3 seconds, PD cannot provide service. It is recommended to deploy three instances. If one of the instances is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### TiKV + +TiKV is a cluster and the data consistency is ensured using the Raft protocol. The number of replicas is configurable and the default is 3 replicas. The load of TiKV servers is balanced through PD. If one of the nodes is down, all the Regions in the node are impacted. If the failed node is the Leader of a Region, the service is interrupted and a new election is initiated. If the failed node is a Follower of a Region, the service is not impacted. If a TiKV node is down for a period of time (the default value is 10 minutes), PD moves the data to another TiKV node. diff --git a/v1.0/ROADMAP.md b/v1.0/ROADMAP.md new file mode 100755 index 0000000000000..29ce4beb21b55 --- /dev/null +++ b/v1.0/ROADMAP.md @@ -0,0 +1,71 @@ +--- +title: TiDB Roadmap +category: Roadmap +--- + +# TiDB Roadmap + +This document defines the roadmap for TiDB development.
+ +## TiDB: +- [ ] Optimizer + - [ ] Refactor Ranger + - [ ] Optimize the statistics info + - [ ] Optimize the cost model +- [ ] Executor + - [ ] Parallel Operators + - [ ] Compact Row Format to reduce memory usage + - [ ] File Sort +- [ ] Support View +- [ ] Support Window Function +- [ ] Common Table Expression +- [ ] Table Partition +- [ ] Hash time index to resolve the issue with hot regions +- [ ] Reverse Index +- [ ] Cluster Index +- [ ] Improve DDL +- [ ] Support `utf8_general_ci` collation + +## TiKV: + +- [ ] Raft + - [ ] Region merge + - [ ] Local read thread + - [ ] Multi-thread raftstore + - [ ] None voter + - [ ] Pre-vote +- [ ] RocksDB + - [ ] DeleteRange +- [ ] Transaction + - [ ] Optimize transaction conflicts +- [ ] Coprocessor + - [ ] Streaming +- [ ] Tool + - [ ] Import distributed data + - [ ] Export distributed data + - [ ] Disaster Recovery +- [ ] Flow control and degradation + +## PD: +- [ ] Improve namespace + - [ ] Different replication policies for different namespaces and tables + + - [ ] Decentralize scheduling table regions + - [ ] Scheduler supports prioritization to be more controllable + +- [ ] Use machine learning to optimize scheduling + +## TiSpark: + +- [ ] Limit / Order push-down +- [ ] Access through the DAG interface and deprecate the Select interface +- [ ] Index Join and parallel merge join +- [ ] Data Federation + +## SRE & tools: + +- [ ] Kubernetes-based integration for the on-premise version +- [ ] Dashboard UI for the on-premise version +- [ ] The cluster backup and recovery tool +- [ ] The data migration tool (Wormhole V2) +- [ ] Security and system diagnosis diff --git a/v1.0/adopters.md b/v1.0/adopters.md new file mode 100755 index 0000000000000..456de4df295ba --- /dev/null +++ b/v1.0/adopters.md @@ -0,0 +1,25 @@ +--- +title: TiDB Adopters +category: adopters +--- + +# TiDB Adopters + +This is a list of TiDB adopters in various industries.
+ +- [Mobike (Ridesharing)](https://mobike.com/global/) +- [Yiguo.com (E-commerce)](https://www.datanami.com/2018/02/22/hybrid-database-capturing-perishable-insights-yiguo/) +- [Phoenix TV (Media)](http://www.ifeng.com/) +- [Ping++ (Payment)](https://www.crunchbase.com/organization/ping-5) +- [Qunar.com (Travel)](https://www.crunchbase.com/organization/qunar-com) +- [LinkDoc Technology (HealthTech)](https://www.crunchbase.com/organization/linkdoc-technology) +- [Yuanfudao (EdTech)](https://www.crunchbase.com/organization/yuanfudao) +- [ZuoZhu Financial (FinTech)](http://www.zuozh.com/) +- [360 Financial (FinTech)](https://jinrong.360jie.com.cn/) +- [GAEA (Gaming)](http://gaea.com/en) +- [YOOZOO GAMES (Gaming)](http://www.yoozoo.com/en) +- [Hainan eKing Technology (Enterprise Technology)](https://www.crunchbase.com/organization/hainan-eking-technology) +- [2Dfire (FoodTech)](http://www.2dfire.com/) +- [G7 (Internet of Things)](https://www.english.g7.com.cn/) +- [Yimian Data (Big Data)](https://www.yimian.com.cn) +- [Wanda Internet Technology Group (Big Data)](http://www.wanda-tech.cn/en/) \ No newline at end of file diff --git a/v1.0/benchmark/sysbench.md b/v1.0/benchmark/sysbench.md new file mode 100755 index 0000000000000..a47d2e96dab1d --- /dev/null +++ b/v1.0/benchmark/sysbench.md @@ -0,0 +1,210 @@ +--- +title: Performance test result for TiDB using Sysbench +category: benchmark +draft: true +--- + +# Performance test result for TiDB using Sysbench + +## Test purpose + +The purpose of this test is to evaluate the performance and horizontal scalability of TiDB in OLTP scenarios. + +> **Note**: The results of the testing might vary based on different environmental dependencies. 
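For orientation, the shape of such an OLTP run can be sketched with Sysbench's bundled `oltp_read_write` workload. This is only an illustration, not the exact commands used for this report: the host, user, and database names are examples, TiDB's default MySQL-protocol port 4000 is assumed, and the actual Lua scripts come from the test script repository listed under "Test environment".

```shell
# Hypothetical sysbench invocation; endpoint and parameter values are examples only.
# TiDB speaks the MySQL protocol, so sysbench's MySQL driver is used unchanged.
SYSBENCH_OPTS="--mysql-host=172.16.10.8 --mysql-port=4000 --mysql-user=root \
--mysql-db=sbtest --tables=32 --table-size=1000000 --threads=256 \
--time=600 --report-interval=10"

# Load the test tables, run the read-write workload, then drop the tables.
for phase in prepare run cleanup; do
    echo "sysbench oltp_read_write $SYSBENCH_OPTS $phase"
done
```

The `prepare` phase creates and fills the `sbtest*` tables, `run` drives the mixed read-write load, and `cleanup` drops the tables afterwards.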
+ +## Test version, date and place + +TiDB version: v1.0.0 + +Date: October 20, 2017 + +Place: Beijing + +## Test environment + +- IDC machines: + + | Category | Detail | + | :--------| :---------| + | OS | Linux (CentOS 7.3.1611) | + | CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | + | RAM | 128GB | + | DISK | 1.5T SSD * 2 + Optane SSD * 1 | + +- Sysbench version: 1.0.6 + +- Test script: https://github.com/pingcap/tidb-bench/tree/cwen/not_prepared_statement/sysbench. + +## Test scenarios + +### Scenario one: RW performance test using Sysbench + +The structure of the table used for the test: + +``` sql +CREATE TABLE `sbtest` ( + `id` int(10) unsigned NOT NULL AUTO_INCREMENT, + `k` int(10) unsigned NOT NULL DEFAULT '0', + `c` char(120) NOT NULL DEFAULT '', + `pad` char(60) NOT NULL DEFAULT '', + PRIMARY KEY (`id`), + KEY `k_1` (`k`) +) ENGINE=InnoDB +``` + +The deployment and configuration details: + +``` +// TiDB deployment +172.16.20.4 4*tikv 1*tidb 1*sysbench +172.16.20.6 4*tikv 1*tidb 1*sysbench +172.16.20.7 4*tikv 1*tidb 1*sysbench +172.16.10.8 1*tidb 1*pd 1*sysbench + +// Each physical node has three disks. +data3: 2 tikv (Optane SSD) +data2: 1 tikv +data1: 1 tikv + +// TiKV configuration +sync-log = false +grpc-concurrency = 8 +grpc-raft-conn-num = 24 +[defaultcf] +block-cache-size = "12GB" +[writecf] +block-cache-size = "5GB" +[raftdb.defaultcf] +block-cache-size = "2GB" + +// MySQL deployment +// Use the semi-synchronous replication and asynchronous replication to deploy two replicas respectively. 
+172.16.20.4 master +172.16.20.6 slave +172.16.20.7 slave +172.16.10.8 1*sysbench +MySQL version: 5.6.37 + +// MySQL configuration +thread_cache_size = 64 +innodb_buffer_pool_size = 64G +innodb_file_per_table = 1 +innodb_flush_log_at_trx_commit = 0 +datadir = /data3/mysql +max_connections = 2000 +``` + +- OLTP RW test + + | - | Table count | Table size | Sysbench threads | TPS | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | :---: | + | TiDB | 32 | 1 million | 64 * 4 | 3834 | 76692 | 67.04 ms / 110.88 ms | + | TiDB | 32 | 1 million | 128 * 4 | 4172 | 83459 | 124.00 ms / 194.21 ms | + | TiDB | 32 | 1 million | 256 * 4 | 4577 | 91547 | 228.36 ms / 334.02 ms | + | TiDB | 32 | 5 million | 256 * 4 | 4032 | 80657 | 256.62 ms / 443.88 ms | + | TiDB | 32 | 10 million | 256 * 4 | 3811 | 76233 | 269.46 ms / 505.20 ms | + | MySQL | 32 | 1 million | 64 | 2392 | 47845 | 26.75 ms / 73.13 ms | + | MySQL | 32 | 1 million | 128 | 2493 | 49874 | 51.32 ms / 173.58 ms | + | MySQL | 32 | 1 million | 256 | 2561 | 51221 | 99.95 ms / 287.38 ms | + | MySQL | 32 | 5 million | 256 | 1902 | 38045 | 134.56 ms / 363.18 ms | + | MySQL | 32 | 10 million | 256 | 1770 | 35416 | 144.55 ms / 383.33 ms | + +![](../media/sysbench-01.png) + +![](../media/sysbench-02.png) + +- `Select` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | TiDB | 32 | 1 million | 64 * 4 | 160299 | 1.61 ms / 50.06 ms | + | TiDB | 32 | 1 million | 128 * 4 | 183347 | 2.85 ms / 8.66 ms | + | TiDB | 32 | 1 million | 256 * 4 | 196515 | 5.42 ms / 14.43 ms | + | TiDB | 32 | 5 million | 256 * 4 | 187628 | 5.66 ms / 15.04 ms | + | TiDB | 32 | 10 million | 256 * 4 | 187440 | 5.65 ms / 15.37 ms | + | MySQL | 32 | 1 million | 64 | 359572 | 0.18 ms / 0.45 ms | + | MySQL | 32 | 1 million | 128 | 410426 | 0.31 ms / 0.74 ms | + | MySQL | 32 | 1 million | 256 | 396867 | 0.64 ms / 1.58 ms | + | MySQL | 32 | 5 million | 
256 | 386866 | 0.66 ms / 1.64 ms | + | MySQL | 32 | 10 million | 256 | 388273 | 0.66 ms / 1.64 ms | + +![](../media/sysbench-03.png) + +![](../media/sysbench-04.png) + +- `Insert` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | TiDB | 32 | 1 million | 64 * 4 | 25308 | 10.12 ms / 25.40 ms | + | TiDB | 32 | 1 million | 128 * 4 | 28773 | 17.80 ms / 44.58 ms | + | TiDB | 32 | 1 million | 256 * 4 | 32641 | 31.38 ms / 73.47 ms | + | TiDB | 32 | 5 million | 256 * 4 | 30430 | 33.65 ms / 79.32 ms | + | TiDB | 32 | 10 million | 256 * 4 | 28925 | 35.41 ms / 78.96 ms | + | MySQL | 32 | 1 million | 64 | 14806 | 4.32 ms / 9.39 ms | + | MySQL | 32 | 1 million | 128 | 14884 | 8.58 ms / 21.11 ms | + | MySQL | 32 | 1 million | 256 | 14508 | 17.64 ms / 44.98 ms | + | MySQL | 32 | 5 million | 256 | 10593 | 24.16 ms / 82.96 ms | + | MySQL | 32 | 10 million | 256 | 9813 | 26.08 ms / 94.10 ms | + +![](../media/sysbench-05.png) + +![](../media/sysbench-06.png) + +### Scenario two: TiDB horizontal scalability test + +The deployment and configuration details: + +``` +// TiDB deployment +172.16.20.3 4*tikv +172.16.10.2 1*tidb 1*pd 1*sysbench + +// Each physical node has three disks. 
+data3: 2 tikv (Optane SSD) +data2: 1 tikv +data1: 1 tikv + +// TiKV configuration +sync-log = false +grpc-concurrency = 8 +grpc-raft-conn-num = 24 +[defaultcf] +block-cache-size = "12GB" +[writecf] +block-cache-size = "5GB" +[raftdb.defaultcf] +block-cache-size = "2GB" +``` + +- OLTP RW test + + | - | Table count | Table size | Sysbench threads | TPS | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | :---: | + | 1 TiDB physical node | 32 | 1 million | 256 * 1 | 2495 | 49902 | 102.42 ms / 125.52 ms | + | 2 TiDB physical nodes | 32 | 1 million | 256 * 2 | 5007 | 100153 | 102.23 ms / 125.52 ms | + | 4 TiDB physical nodes | 32 | 1 million | 256 * 4 | 8984 | 179692 | 114.96 ms / 176.73 ms | + | 6 TiDB physical nodes | 32 | 5 million | 256 * 6 | 12953 | 259072 | 117.80 ms / 200.47 ms | + +![](../media/sysbench-07.png) + +- `Select` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | 1 TiDB physical node | 32 | 1 million | 256 * 1 | 71841 | 3.56 ms / 8.74 ms | + | 2 TiDB physical nodes | 32 | 1 million | 256 * 2 | 146615 | 3.49 ms / 8.74 ms | + | 4 TiDB physical nodes | 32 | 1 million | 256 * 4 | 289933 | 3.53 ms / 8.74 ms | + | 6 TiDB physical nodes | 32 | 5 million | 256 * 6 | 435313 | 3.55 ms / 9.17 ms | + +![](../media/sysbench-08.png) + +- `Insert` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | 3 TiKV physical nodes | 32 | 1 million | 256 * 3 | 40547 | 18.93 ms / 38.25 ms | + | 5 TiKV physical nodes | 32 | 1 million | 256 * 3 | 60689 | 37.96 ms / 29.9 ms | + | 7 TiKV physical nodes | 32 | 1 million | 256 * 3 | 80087 | 9.62 ms / 21.37 ms | + +![](../media/sysbench-09.png) diff --git a/v1.0/circle.yml b/v1.0/circle.yml new file mode 100755 index 0000000000000..a0e90c9a5346b --- /dev/null +++ b/v1.0/circle.yml @@ -0,0 +1,36 @@ +version: 2 + +jobs: + 
build: + docker: + - image: andelf/doc-build:0.1.9 + working_directory: ~/pingcap/docs + steps: + - checkout + + - run: + name: "Special Check for Golang User - YOUR TAB SUCKS" + command: grep -RP '\t' * | tee | grep '.md' && exit 1; echo ok + + - run: + name: "Merge Markdown Files" + command: python3 scripts/merge_by_toc.py + + - run: + name: "Generate PDF" + command: scripts/generate_pdf.sh + + - deploy: + name: "Publish PDF" + command: | + if [ "${CIRCLE_BRANCH}" == "master" ]; then + sudo bash -c 'echo "119.188.128.5 uc.qbox.me" >> /etc/hosts'; + python3 scripts/upload.py output.pdf tidb-manual-en.pdf; + fi + + - run: + name: "Copy Generated PDF" + command: mkdir /tmp/artifacts && cp output.pdf doc.md /tmp/artifacts + + - store_artifacts: + path: /tmp/artifacts diff --git a/v1.0/community.md b/v1.0/community.md new file mode 100755 index 0000000000000..6b023b0007900 --- /dev/null +++ b/v1.0/community.md @@ -0,0 +1,11 @@ +--- +title: Connect with us +category: community +--- + +# Connect with us + +- **Twitter**: [@PingCAP](https://twitter.com/PingCAP) +- **Reddit**: https://www.reddit.com/r/TiDB/ +- **Stack Overflow**: https://stackoverflow.com/questions/tagged/tidb +- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user) diff --git a/v1.0/dev-guide/deployment.md b/v1.0/dev-guide/deployment.md new file mode 100755 index 0000000000000..c3b5c5f791221 --- /dev/null +++ b/v1.0/dev-guide/deployment.md @@ -0,0 +1,15 @@ +# Build for deployment + +## Overview + +Note: **The easiest way to deploy TiDB is to use the official binary package directly, see [Binary Deployment](../op-guide/binary-deployment.md).** + +If you want to build the TiDB project and deploy the binaries to other machines to run them, you can follow this guide. + +Check the [supported platforms](./requirements.md#supported-platforms) and [prerequisites](./requirements.md#prerequisites) first. 
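Before building, it helps to confirm that the toolchain versions meet the prerequisites (Go 1.8+, for example). The real checks live in the check requirement script referenced in the requirements page; the snippet below is only a minimal sketch of how such a version check can work, and the `version_ge` helper and `sed` pattern are illustrative, not part of the actual script.

```shell
#!/bin/sh
# Minimal sketch of a prerequisite check; version_ge is a hypothetical helper.
# version_ge A B: succeeds when dotted version string A >= B (relies on sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Building TiDB needs Go 1.8+; extract the version from `go version` if installed.
go_version=$(go version 2>/dev/null | sed -n 's/.*go\([0-9][0-9.]*\).*/\1/p')
if [ -n "$go_version" ] && version_ge "$go_version" "1.8"; then
    echo "Go $go_version satisfies the 1.8+ requirement"
else
    echo "Go 1.8+ is required" >&2
fi
```

The same comparison pattern extends to the other prerequisites, such as checking `gcc -dumpversion` against 4.8.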
+ +## Building and installing TiDB components + +You can use the [build script](../scripts/build.sh) to build and install TiDB components in the `bin` directory. + +You can use the [update script](../scripts/update.sh) to update all the TiDB components to the latest version. \ No newline at end of file diff --git a/v1.0/dev-guide/development.md b/v1.0/dev-guide/development.md new file mode 100755 index 0000000000000..c25383b9b8fef --- /dev/null +++ b/v1.0/dev-guide/development.md @@ -0,0 +1,68 @@ +# Build For Development + +## Overview + +If you want to develop the TiDB project, you can follow this guide. + +Before you begin, check the [supported platforms](./requirements.md#supported-platforms) and [prerequisites](./requirements.md#prerequisites) first. + +## Build TiKV + +After you install the RocksDB shared library, you can build TiKV directly without `ROCKSDB_SYS_STATIC`. + ++ Get the TiKV source code. + + ```bash + git clone https://github.com/pingcap/tikv.git + ``` ++ Enter the source directory to build and install the binary in the `bin` directory. + + ```bash + make + ``` + ++ Run unit tests. + + ```bash + make test + ``` + +## Build TiDB + ++ Make sure the `GOPATH` environment variable is set correctly. + ++ Get the TiDB source code. + + ```bash + git clone https://github.com/pingcap/tidb.git $GOPATH/src/github.com/pingcap/tidb + ``` + ++ Enter `$GOPATH/src/github.com/pingcap/tidb` to build and install the binary in the `bin` directory. + + ```bash + make + ``` ++ Run unit tests. + + ```bash + make test + ``` + +## Build PD + ++ Get the PD source code. + + ```bash + git clone https://github.com/pingcap/pd.git $GOPATH/src/github.com/pingcap/pd + ``` + ++ Enter `$GOPATH/src/github.com/pingcap/pd` to build and install the binary in the `bin` directory. + + ```bash + make + ``` ++ Run unit tests. 
+ + ```bash + make test + ``` diff --git a/v1.0/dev-guide/requirements.md b/v1.0/dev-guide/requirements.md new file mode 100755 index 0000000000000..2342efd8a3085 --- /dev/null +++ b/v1.0/dev-guide/requirements.md @@ -0,0 +1,21 @@ +# Build requirements + +## Supported platforms + +The following table lists TiDB support for common architectures and operating systems. + +|Architecture|Operating System|Status| +|------------|----------------|------| +|AMD64|Linux Ubuntu (14.04+)|Stable| +|AMD64|Linux CentOS (7+)|Stable| +|AMD64|Mac OSX|Experimental| + +## Prerequisites + ++ Go [1.8+](https://golang.org/doc/install) ++ Rust [nightly version](https://www.rust-lang.org/downloads.html) ++ GCC 4.8+ with static library + +The [check requirement script](../scripts/check_requirement.sh) can help you check prerequisites and +install the missing ones automatically. + diff --git a/v1.0/etc/DiskPerformance.json b/v1.0/etc/DiskPerformance.json new file mode 100755 index 0000000000000..95921b5f4fe08 --- /dev/null +++ b/v1.0/etc/DiskPerformance.json @@ -0,0 +1,935 @@ +{ + "__inputs": [ + { + "name": "DS_USER-CREDITS", + "label": "user-credits", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "text", + "name": "Text", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 1, + "hideControls": true, + "id": null, + "links": [], + "refresh": false, + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "content": "You can click on an individual disk device on the legend to filter on it or multiple ones by holding Alt 
button.", + "datasource": "${DS_USER-CREDITS}", + "editable": true, + "error": false, + "height": "50px", + "id": 8, + "links": [], + "mode": "text", + "span": 12, + "style": {}, + "title": "", + "transparent": true, + "type": "text" + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows average latency for Reads and Writes IO Devices. Higher than typical latency for highly loaded storage indicates saturation (overload) and is frequent cause of performance problems. Higher than normal latency also can indicate internal storage problems.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(rate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[$interval]) / rate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (irate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(rate(node_disk_write_time_ms{device=~\"$device\", instance=\"$host\"}[$interval]) / 
rate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (irate(node_disk_write_time_ms{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Latency", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": "", + "logBase": 2, + "max": null, + "min": 0, + "show": true + }, + { + "format": "ms", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows amount of physical IOs (reads and writes) different devices are serving. 
Spikes in number of IOs served often corresponds to performance problems due to IO subsystem overload.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 15, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Operations", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "iops", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": 
"", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows volume of reads and writes the storage is handling. This can be better measure of IO capacity usage for network attached and SSD storage as it is often bandwidth limited. Amount of data being written to the disk can be used to estimate Flash storage life time.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 16, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_bytes_read{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_bytes_read{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_bytes_written{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_bytes_written{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Bandwidth", + "tooltip": { + "msResolution": false, + 
"shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows how much disk was loaded for reads or writes as average number of outstanding requests at different period of time. High disk load is a good measure of actual storage utilization. Different storage types handle load differently - some will show latency increases on low loads others can handle higher load with no problems.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 14, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[$interval])/1000 or irate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[5m])/1000", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_write_time_ms{device=~\"$device\", 
instance=\"$host\"}[$interval])/1000 or irate(node_disk_write_time_ms{device=~\"$device\", instance=\"$host\"}[5m])/1000", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Load", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows disk Utilization as percent of the time when there was at least one IO request in flight. It is designed to match utilization available in iostat tool. It is not very good measure of true IO Capacity Utilization. 
Consider looking at IO latency and Disk Load Graphs instead.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 17, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": "avg", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_io_time_ms{device=~\"$device\", instance=\"$host\"}[$interval])/1000 or irate(node_disk_io_time_ms{device=~\"$device\", instance=\"$host\"}[5m])/1000", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk IO Utilization", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows how effectively Operating System is able to merge logical IO requests into physical requests. 
This is a good measure of the IO locality which can be used for workload characterization.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 18, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(1 + rate(node_disk_reads_merged{device=~\"$device\", instance=\"$host\"}[$interval]) / rate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (1 + irate(node_disk_reads_merged{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read Ratio: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(1 + rate(node_disk_writes_merged{device=~\"$device\", instance=\"$host\"}[$interval]) / rate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (1 + irate(node_disk_writes_merged{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write Ratio: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Operations Merge Ratio", + "tooltip": { + "msResolution": false, + "shared": true, + 
"sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": { + "Read IO size: sdb": "#2F575E", + "Read: sdb": "#3F6833" + }, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows average size of a single disk operation.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 20, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_sectors_read{instance=\"$host\", device=~\"$device\"}[$interval]) * 512 / rate(node_disk_reads_completed{instance=\"$host\", device=~\"$device\"}[$interval]) or irate(node_disk_sectors_read{instance=\"$host\", device=~\"$device\"}[5m]) * 512 / irate(node_disk_reads_completed{instance=\"$host\", device=~\"$device\"}[5m]) ", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read size: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_sectors_written{instance=\"$host\", device=~\"$device\"}[$interval]) * 512 / rate(node_disk_writes_completed{instance=\"$host\", 
device=~\"$device\"}[$interval]) or irate(node_disk_sectors_written{instance=\"$host\", device=~\"$device\"}[5m]) * 512 / irate(node_disk_writes_completed{instance=\"$host\", device=~\"$device\"}[5m]) ", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write size: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk IO Size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Disk Stats", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "allFormat": "glob", + "auto": true, + "auto_count": 200, + "auto_min": "1s", + "current": { + "text": "auto", + "value": "$__auto_interval" + }, + "datasource": "Prometheus", + "hide": 0, + "includeAll": false, + "label": "Interval", + "multi": false, + "multiFormat": "glob", + "name": "interval", + "options": [ + { + "selected": true, + "text": "auto", + "value": "$__auto_interval" + }, + { + "selected": false, + "text": "1s", + "value": "1s" + }, + { + "selected": false, + "text": "5s", + "value": "5s" + }, + { + "selected": false, + "text": "1m", + "value": "1m" + }, + { + "selected": false, + "text": "5m", + "value": "5m" + }, + { + "selected": false, + "text": "1h", + "value": "1h" + }, + { + "selected": false, + "text": "6h", + "value": "6h" + }, + { + "selected": false, + "text": "1d", + "value": "1d" + } + ], + "query": 
"1s,5s,1m,5m,1h,6h,1d", + "refresh": 2, + "type": "interval" + }, + { + "allFormat": "glob", + "allValue": null, + "current": {}, + "datasource": "${DS_USER-CREDITS}", + "hide": 0, + "includeAll": false, + "label": "Host", + "multi": false, + "multiFormat": "regex values", + "name": "host", + "options": [], + "query": "label_values(node_disk_reads_completed, instance)", + "refresh": 1, + "refresh_on_load": false, + "regex": "", + "sort": 1, + "tagValuesQuery": "instance", + "tags": [], + "tagsQuery": "up", + "type": "query", + "useTags": false + }, + { + "allFormat": "glob", + "allValue": null, + "current": {}, + "datasource": "${DS_USER-CREDITS}", + "hide": 0, + "includeAll": true, + "label": "Device", + "multi": true, + "multiFormat": "regex values", + "name": "device", + "options": [], + "query": "label_values(node_disk_reads_completed{instance=\"$host\", device!~\"dm-.+\"}, device)", + "refresh": 1, + "refresh_on_load": false, + "regex": "", + "sort": 1, + "tagValuesQuery": "instance", + "tags": [], + "tagsQuery": "up", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-12h", + "to": "now" + }, + "timepicker": { + "collapse": false, + "enable": true, + "notice": false, + "now": true, + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "status": "Stable", + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ], + "type": "timepicker" + }, + "timezone": "browser", + "title": "Disk Performance", + "version": 1 +} \ No newline at end of file diff --git a/v1.0/etc/Drainer.json b/v1.0/etc/Drainer.json new file mode 100755 index 0000000000000..7185065829ceb --- /dev/null +++ b/v1.0/etc/Drainer.json @@ -0,0 +1,1070 @@ +{ + "__inputs": [ + { + "name": "DS_Drainer", + "label": "Drainer", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + 
"id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": "Singlestat", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [], + "refresh": "5s", + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(binlog_pump_rpc_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} : {{method}}", + "metric": "binlog_cistern_rpc_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "RPC QPS(pump)", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", 
+ "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, rate(binlog_pump_rpc_duration_seconds_bucket[1m]))", + "intervalFactor": 2, + "legendFormat": "{{instance}} : {{method}}", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% RPC Latency(pump)", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 34, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, 
+ { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_window{marker=\"upper\", }/(2^18*10^3)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_window", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave upper boundary", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 40, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_window{marker=\"lower\", }/(2^18*10^3)", + "intervalFactor": 2, + "legendFormat": "", + "metric": 
"binlog_drainer_window", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave lower boundary", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 37, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 2, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_position{}/((2^18)*1000)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_position", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave position", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + 
"thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 28, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 2, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_error_binlog_count{}", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_error_binlog_count", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "error binlogs", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 29, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 2, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", 
+ "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_query_tikv_count{}", + "intervalFactor": 2, + "metric": "binlog_drainer_query_tikv_count", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave tikv query", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 38, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "(binlog_drainer_window{marker=\"upper\", } - ignoring(marker)binlog_drainer_position{})/(2^18*10^3)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_position", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "synchronization delay", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "绉�", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": 
null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 6, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(binlog_drainer_event{}[1m])", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_event", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Drainer Event", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 15, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, 
rate(binlog_drainer_txn_duration_time_bucket[1m]))", + "intervalFactor": 2, + "legendFormat": "{{instance}}:{{job}}", + "metric": "binlog_drainer_txn_duration_time_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% drainer txn latency", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 9, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "go_goroutines{job=\"binlog\"}", + "intervalFactor": 2, + "metric": "go_goroutines", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Goroutine", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + 
"logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 39, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "go_memstats_heap_inuse_bytes{job=\"binlog\"}", + "intervalFactor": 2, + "metric": "go_goroutines", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memory", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bits", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-5m", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Drainer", + "version": 1 +} \ No 
newline at end of file diff --git a/v1.0/etc/Syncer.json b/v1.0/etc/Syncer.json new file mode 100755 index 0000000000000..db8ef34108afe --- /dev/null +++ b/v1.0/etc/Syncer.json @@ -0,0 +1,791 @@ +{ + "__inputs": [ + { + "name": "DS_BIGDATA-CLUSTER", + "label": "bigdata-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [], + "refresh": "5s", + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 1, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(syncer_binlog_events_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}}", + "metric": "syncer_binlog_events_total", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "binlog events", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + 
"format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 2, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": true, + "sort": "current", + "sortDesc": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_binlog_pos{node=\"syncer\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "metric": "", + "refId": "A", + "step": 30 + }, + { + "expr": "syncer_binlog_pos{node=\"master\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "binlog pos", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 4, + "legend": { + "avg": false, + "current": false, + "max": 
false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_binlog_file{node=\"master\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "refId": "A", + "step": 30 + }, + { + "expr": "syncer_binlog_file{node=\"syncer\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "syncer_binlog_file", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "decimals": null, + "fill": 1, + "id": 5, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_gtid", + 
"intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "syncer_gtid", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 2 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": " syncer_binlog_file{node=\"master\"} - ON(instance, job) syncer_binlog_file{node=\"syncer\"} ", + "intervalFactor": 10, + "legendFormat": "{{job}}", + "refId": "A", + "step": 50 + }, + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "name": "syncer_binlog_file alert", + "noDataState": "no_data", + "notifications": [ + { + "id": 1 + } + ] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 6, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + 
"span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": " syncer_binlog_file{node=\"master\"} - ON(instance, job) syncer_binlog_file{node=\"syncer\"} ", + "intervalFactor": 10, + "legendFormat": "{{job}}", + "refId": "A", + "step": 100 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 2 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "syncer_binlog_file", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Binlog file", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(syncer_binlog_skipped_events_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{job}} {{type}}", + "metric": "syncer_binlog_skipped_events_total", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "binlog skipped events", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": 
"individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 20 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "name": "syncer_txn_costs_gauge_in_second alert", + "noDataState": "no_data", + "notifications": [ + { + "id": 1 + } + ] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_txn_costs_gauge_in_second", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 20 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 20 + } + ], + "timeFrom": null, + "timeShift": null, + "title": 
"syncer_txn_costs_gauge_in_second", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-3h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Syncer", + "version": 24 +} \ No newline at end of file diff --git a/v1.0/etc/node.json b/v1.0/etc/node.json new file mode 100755 index 0000000000000..5444feb20b4d1 --- /dev/null +++ b/v1.0/etc/node.json @@ -0,0 +1,2490 @@ +{ + "__inputs": [ + { + "name": "DS_TEST-CLUSTER", + "label": "test-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": "Singlestat", + "version": "" + } + ], + "annotations": { + "list": [ + { + "datasource": "${DS_TEST-CLUSTER}", + "enable": true, + "expr": "ALERTS{instance=\"$host\", alertstate=\"firing\"}", + "iconColor": "rgb(252, 5, 
0)", + "name": "Alert", + "tagKeys": "severity", + "textFormat": "{{ instance }} : {{alertstate}}", + "titleFormat": "{{ alertname }}" + }, + { + "datasource": "${DS_TEST-CLUSTER}", + "enable": true, + "expr": "ALERTS{instance=\"$host\",alertstate=\"pending\"}", + "iconColor": "rgb(228, 242, 9)", + "name": "Warning", + "tagKeys": "severity", + "textFormat": "{{ instance }} : {{ alertstate }}", + "titleFormat": "{{ alertname }}" + } + ] + }, + "description": "Prometheus for system metrics. \r\nLoad, CPU, RAM, network, process ... ", + "editable": true, + "gnetId": 159, + "graphTooltip": 1, + "hideControls": false, + "id": null, + "links": [ + { + "asDropdown": false, + "icon": "external link", + "tags": [], + "type": "dashboards" + } + ], + "refresh": "30s", + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": true, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "format": "s", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "50px", + "id": 19, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "s", + "postfixFontSize": "80%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "calculatedInterval": "10m", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_time{instance=\"$host\"} - 
node_boot_time{instance=\"$host\"}", + "interval": "5m", + "intervalFactor": 1, + "legendFormat": "", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_time%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20node_boot_time%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A25%22%2C%22step_input%22%3A%22%22%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 300 + } + ], + "thresholds": "300,3600", + "title": "System Uptime", + "transparent": false, + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "55px", + "id": 25, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "count(node_cpu{mode=\"user\", instance=\"$host\"})", + "interval": "5m", + "intervalFactor": 1, + "refId": "A", + "step": 300 + } + ], + "thresholds": "", + "title": "Virtual CPUs", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + 
"valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "format": "bytes", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "55px", + "id": 26, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "node_memory_MemAvailable{instance=\"$host\"}", + "interval": "", + "intervalFactor": 1, + "legendFormat": "", + "metric": "node_memory_MemAvailable", + "refId": "A", + "step": 30 + } + ], + "thresholds": "", + "title": "RAM available", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": true, + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 0, + "editable": true, + "error": false, + "format": "percent", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "50px", + "id": 9, + "interval": null, + "links": [], + "mappingType": 1, + 
"mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": true, + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "calculatedInterval": "10m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(node_memory_MemAvailable{instance=\"$host\"} or (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"})) / node_memory_MemTotal{instance=\"$host\"} * 100", + "interval": "5m", + "intervalFactor": 1, + "legendFormat": "", + "metric": "node_mem", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%20%2F%20node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20*%20100%22%2C%22range_input%22%3A%2243201s%22%2C%22end_input%22%3A%222015-9-15%2013%3A54%22%2C%22step_input%22%3A%22%22%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 300 + } + ], + "thresholds": "90,95", + "title": "Memory Available", + "transparent": false, + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [], + "valueName": "current" + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "height": "260px", + "id": 2, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": true, + "show": true, + 
"total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "sum(rate(node_cpu{instance=\"$host\"}[$interval])) by (mode) * 100 / count_scalar(node_cpu{mode=\"user\", instance=\"$host\"}) or sum(irate(node_cpu{instance=\"$host\"}[5m])) by (mode) * 100 / count_scalar(node_cpu{mode=\"user\", instance=\"$host\"})", + "intervalFactor": 1, + "legendFormat": "{{ mode }}", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22sum(rate(node_cpu%7Binstance%3D%5C%22%24host%5C%22%7D%5B%24interval%5D))%20by%20(mode)%20*%20100%22%2C%22range_input%22%3A%223600s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 1 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "CPU Usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percent", + "label": "", + "logBase": 1, + "max": 100, + "min": 0, + "show": true + }, + { + "format": "short", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 18, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + 
"lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#E24D42", + "instance": "Load 1m" + }, + { + "color": "#E0752D", + "instance": "Load 5m" + }, + { + "color": "#E5AC0E", + "instance": "Load 15m" + } + ], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "10s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_load1{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Load 1m", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_load1%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%223601s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Afalse%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 1, + "target": "" + }, + { + "calculatedInterval": "10s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_load5{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Load 5m", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_load5%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%223600s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Afalse%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 1, + "target": "" + }, + { + "calculatedInterval": "10s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_load15{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Load 15m", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_load15%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%223600s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Afalse%2C%22tab%22%3A0%7D%5D", + "refId": "C", + "step": 1, + "target": "" + } + ], + 
"thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Load Average", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "System Stats", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "300px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "height": "", + "id": 6, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#0A437C", + "instance": "Used" + }, + { + "color": "#5195CE", + "instance": "Available" + }, + { + "color": "#052B51", + "instance": "Total", + "legend": false, + "stack": false + } + ], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemTotal{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Total", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "C", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemTotal{instance=\"$host\"} - (node_memory_MemAvailable{instance=\"$host\"} or (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"}))", + "intervalFactor": 1, + "legendFormat": "Used", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemAvailable{instance=\"$host\"} or (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"})", + "intervalFactor": 1, + "legendFormat": "Available", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memory", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "height": "", + "id": 29, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemTotal{instance=\"$host\"} - (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"})", + "intervalFactor": 1, + "legendFormat": "Used", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemFree{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Free", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_Buffers{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Buffers", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "D", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_Cached{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": 
"Cached", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "E", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memory Distribution", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": true, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 24, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#EF843C", + "instance": "Forks" + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_forks{instance=\"$host\"}[$interval]) or irate(node_forks{instance=\"$host\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Forks", + "metric": "", + 
"prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Forks", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": true, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 20, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#E24D42", + "instance": "Processes blocked waiting for I/O to complete" + }, + { + "color": "#6ED0E0", + "instance": "Processes in runnable state" + } + ], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_procs_running{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Processes in runnable state", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_procs_blocked{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Processes blocked waiting for I/O to complete", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_blocked%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Processes", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 27, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + 
"seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_context_switches{instance=\"$host\"}[$interval]) or irate(node_context_switches{instance=\"$host\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Context Switches", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Context Switches", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 28, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#D683CE", + "instance": "Interrupts" + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", 
+ "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_intr{instance=\"$host\"}[$interval]) or irate(node_intr{instance=\"$host\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Interrupts", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Interrupts", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "id": 21, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_network_receive_bytes{instance=\"$host\", device!=\"lo\"}[$interval]) or irate(node_network_receive_bytes{instance=\"$host\", device!=\"lo\"}[5m])", + 
"intervalFactor": 1, + "legendFormat": "Inbound: {{ device }}", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_network_transmit_bytes{instance=\"$host\", device!=\"lo\"}[$interval]) or irate(node_network_transmit_bytes{instance=\"$host\", device!=\"lo\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Outbound: {{ device }}", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Network Traffic", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": true, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "id": 22, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "sort": "min", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "sum(increase(node_network_receive_bytes{instance=\"$host\", device!=\"lo\"}[1h]))", + "interval": "1h", + "intervalFactor": 1, + "legendFormat": "Received", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 3600, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "sum(increase(node_network_transmit_bytes{instance=\"$host\", device!=\"lo\"}[1h]))", + "interval": "1h", + "intervalFactor": 1, + "legendFormat": "Sent", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 3600, + "target": "" + } + ], + "thresholds": [], + "timeFrom": "24h", + "timeShift": null, + "title": "Network Utilization Hourly", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "id": 23, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#584477", + "instance": "Used" + }, + { + "color": "#AEA2E0", + "instance": "Free" + } + ], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_SwapTotal{instance=\"$host\"} - node_memory_SwapFree{instance=\"$host\"}", + 
"intervalFactor": 1, + "legendFormat": "Used", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_SwapFree{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Free", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Swap", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 30, + "instanceColors": {}, + "legend": { + 
"alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pswpin{instance=\"$host\"}[$interval]) * 4096 or irate(node_vmstat_pswpin{instance=\"$host\"}[5m]) * 4096", + "intervalFactor": 1, + "legendFormat": "Swap In", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pswpout{instance=\"$host\"}[$interval]) * 4096 or irate(node_vmstat_pswpout{instance=\"$host\"}[5m]) * 4096", + "intervalFactor": 1, + "legendFormat": "Swap Out", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + 
], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Swap Activity", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "description": "Number of TCP sockets in state inuse.", + "fill": 1, + "id": 32, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "node_sockstat_TCP_inuse{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "TCP In Use", + "metric": "", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "TCP In Use", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": 
false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 31, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pgpgin{instance=\"$host\"}[$interval]) * 1024 or irate(node_vmstat_pgpgin{instance=\"$host\"}[5m]) * 1024", + "intervalFactor": 1, + "legendFormat": "Read", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 1, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pgpgout{instance=\"$host\"}[$interval]) * 1024 or irate(node_vmstat_pgpgout{instance=\"$host\"}[5m]) * 1024", + "intervalFactor": 1, + "legendFormat": "Write", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 1, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Throughput", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 35, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_io_time_ms{instance=\"$host\"}[1m]) / 1000", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": "node_disk_io_time_ms", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Util", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 36, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_io_now{instance=\"$host\"}[1m])", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": "node_disk_io_time_ms", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O in Progress", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 37, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_read_time_ms{instance=\"$host\"}[1m]) / rate(node_disk_reads_completed{instance=\"$host\"}[1m])", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": 
"node_disk_io_time_ms", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Average Read Time", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 38, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_write_time_ms{instance=\"$host\"}[1m]) / rate(node_disk_writes_completed{instance=\"$host\"}[1m])", + "intervalFactor": 1, + 
"legendFormat": "{{ device }}", + "metric": "node_disk_io_time_ms", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Average Write Time", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "I/O", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 33, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "node_filefd_allocated{instance=\"$host\"}", + "intervalFactor": 2, + "legendFormat": "Allocated File Descriptor", + "metric": "node_filefd_allocated", 
+ "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Allocated File Descriptor", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 34, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "node_filefd_maximum{instance=\"$host\"}", + "intervalFactor": 2, + "legendFormat": "Maximum File Descriptor", + "metric": "node_filefd_maximum", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Maximum File Descriptor", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + 
"list": [ + { + "allFormat": "glob", + "auto": true, + "auto_count": 200, + "auto_min": "1s", + "current": { + "text": "5s", + "value": "5s" + }, + "datasource": "test-cluster", + "hide": 0, + "includeAll": false, + "label": "Interval", + "multi": false, + "multiFormat": "glob", + "name": "interval", + "options": [ + { + "selected": false, + "text": "auto", + "value": "$__auto_interval" + }, + { + "selected": false, + "text": "1s", + "value": "1s" + }, + { + "selected": true, + "text": "5s", + "value": "5s" + }, + { + "selected": false, + "text": "1m", + "value": "1m" + }, + { + "selected": false, + "text": "5m", + "value": "5m" + }, + { + "selected": false, + "text": "1h", + "value": "1h" + }, + { + "selected": false, + "text": "6h", + "value": "6h" + }, + { + "selected": false, + "text": "1d", + "value": "1d" + } + ], + "query": "1s,5s,1m,5m,1h,6h,1d", + "refresh": 2, + "type": "interval" + }, + { + "allFormat": "glob", + "allValue": null, + "current": {}, + "datasource": "${DS_TEST-CLUSTER}", + "hide": 0, + "includeAll": false, + "label": "Host", + "multi": false, + "multiFormat": "regex values", + "name": "host", + "options": [], + "query": "label_values(node_boot_time,instance)", + "refresh": 1, + "refresh_on_load": false, + "regex": "", + "sort": 3, + "tagValuesQuery": "instance", + "tags": [], + "tagsQuery": "up", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": { + "collapse": false, + "enable": true, + "notice": false, + "now": true, + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "status": "Stable", + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ], + "type": "timepicker" + }, + "timezone": "browser", + "title": "TiDB Cluster - node", + "version": 0 +} diff --git a/v1.0/etc/overview.json b/v1.0/etc/overview.json new file mode 100755 index 0000000000000..c00ce7401ee46 --- 
/dev/null +++ b/v1.0/etc/overview.json @@ -0,0 +1,2747 @@ +{ + "__inputs": [ + { + "name": "DS_TIDB-CLUSTER", + "label": "${DS_TIDB-CLUSTER}", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": "Singlestat", + "version": "" + }, + { + "type": "panel", + "id": "table", + "name": "Table", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [], + "refresh": "30s", + "rows": [ + { + "collapse": false, + "height": 250, + "panels": [ + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": null, + "editable": true, + "error": false, + "format": "bytes", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": false + }, + "id": 27, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 4, + "sparkline": { + "fillColor": "rgba(77, 135, 25, 0.18)", + "full": true, + "lineColor": "rgb(21, 179, 65)", + "show": true + }, + "targets": [ + { + "expr": 
"pd_cluster_status{instance=\"$instance\",type=\"storage_capacity\"}", + "intervalFactor": 2, + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "Storage Capacity", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "format": "bytes", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "hideTimeOverride": false, + "id": 28, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 4, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": true, + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"storage_size\"}", + "intervalFactor": 2, + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "Current Storage Size", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "columns": [ + { + "text": "Current", + "value": "current" + } + ], + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fontSize": "120%", + "hideTimeOverride": false, + "id": 18, + "links": [], + "pageSize": null, + "repeat": 
null, + "scroll": false, + "showHeader": true, + "sort": { + "col": null, + "desc": false + }, + "span": 4, + "styles": [ + { + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "pattern": "Metric", + "sanitize": false, + "type": "string" + }, + { + "colorMode": "cell", + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "decimals": 0, + "pattern": "Current", + "thresholds": [ + "1", + "2" + ], + "type": "number", + "unit": "short" + } + ], + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_up_count\"}", + "interval": "", + "intervalFactor": 2, + "legendFormat": "Up Stores", + "metric": "pd_cluster_status", + "refId": "A", + "step": 2 + }, + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_down_count\"}", + "intervalFactor": 2, + "legendFormat": "Down Stores", + "refId": "B", + "step": 2 + }, + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_offline_count\"}", + "intervalFactor": 2, + "legendFormat": "Offline Stores", + "refId": "C", + "step": 2 + }, + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_tombstone_count\"}", + "intervalFactor": 2, + "legendFormat": "Tombstone Stores", + "refId": "D", + "step": 2 + } + ], + "title": "Store Status", + "transform": "timeseries_aggregations", + "transparent": false, + "type": "table" + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.8 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "avg(pd_cluster_status{type=\"storage_size\"}) / avg(pd_cluster_status{type=\"storage_capacity\"})", + "hide": false, + "intervalFactor": 4, + "legendFormat": "used ratio", + "refId": "B", + "step": 4 + }, + "params": [ + "B", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": 
"Storage used space is above 80%.", + "name": "Current Storage Usage alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 22, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "used ratio", + "yaxis": 2 + } + ], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"storage_size\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "storage size", + "refId": "A", + "step": 2 + }, + { + "expr": "avg(pd_cluster_status{type=\"storage_size\"}) / avg(pd_cluster_status{type=\"storage_capacity\"})", + "hide": false, + "intervalFactor": 4, + "legendFormat": "used ratio", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.8 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Current Storage Usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 23, + "legend": { + "alignAsTable": 
true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "intervalFactor": 2, + "legendFormat": "{{grpc_method}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% completed_cmds_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 24, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": true, + "targets": [ + { + "expr": 
"histogram_quantile(0.9999, sum(rate(grpc_server_handling_seconds_bucket{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "hide": true, + "intervalFactor": 2, + "legendFormat": "{{grpc_method}}", + "refId": "A", + "step": 4 + }, + { + "expr": "rate(grpc_server_handling_seconds_sum{instance=\"$instance\"}[30s]) / rate(grpc_server_handling_seconds_count{instance=\"$instance\"}[30s])", + "intervalFactor": 2, + "legendFormat": "{{grpc_method}}", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "average completed_cmds_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.2 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "min(pd_cluster_status{type=\"region_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + }, + "params": [ + "B", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Store balance ratio is high.", + "name": "Region Balance Ratio alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 4, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 26, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": false, 
+ "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"region_balance_ratio\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "store balance ratio", + "refId": "A", + "step": 2 + }, + { + "expr": "min(pd_cluster_status{type=\"region_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.2 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Region Balance Ratio", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.2 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "min(pd_cluster_status{type=\"leader_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + }, + "params": [ + "B", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Region leader balance ratio is high.", + "name": "Leader Balance Ratio alert", + "noDataState": "no_data", + 
"notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 4, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 25, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"leader_balance_ratio\"}", + "intervalFactor": 2, + "legendFormat": "leader max diff ratio", + "refId": "A", + "step": 2 + }, + { + "expr": "min(pd_cluster_status{type=\"leader_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.2 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Leader Balance Ratio", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "PD", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sort": "current", + "sortDesc": false, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.98, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket[30s])) by (type, le))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{type}} 98th percentile", + "refId": "A", + "step": 2 + }, + { + "expr": "avg(rate(pd_client_request_handle_requests_duration_seconds_sum[30s])) by (type) / avg(rate(pd_client_request_handle_requests_duration_seconds_count[30s])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}} average", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "handle_requests_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 2, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + 
"values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_query_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}} {{status}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "QPS", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 4, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "fill": 0, + "lines": false + } + ], + "span": 12, + "stack": true, + "steppedLine": true, + "targets": [ + { + "expr": "tidb_server_connections", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 2 + }, + { + "expr": "sum(tidb_server_connections)", + "intervalFactor": 2, + "legendFormat": "total", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": 
null, + "timeShift": null, + "title": "Connection Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": null, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(irate(tidb_executor_statement_node_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Statement Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 10 + ], + "type": "gt" + 
}, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 2 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query duration for 99th percentile is high.", + "name": "Query Duration 99th percentile alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 5, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "B", + "step": 2 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 10 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 99th percentile", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + 
"name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 2 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Schema lease error.", + "name": "Schema Lease Error alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 6, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 2 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + 
"title": "Schema Lease Error Rate", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "TiDB", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 299, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_command_duration_seconds_bucket[1m])) by (le,type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_scheduler_command_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% scheduler command duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + 
"format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 8, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_scheduler_command_duration_seconds_bucket[1m])) by (le,type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% scheduler command duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 9, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": 
"flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_storage_engine_async_request_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% storage async request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 10, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% storage async request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + 
"type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_server_report_failure_msg_total[1m])) by (type,instance,job,store_id)", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{type}}-to-{{store_id}}", + "metric": "tikv_server_raft_store_msg_total", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "server report failure msg", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 12, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_sent_message_total{type=\"vote\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}-vote", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "vote", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 13, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req!=\"\"}[1m])) by (le,type,req))", + "intervalFactor": 2, + "legendFormat": 
"{{type}}-{{req}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% coprocessor request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 14, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req!=\"\"}[1m])) by (le,type,req))", + "intervalFactor": 2, + "legendFormat": "{{type}}-{{req}}", + "metric": "tikv_coprocessor_request_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": 
null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 15, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 8, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_worker_pending_task_total[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Pending Task", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "fill": 1, + "id": 16, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_engine_stall_micro_seconds[30s])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "stall", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tikv_channel_full_total[1m])) by (type, job)", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{type}}", + "metric": "", + "refId": "A", + "step": 2 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV channel full", + "name": "TiKV channel full alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 3, + "grid": {}, + "id": 17, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 5, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_channel_full_total[1m])) by (type, job)", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{type}}", + "metric": "", + "refId": "A", + "step": 2 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "channel full", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 20, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 7, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"leader\"}) by (instance,job)", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 2 + }, + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"leader\"}) ", + "hide": true, + "intervalFactor": 2, + "legendFormat": "total", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "leader", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + 
"value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 19, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 5, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_pd_msg_send_duration_seconds_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% send_message_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 21, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + 
"show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 7, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"region\"}) by (job,instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "TiKV", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "allValue": null, + "current": {}, + "datasource": "${DS_TIDB-CLUSTER}", + "hide": 0, + "includeAll": false, + "label": null, + "multi": false, + "name": "instance", + "options": [], + "query": "label_values(pd_cluster_status, instance)", + "refresh": 1, + "regex": "", + "sort": 0, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-5m", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "TiDB Cluster - Overview", + 
"version": 1 +} diff --git a/v1.0/etc/pd.json b/v1.0/etc/pd.json new file mode 100755 index 0000000000000..02da3d1317660 --- /dev/null +++ b/v1.0/etc/pd.json @@ -0,0 +1,3556 @@ +{ + "style": "dark", + "rows": [ + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "Cluster", + "height": "300px", + "repeatRowId": null, + "panels": [ + { + "id": 55, + "title": "PD Role", + "span": 2, + "type": "singlestat", + "targets": [ + { + "refId": "A", + "expr": "delta(pd_server_tso{type=\"save\",instance=\"$instance\"}[15s])", + "intervalFactor": 2, + "metric": "pd_server_tso", + "step": 60, + "legendFormat": "" + } + ], + "links": [], + "datasource": "${DS_TIDB-CLUSTER}", + "maxDataPoints": 100, + "interval": null, + "cacheTimeout": null, + "format": "none", + "prefix": "", + "postfix": "", + "nullText": null, + "valueMaps": [ + { + "value": "null", + "op": "=", + "text": "N/A" + } + ], + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "rangeMaps": [ + { + "from": "1", + "to": "100000", + "text": "Leader" + }, + { + "from": "0", + "to": "1", + "text": "Follower" + } + ], + "mappingType": 2, + "nullPointMode": "connected", + "valueName": "current", + "prefixFontSize": "50%", + "valueFontSize": "50%", + "postfixFontSize": "50%", + "thresholds": "", + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "sparkline": { + "show": false, + "full": false, + "lineColor": "rgb(31, 120, 193)", + "fillColor": "rgba(31, 118, 189, 0.18)" + }, + "gauge": { + "show": false, + "minValue": 0, + "maxValue": 100, + "thresholdMarkers": true, + "thresholdLabels": false + } + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + 
"thresholds": "", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": false, + "maxValue": 100, + "minValue": 0 + }, + "id": 10, + "maxDataPoints": 100, + "mappingType": 1, + "span": 2, + "colorBackground": false, + "title": "Storage Capacity", + "sparkline": { + "full": true, + "fillColor": "rgba(77, 135, 25, 0.18)", + "lineColor": "rgb(21, 179, 65)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\",namespace=~\"$namespace\",type=\"storage_capacity\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "decbytes", + "editable": true, + "cacheTimeout": null, + "postfix": "", + "decimals": null, + "interval": null, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": false + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": true, + "maxValue": 100, + "minValue": 0 + }, + "id": 38, + "maxDataPoints": 100, + "mappingType": 1, + "span": 2, + "colorBackground": false, + "title": "Current Storage Size", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": 
"sum(pd_cluster_status{instance=\"$instance\",namespace=~\"$namespace\",type=\"storage_size\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "decbytes", + "editable": true, + "hideTimeOverride": false, + "postfix": "", + "decimals": 1, + "interval": null, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "cacheTimeout": null, + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": false + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": false, + "maxValue": 100, + "minValue": 0 + }, + "id": 20, + "maxDataPoints": 100, + "mappingType": 1, + "span": 2, + "colorBackground": false, + "title": "Number of Regions", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\",namespace=~\"$namespace\",type=\"region_count\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "none", + "editable": true, + "cacheTimeout": null, + "postfix": "", + "interval": null, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": false 
+ }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "0.01,0.5", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": true, + "maxValue": 1, + "minValue": 0 + }, + "id": 37, + "maxDataPoints": 100, + "mappingType": 1, + "span": 1, + "colorBackground": false, + "title": "Leader Balance Ratio", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "1 - min(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"leader\"}) / max(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"leader\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "percentunit", + "editable": true, + "hideTimeOverride": false, + "postfix": "", + "interval": null, + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "cacheTimeout": null, + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": true + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "0.05,0.5", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": true, + 
"maxValue": 1, + "minValue": 0 + }, + "id": 36, + "maxDataPoints": 100, + "mappingType": 1, + "span": 1, + "colorBackground": false, + "title": "Region Balance Ratio", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "1 - min(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"region\"}) / max(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"region\"})", + "step": 60, + "refId": "A", + "legendFormat": "" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "percentunit", + "editable": true, + "cacheTimeout": null, + "postfix": "", + "decimals": null, + "interval": null, + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": true + }, + { + "sort": { + "col": null, + "desc": false + }, + "styles": [ + { + "pattern": "Metric", + "type": "string", + "sanitize": false, + "dateFormat": "YYYY-MM-DD HH:mm:ss" + }, + { + "colorMode": "cell", + "thresholds": [ + "1", + "2" + ], + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "type": "number", + "pattern": "Current", + "decimals": 0, + "unit": "short" + } + ], + "repeat": null, + "span": 2, + "pageSize": null, + "links": [], + "title": "Store Status", + "editable": true, + "transform": "timeseries_aggregations", + "showHeader": true, + "scroll": false, + "targets": [ + { + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\", type=\"store_up_count\"})", + "metric": "pd_cluster_status", + "interval": "", + "step": 20, + "legendFormat": "Up Stores", + "intervalFactor": 2, + "refId": "A" + }, + { + 
"intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_disconnected_count\"})", + "step": 20, + "refId": "B", + "legendFormat": "Disconnect Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_low_space_count\"})", + "step": 20, + "refId": "C", + "legendFormat": "LowSpace Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_down_count\"})", + "step": 20, + "refId": "D", + "legendFormat": "Down Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_offline_count\"})", + "step": 20, + "refId": "E", + "legendFormat": "Offline Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_tombstone_count\"})", + "step": 20, + "refId": "F", + "legendFormat": "Tombstone Stores" + } + ], + "transparent": false, + "hideTimeOverride": false, + "fontSize": "100%", + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "type": "table", + "id": 39, + "columns": [ + { + "text": "Current", + "value": "current" + } + ] + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 0.8, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 1, + "steppedLine": false, + "id": 9, + "fill": 0, + "span": 4, + "title": "Current Storage Usage", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "targets": [ + { + "hide": false, + "expr": "pd_cluster_status{instance=\"$instance\", 
namespace=~\"$namespace\", type=\"storage_size\"}", + "step": 10, + "legendFormat": "storage size", + "intervalFactor": 2, + "refId": "A" + }, + { + "hide": false, + "expr": "avg(pd_cluster_status{type=\"storage_size\", namespace=~\"$namespace\"}) / avg(pd_cluster_status{type=\"storage_capacity\", namespace=~\"$namespace\"})", + "step": 20, + "legendFormat": "used ratio", + "intervalFactor": 4, + "refId": "B" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "decbytes", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "percentunit", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [ + { + "alias": "used ratio", + "yaxis": 2 + } + ], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "alert": { + "noDataState": "no_data", + "name": "Current Storage Usage alert", + "frequency": "60s", + "notifications": [], + "handler": 1, + "executionErrorState": "alerting", + "message": "Storage used space is above 80%.", + "conditions": [ + { + "operator": { + "type": "and" + }, + "query": { + "params": [ + "B", + "5m", + "now" + ], + "model": { + "hide": false, + "expr": "avg(pd_cluster_status{type=\"storage_size\"}) / avg(pd_cluster_status{type=\"storage_capacity\"})", + "step": 20, + "legendFormat": "used ratio", + "intervalFactor": 4, + "refId": "B" + }, + "datasourceId": 1 + }, + "evaluator": { + "type": "gt", + "params": [ + 0.8 + ] + }, + "reducer": { + "type": "avg", + "params": [] + }, + "type": "query" + } + ] + }, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 2 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 2, + "steppedLine": false,
+ "id": 18, + "fill": 1, + "span": 4, + "title": "Current Regions Count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "total": false, + "show": false, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\", type=\"region_count\"})", + "step": 10, + "refId": "A", + "legendFormat": "count" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "none", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "none", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": null + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 27, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(delta(pd_schedule_operators_count{instance=\"$instance\"}[1m])) by (type)", + "step": 10, + "refId": "A", + "legendFormat": "{{type}}" + } + ], + "fill": 1, + "span": 4, + "title": "Schedule operators count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": false, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "opm", + "min": 
"0", + "label": "operation/minute" + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 0.2, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 2, + "steppedLine": false, + "id": 40, + "fill": 1, + "span": 6, + "title": "Store leader score", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "alignAsTable": true, + "total": false, + "show": false, + "max": true, + "min": true, + "current": true, + "values": false, + "avg": false + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"leader\"}", + "step": 10, + "refId": "A", + "legendFormat": "tikv-{{store}}" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": false, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 4 + }, + { + "bars": false, + 
"timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 0.2, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 2, + "steppedLine": false, + "id": 41, + "fill": 1, + "span": 6, + "title": "Store region score", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "alignAsTable": true, + "total": false, + "show": false, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "targets": [ + { + "hide": false, + "expr": "pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"region\"}", + "step": 10, + "legendFormat": "store balance ratio", + "intervalFactor": 2, + "refId": "A" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 4 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "Scheduler", + "height": 288, + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 45, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(delta(pd_schedule_operators_count{instance=\"$instance\"}[1m])) by (type,state)", + "step": 10,
"refId": "A", + "legendFormat": "{{type}}-{{state}}" + } + ], + "fill": 1, + "span": 4, + "title": "Schedule operators count with state", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 47, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "pd_scheduler_status{type=\"limit\",instance=\"$instance\"}", + "metric": "pd_scheduler_status", + "step": 10, + "legendFormat": "{{kind}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 0, + "span": 4, + "title": "Scheduler limit", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false, + "sortDesc": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + 
"seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 46, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "pd_scheduler_status{type=\"allow\",instance=\"$instance\"}", + "metric": "pd_scheduler_status", + "step": 10, + "legendFormat": "{{kind}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 0, + "span": 4, + "title": "Scheduler allow", + "tooltip": { + "sort": 1, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "hideEmpty": true, + "values": false, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 1 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 50, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"hot_write_region_as_leader\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{store}}" + } + ], + "fill": 0, + "span": 6, + "title": "Hot region's leader distribution", + "tooltip": { + "sort": 
0, + "shared": true, + "value_type": "cumulative" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 51, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"hot_write_region_as_peer\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{store}}" + } + ], + "fill": 0, + "span": 6, + "title": "Hot region's peer distribution", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": 
"${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 48, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"total_written_bytes_as_leader\"}", + "metric": "pd_hotspot_status", + "step": 10, + "legendFormat": "{{store}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "Hot region's leader written bytes", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "bytes", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 49, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"total_written_bytes_as_peer\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{store}}" + } + ], + "fill": 1, + "span": 6, + "title": "Hot region's peer written bytes", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + 
"values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "decbytes", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "fill": 1, + "id": 52, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(delta(pd_scheduler_event_count{instance=\"$instance\", type=\"balance-leader-scheduler\"}[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "pd_scheduler_event_count", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Balance leader scheduler", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TIDB-CLUSTER}", + "fill": 1, + "id": 53, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(delta(pd_scheduler_event_count{instance=\"$instance\", type=\"balance-region-scheduler\"}[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "pd_scheduler_event_count", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Balance region scheduler", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "PD", + "height": "300px", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "linewidth": 1, + "steppedLine": false, + "id": 1, + "fill": 1, + "span": 6, + "title": "completed commands rate", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + 
"alignAsTable": true, + "avg": false, + "hideZero": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(grpc_server_handling_seconds_count{instance=\"$instance\"}[1m])) by (grpc_method)", + "step": 10, + "refId": "A", + "legendFormat": "{{grpc_method}}" + } + ], + "yaxes": [ + { + "logBase": 10, + "show": true, + "max": null, + "format": "ops", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": null + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 2, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "step": 10, + "refId": "A", + "legendFormat": "{{grpc_method}}" + } + ], + "fill": 0, + "span": 6, + "title": "99% completed_cmds_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "sortDesc": true, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } 
+ ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 23, + "linewidth": 1, + "steppedLine": true, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.9999, sum(rate(grpc_server_handling_seconds_sum{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "step": 4, + "legendFormat": "{{grpc_method}}", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "rate(grpc_server_handling_seconds_sum{instance=\"$instance\"}[30s]) / rate(grpc_server_handling_seconds_count{instance=\"$instance\"}[30s])", + "step": 10, + "refId": "B", + "legendFormat": "{{grpc_method}}" + } + ], + "fill": 0, + "span": 6, + "title": "average completed_cmds_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "sortDesc": true, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + 
"aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "lt", + "value": 0.1, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "id": 44, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "delta(etcd_disk_wal_fsync_duration_seconds_count[1m])", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}} etch disk wal fsync rate" + } + ], + "fill": 1, + "span": 6, + "title": "etch disk wal fsync rate", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "opm", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "alert": { + "noDataState": "no_data", + "name": "etch disk fsync", + "frequency": "60s", + "notifications": [], + "handler": 1, + "executionErrorState": "alerting", + "message": "PD etcd disk fsync is down", + "conditions": [ + { + "operator": { + "type": "and" + }, + "query": { + "params": [ + "A", + "1m", + "now" + ], + "model": { + "intervalFactor": 2, + "expr": "delta(etcd_disk_wal_fsync_duration_seconds_count[1m])", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}} etch disk wal fsync rate" + }, + "datasourceId": 1 + }, + "evaluator": { + "type": "lt", + "params": [ + 0.1 + ] + }, + "reducer": { + "type": "min", + "params": [] + }, + "type": "query" + } + ] + }, + "stack": false, + "timeShift": null, + 
"aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 1 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "Etcd", + "height": "300px", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 5, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(pd_txn_handle_txns_duration_seconds_count[5m])) by (instance, result)", + "step": 4, + "refId": "A", + "legendFormat": "{{instance}} : {{result}}" + } + ], + "fill": 1, + "span": 12, + "title": "handle_txns_count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 6, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "histogram_quantile(0.99, sum(rate(pd_txn_handle_txns_duration_seconds_bucket[5m])) 
by (instance, result, le))", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}} {{result}}" + } + ], + "fill": 1, + "span": 6, + "title": "99% handle_txns_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "connected", + "renderer": "flot", + "id": 24, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.9999, sum(rate(pd_txn_handle_txns_duration_seconds_bucket[1m])) by (instance, result, le))", + "step": 4, + "legendFormat": "{{instance}} : {{result}}", + "intervalFactor": 2, + "refId": "A" + }, + { + "hide": false, + "expr": "rate(pd_txn_handle_txns_duration_seconds_sum[30s]) / rate(pd_txn_handle_txns_duration_seconds_count[30s])", + "interval": "", + "step": 10, + "legendFormat": "{{instance}} average", + "intervalFactor": 2, + "refId": "B" + } + ], + "fill": 1, + "span": 6, + "title": "average handle_txns_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": 
true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 7, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "99% wal_fsync_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": 
false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "connected", + "renderer": "flot", + "id": 25, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.9999, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[1m])) by (instance, le))", + "metric": "", + "step": 4, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "rate(etcd_disk_wal_fsync_duration_seconds_sum[30s]) / rate(etcd_disk_wal_fsync_duration_seconds_count[30s])", + "step": 10, + "refId": "B", + "legendFormat": "{{instance}} average" + } + ], + "fill": 1, + "span": 6, + "title": "average wal_fsync_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + 
"timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 34, + "linewidth": 2, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(etcd_network_peer_round_trip_time_seconds_bucket[5m])) by (instance, le))", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "99% peer_round_trip_time_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 35, + "linewidth": 2, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(etcd_network_peer_round_trip_time_seconds_bucket[5m])) by (instance, le))", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "99.99% peer_round_trip_time_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + 
"msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "TiDB", + "height": "300", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 28, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(pd_client_request_handle_requests_duration_seconds_count[1m])) by (type)", + "step": 4, + "refId": "A", + "legendFormat": "{{type}}" + } + ], + "fill": 1, + "span": 12, + "title": "handle_requests_count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, 
+ "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 29, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": false, + "expr": "histogram_quantile(0.98, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket[30s])) by (type, le))", + "step": 4, + "legendFormat": "{{type}} 98th percentile", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "avg(rate(pd_client_request_handle_requests_duration_seconds_sum[30s])) by (type) / avg(rate(pd_client_request_handle_requests_duration_seconds_count[30s])) by (type)", + "step": 4, + "refId": "B", + "legendFormat": "{{type}} average" + } + ], + "fill": 1, + "span": 12, + "title": "handle_requests_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "sort": "current", + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "sortDesc": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + 
"grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "TiKV", + "height": "300", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 31, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(tikv_pd_msg_send_duration_seconds_count[1m]))", + "step": 4, + "refId": "A", + "legendFormat": "" + } + ], + "fill": 1, + "span": 12, + "title": "send_message_count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": false, + "total": false, + "min": false, + "max": false, + "show": false, + "current": false, + "values": false, + "alignAsTable": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 32, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "histogram_quantile(0.95, sum(rate(tikv_pd_msg_send_duration_seconds_bucket[30s])) by (le))", + "step": 10, + 
"refId": "A", + "legendFormat": "" + } + ], + "fill": 1, + "span": 6, + "title": "95% send_message_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": false, + "total": false, + "min": false, + "max": false, + "show": false, + "current": false, + "values": false, + "alignAsTable": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 33, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.98, sum(rate(tikv_pd_msg_send_duration_seconds_bucket[60s])) by (type, le))", + "step": 4, + "legendFormat": "98th percentile", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "rate(tikv_pd_msg_send_duration_seconds_sum[30s]) / rate(tikv_pd_msg_send_duration_seconds_count[30s])", + "step": 10, + "refId": "B", + "legendFormat": "{{job}}" + } + ], + "fill": 0, + "span": 6, + "title": "send_message_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": true, + "show": true, + "current": false, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": 
false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 54, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": false, + "expr": "sum(rate(pd_scheduler_region_heartbeat{instance=\"$instance\"}[1m])) by (store, type, status)", + "step": 4, + "legendFormat": "store{{store}}-{{type}}-{{status}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 0, + "span": 6, + "title": "Region heartbeat", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": true, + "show": true, + "current": false, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "ops", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": 
"${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h6", + "repeatIteration": null, + "title": "Nodes", + "height": "", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 4, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "id": 42, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "node_load1{job=\"tikv-node\"}", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "TiKV Node Load", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "alert": { + "noDataState": "no_data", + "name": "TiKV Node Load alert", + "frequency": "60s", + "notifications": [], + "handler": 1, + "executionErrorState": "alerting", + "message": "TiKV is under high load", + "conditions": [ + { + "operator": { + "type": "and" + }, + "query": { + "params": [ + "A", + "5m", + "now" + ], + "model": { + "expr": "node_load1{job=\"tikv-node\"}", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + }, + "datasourceId": 1 + }, + "evaluator": { + "type": "gt", + "params": [ + 4 + ] + }, + "reducer": { + "type": "avg", + "params": [] + }, + "type": "query" + } + ] + 
}, + "stack": true, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 2 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 8, + "fill": true + }, + { + "colorMode": "warning", + "line": true, + "op": "gt", + "value": 4, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "id": 43, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "node_load1{job=\"tidb-node\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}}" + } + ], + "fill": 1, + "span": 6, + "title": "TiDB Node Load", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": true + } + ], + "editMode": false, + "links": [ + { + "tags": [], + "type": "dashboards", + "icon": "external link" + } + ], + "tags": [], + "graphTooltip": 1, + "hideControls": false, + "title": "TiDB Cluster - pd", + "editable": true, + "refresh": "30s", + "id": null, + "gnetId": null, + "timepicker": { + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ], + "refresh_intervals": [ + "5s", + 
"10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ] + }, + "__inputs": [ + { + "description": "", + "pluginName": "Prometheus", + "label": "tidb-cluster", + "pluginId": "prometheus", + "type": "datasource", + "name": "DS_TIDB-CLUSTER" + } + ], + "version": 18, + "time": { + "to": "now", + "from": "now-1h" + }, + "__requires": [ + { + "version": "4.0.1", + "type": "grafana", + "id": "grafana", + "name": "Grafana" + }, + { + "version": "1.0.0", + "type": "datasource", + "id": "prometheus", + "name": "Prometheus" + } + ], + "timezone": "browser", + "schemaVersion": 14, + "annotations": { + "list": [] + }, + "templating": { + "list": [ + { + "regex": "", + "sort": 0, + "multi": false, + "hide": 0, + "name": "instance", + "tags": [], + "allValue": null, + "tagValuesQuery": null, + "refresh": 1, + "label": null, + "current": {}, + "datasource": "${DS_TIDB-CLUSTER}", + "type": "query", + "query": "label_values(pd_cluster_status, instance)", + "useTags": false, + "tagsQuery": null, + "options": [], + "includeAll": false + }, + { + "allValue": ".*", + "current": {}, + "datasource": "${DS_TIDB-CLUSTER}", + "hide": 0, + "includeAll": true, + "label": "Namespace", + "multi": false, + "name": "namespace", + "options": [], + "query": "label_values(pd_cluster_status{instance=\"$instance\"}, namespace)", + "refresh": 1, + "regex": "", + "sort": 1, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + } +} diff --git a/v1.0/etc/tidb.json b/v1.0/etc/tidb.json new file mode 100755 index 0000000000000..7d722d5d49b68 --- /dev/null +++ b/v1.0/etc/tidb.json @@ -0,0 +1,3629 @@ +{ + "__inputs": [ + { + "name": "DS_TEST-CLUSTER", + "label": "test-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + 
"name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [ + { + "icon": "external link", + "tags": [], + "type": "dashboards" + } + ], + "refresh": "30s", + "rows": [ + { + "collapse": false, + "height": "240", + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.80, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "10s", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query processing time is abnormal!", + "name": "Query Seconds 80 alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 23, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.80, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + { + "expr": "histogram_quantile(0.80, 
sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "B", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 80th percentile", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.95, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "10s", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query duration at 95th percentile is high.", + "name": "Query Duration 95th percentile alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1, + "legend": { + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": 
false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{ instance }}", + "refId": "B", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 95th percentile", + "tooltip": { + "msResolution": true, + "shared": false, + "sort": 0, + "value_type": "cumulative" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [ + "max" + ] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 10 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query duration for 99th percentile is high.", + "name": "Query Duration 99th percentile alert", + "noDataState": "no_data", + 
"notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 25, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "B", + "step": 60 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 10 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 99th percentile", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "irate(tidb_server_handle_query_duration_seconds_sum[30s]) / irate(tidb_server_handle_query_duration_seconds_count[30s])", + "intervalFactor": 2, + 
"legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Average query duration is high.", + "name": "Average Query Duration alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 37, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": false, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(tidb_server_handle_query_duration_seconds_sum[30s]) / irate(tidb_server_handle_query_duration_seconds_count[30s])", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + }, + { + "expr": "sum(irate(tidb_server_handle_query_duration_seconds_sum[30s])) / sum(irate(tidb_server_handle_query_duration_seconds_count[30s]))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "B", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Average Query Duration", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": 
null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 2, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 8, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_query_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}} {{type}} {{status}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "QPS", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 42, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": false, + "show": true, + "sideWidth": 250, + "sort": "max", + 
"sortDesc": false, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_server_query_total[1m])) by (status)", + "intervalFactor": 2, + "legendFormat": "query {{status}}", + "refId": "A", + "step": 60 + }, + { + "expr": "sum(rate(tidb_server_query_total{status=\"OK\"}[1m] offset 1d))", + "intervalFactor": 3, + "legendFormat": "yesterday", + "refId": "B", + "step": 90 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "QPS Total", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": null, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 21, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 8, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(irate(tidb_executor_statement_node_total[1m])) by (type)", + "intervalFactor": 2, 
+ "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Statement Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Query", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 8, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "fill": 0, + "lines": false + } + ], + "span": 6, + "stack": true, + "steppedLine": true, + "targets": [ + { + "expr": "tidb_server_connections", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "refId": "A", + "step": 30 + }, + { + "expr": "sum(tidb_server_connections)", + "intervalFactor": 2, + "legendFormat": "total", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Connection Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, 
+ "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1000000000 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "go_memstats_heap_inuse_bytes{job=~\"tidb.*\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "metric": "go_memstats_heap_inuse_bytes", + "refId": "B", + "step": 30 + }, + "params": [ + "B", + "10s", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB mem heap is over 1GiB", + "name": "TiDB Heap Memory Usage alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "go_memstats_heap_inuse_bytes{job=~\"tidb.*\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "metric": "go_memstats_heap_inuse_bytes", + "refId": "B", + "step": 30 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": 
"gt", + "value": 1000000000 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Heap Memory Usage", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Query", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 12, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [ + { + "type": "dashboard" + } + ], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_distsql_handle_query_duration_seconds_bucket[1m])) by (le))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "", + "metric": "tidb_distsql_handle_query_duration_seconds_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Distsql Seconds 99", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + 
"logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 14, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_distsql_query_total [1m]))", + "intervalFactor": 2, + "legendFormat": "", + "metric": "tidb_distsql_query_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Distsql QPS", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Distsql", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 40, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "rightSide": true, + 
"show": true, + "sort": "total", + "sortDesc": true, + "total": true, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_cop_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Coprocessor Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 41, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(tidb_tikvclient_cop_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Coprocessor Seconds 999", + 
"tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Coprocessor", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 5, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_txn_cmd_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Cmd Count", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": 
true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 4, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_txn_total[1m])) by (instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Txn Count", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 6, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, 
sum(rate(tidb_tikvclient_backoff_seconds_bucket[1m])) by (instance, le))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Retry Seconds 9999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 30, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tidb_tikvclient_request_seconds_bucket[1m])) by (le, instance, type))", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Request Seconds 9999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + 
{ + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 18, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_tikvclient_txn_cmd_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Cmd Seconds 99", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 22, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": 
"flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(tidb_tikvclient_txn_cmd_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Cmd Seconds 9999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 44, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.90, sum(rate(tidb_tikvclient_txn_regions_num_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "90 Txn regions count", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "KV", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 33, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "avg", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1, sum(rate(tidb_tikvclient_txn_write_kv_count_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Count Per Txn", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 34, + "legend": { + "alignAsTable": true, + "avg": false, + "current": 
false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "avg", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1, sum(rate(tidb_tikvclient_txn_write_size_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Size Per Txn", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tidb_tikvclient_region_err_total[1m])) by (type, instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_session_execute_parse_duration_count", + "refId": "A", + "step": 30 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB report 'server is busy'", + "name": "TiDB TiClient Region Error alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_region_err_total[1m])) by (type, instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_session_execute_parse_duration_count", + "refId": "A", + "step": 30 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "TiClient Region Error", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 32, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, 
+ "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_lock_resolver_actions_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tidb_tikvclient_lock_resolver_actions_total", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "LockResolve", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "KV 2", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 20, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(pd_client_cmd_handle_cmds_duration_seconds_bucket[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client cmd count", + "tooltip": { + "msResolution": false, + "shared": true, + 
"sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 35, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(pd_client_cmd_handle_cmds_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client cmd duration 999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 45, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + 
"lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client request duration 999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 43, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(pd_client_cmd_handle_failed_cmds_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client cmd fail", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + 
"xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "PD Client", + "titleSize": "h6" + }, + { + "collapse": true, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 5 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "tidb_domain_load_schema_duration_sum / tidb_domain_load_schema_duration_count", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "metric": "", + "refId": "A", + "step": 10 + }, + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB load schema latency is over 5s", + "name": "Load Schema Duration alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 27, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "tidb_domain_load_schema_duration_sum / tidb_domain_load_schema_duration_count", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + 
"metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 5 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Load Schema Duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "rate(tidb_domain_load_schema_total{type='failed'}[1m])", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}} failed", + "refId": "B", + "step": 10 + }, + "params": [ + "B", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB load schema fails", + "name": "Load schema alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 28, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "/.*failed/", + "bars": true + } + ], + "span": 4, + "stack": false, + 
"steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_domain_load_schema_total{type='succ'}[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} succ", + "metric": "tidb_domain_load_schema_duration_count", + "refId": "A", + "step": 10 + }, + { + "expr": "rate(tidb_domain_load_schema_total{type='failed'}[1m])", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}} failed", + "refId": "B", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Load Schema QPS", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 10 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Schema lease error.", + "name": "Schema Lease Error alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 29, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Schema Lease Error Rate", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Schema Load", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 9, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, 
+ "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_ddl_handle_job_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "ddl handle job duration", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "DDL Seconds 95", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": true, + "sortDesc": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_ddl_batch_add_or_del_data_succ_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "ddl batch", + "metric": "tidb_ddl_ba", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "DDL Batch Seconds 95", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + 
"logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 36, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_server_session_retry_count[1m]))", + "intervalFactor": 2, + "legendFormat": "session retry", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Session Retry", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 38, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": true, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + 
"renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_backoff_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Backoff Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "DDL", + "titleSize": "h6" + }, + { + "collapse": true, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 46, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_statistics_auto_analyze_duration_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "auto analyze duration", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Auto Analyze Seconds 95", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + 
}, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 47, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_statistics_auto_analyze_total{type='succ'}[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} succ", + "refId": "A", + "step": 30 + }, + { + "expr": "rate(tidb_statistics_auto_analyze_total{type='failed'}[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} failed", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Auto Analyze QPS", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Statistics", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-6h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + 
"1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "TiDB Cluster - tidb", + "version": 0 +} \ No newline at end of file diff --git a/v1.0/etc/tikv.json b/v1.0/etc/tikv.json new file mode 100755 index 0000000000000..09666f8d1c25d --- /dev/null +++ b/v1.0/etc/tikv.json @@ -0,0 +1,11914 @@ +{ + "__inputs": [ + { + "name": "DS_TEST-CLUSTER", + "label": "test-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": "Singlestat", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [ + { + "icon": "external link", + "tags": [], + "type": "dashboards" + } + ], + "refresh": "1m", + "rows": [ + { + "collapse": false, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 34, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(tikv_pd_heartbeat_tick_total{type=\"leader\"}) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "leader", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 37, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"region\"}) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": 
null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 3, + "grid": {}, + "id": 33, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_engine_size_bytes) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "cf size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 5, + "grid": {}, + "id": 56, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 0, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + 
"stack": true, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_engine_size_bytes) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "store size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tikv_channel_full_total[1m])) by (job, type)", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}}", + "metric": "", + "refId": "A", + "step": 10 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV channel full", + "name": "TiKV channel full alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 3, + "grid": {}, + "id": 22, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + 
"seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_channel_full_total[1m])) by (job, type)", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "channel full", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 18, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_server_report_failure_msg_total[1m])) by (type,instance,job,store_id)", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}} - to - {{store_id}}", + "metric": "tikv_server_raft_store_msg_total", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "server report failures", + 
"tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 57, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_region_written_keys_sum[1m])) by (job) / sum(rate(tikv_region_written_keys_count[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_region_written_keys_bucket", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region average written keys", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": 
true, + "error": false, + "fill": 1, + "grid": {}, + "id": 58, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_region_written_bytes_sum[1m])) by (job) / sum(rate(tikv_region_written_bytes_count[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_regi", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region average written bytes", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 75, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + 
{ + "expr": "sum(rate(tikv_region_written_keys_count[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_region_written_keys_bucket", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "active written leaders", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 1481, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1.0, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "C", + "step": 10 
+ }, + { + "expr": "sum(rate(tikv_raftstore_region_size_sum[1m])) / sum(rate(tikv_raftstore_region_size_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "approximate region size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Server", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 1164, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le))", + "intervalFactor": 2, + 
"legendFormat": "95%", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_raft_process_duration_secs_sum{type='tick'}[1m])) / sum(rate(tikv_raftstore_raft_process_duration_secs_count{type='tick'}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "raft process tick duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 1165, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "C", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + 
"title": "95% raft process tick duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "max" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV raft process ready duration 99th percentile is above 1s", + "name": "TiKV raft process ready duration 99th percentile alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 12, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, 
sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_raft_process_duration_secs_sum{type='ready'}[1m])) / sum(rate(tikv_raftstore_raft_process_duration_secs_count{type='ready'}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "raft process ready duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 118, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, 
sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "C", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "95% raft process ready duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 5, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_ready_handled_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_raft_ready_handled_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft ready handled", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + 
{ + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 108, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_apply_proposal_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft proposals per ready", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 76, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": 
false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=~\"conf_change|transfer_leader\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_proposal_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft admin proposals", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=~\"local_read|normal|read_index\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_proposal_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft read/write proposals", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + 
"value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 119, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=~\"local_read|read_index\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft read proposals per server", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 120, + "legend": { + "alignAsTable": true, + "avg": false, + "current": 
true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=\"normal\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_proposal_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft write proposals per server", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 72, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_raftstore_log_lag_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_log_lag_bucket", 
+ "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% raft log lag", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 73, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_raftstore_propose_log_size_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_propose_log_size_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% raft log size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + 
}, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 77, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_admin_cmd_total{status=\"success\", type!=\"compact\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_admin_cmd_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft admin commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 21, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + 
"stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_admin_cmd_total{status=\"success\", type=\"compact\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_admin_cmd_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft compact commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 70, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_check_split_total{type!=\"ignore\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_check_split_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "check split", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + 
"yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 71, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_raftstore_check_split_duration_seconds_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_check_split_duration_seconds_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% check split duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + 
"rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_sent_message_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft sent messages", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 106, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_server_raft_message_recv_total[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft recv messages per server", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": 
"graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 25, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_sent_message_total{type=\"vote\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "vote", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1309, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, 
+ "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_dropped_message_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft dropped messages", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Raft", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 31, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 99%", + "metric": "", + "refId": 
"A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_apply_log_duration_seconds_sum[1m])) / sum(rate(tikv_raftstore_apply_log_duration_seconds_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "apply log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 32, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": " {{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% apply log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": 
"graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 39, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 99%", + "metric": "", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_append_log_duration_seconds_sum[1m])) / sum(rate(tikv_raftstore_append_log_duration_seconds_count[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "append log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": 
"short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 40, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}} ", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% append log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 41, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, 
+ "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_request_wait_time_duration_secs_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "tikv_raftstore_request_wait_time_duration_secs_bucket", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_request_wait_time_duration_secs_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_request_wait_time_duration_secs_sum[1m])) / sum(rate(tikv_raftstore_request_wait_time_duration_secs_count[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% request wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 42, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_request_wait_time_duration_secs_bucket[1m])) by (le, 
job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% request wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Raft Ready", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 2, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_storage_command_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage command total", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + 
"format": "ops", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 8, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_storage_engine_async_request_total{status!~\"all|success\"}[1m])) by (status)", + "intervalFactor": 2, + "legendFormat": "{{status}}", + "metric": "tikv_raftstore_raft_process_duration_secs_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async request error", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 15, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + 
"max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"snapshot\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"snapshot\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_engine_async_request_duration_seconds_sum{type=\"snapshot\"}[1m])) / sum(rate(tikv_storage_engine_async_request_duration_seconds_count{type=\"snapshot\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async snapshot duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 109, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + 
"sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"write\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"write\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_engine_async_request_duration_seconds_sum{type=\"write\"}[1m])) / sum(rate(tikv_storage_engine_async_request_duration_seconds_count{type=\"write\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async write duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1310, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + 
"linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "raft-95%", + "yaxis": 2 + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_batch_commands_total_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_batch_commands_total_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_batch_commands_total_sum[30s])) / sum(rate(tikv_storage_batch_commands_total_count[30s]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_batch_snapshot_commands_total_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "raft-95%", + "refId": "D", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async batch snapshot", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "Storage Batch Size", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": "Raftstore Batch Size", + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Storage", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "height": "400", + "id": 
167, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_too_busy_total[1m])) by (stage)", + "intervalFactor": 2, + "legendFormat": "busy", + "refId": "A", + "step": 20 + }, + { + "expr": "sum(rate(tikv_scheduler_stage_total[1m])) by (stage)", + "intervalFactor": 2, + "legendFormat": "{{stage}}", + "refId": "B", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler stage total", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "height": "", + "id": 1, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_scheduler_commands_pri_total[1m])) by (priority)", + "intervalFactor": 2, + "legendFormat": "{{priority}}", + "metric": "", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler priority commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "height": "", + "id": 193, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_scheduler_contex_total) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler pending commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + 
"min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Scheduler", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "height": "400", + "id": 168, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_too_busy_total{type=\"$command\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "busy", + "refId": "A", + "step": 4 + }, + { + "expr": "sum(rate(tikv_scheduler_stage_total{type=\"$command\"}[1m])) by (stage)", + "intervalFactor": 2, + "legendFormat": "{{stage}}", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler stage total", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 3, + 
"legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_command_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_command_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_command_duration_seconds_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_command_duration_seconds_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler command duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 194, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_latch_wait_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_latch_wait_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_latch_wait_duration_seconds_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_latch_wait_duration_seconds_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler latch wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 195, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_kv_command_key_read_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "kv_command_key", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_kv_command_key_read_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_kv_command_key_read_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_kv_command_key_read_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler keys read", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 373, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_kv_command_key_write_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "kv_command_key", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_kv_command_key_write_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_kv_command_key_write_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_kv_command_key_write_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler keys written", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 560, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, 
+ "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\"}[1m])) by (tag)", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 675, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\", cf=\"lock\"}[1m])) by (tag)", + "intervalFactor": 
2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details [lock]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 829, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\", cf=\"write\"}[1m])) by (tag)", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details [write]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 830, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\", cf=\"default\"}[1m])) by (tag)", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details [default]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": "command", + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Scheduler - $command", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 16, + "legend": { + "alignAsTable": true, + "avg": false, + 
"current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_duration_seconds_sum{req=\"select\"}[1m])) / sum(rate(tikv_coprocessor_request_duration_seconds_count{req=\"select\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 13, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + 
"lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "tikv_coprocessor_request_duration_seconds_bucket", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_coprocessor_request_duration_seconds_sum{req=\"index\"}[1m])) / sum(rate(tikv_coprocessor_request_duration_seconds_count{req=\"index\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 115, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": 
"null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket[1m])) by (le, job,req))", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{req}}", + "metric": "tikv_coprocessor_request_duration_seconds_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 111, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + 
"legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_wait_seconds_sum{req=\"select\"}[1m])) / sum(rate(tikv_coprocessor_request_wait_seconds_count{req=\"select\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 112, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_wait_seconds_sum{req=\"index\"}[1m])) / 
sum(rate(tikv_coprocessor_request_wait_seconds_count{req=\"index\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 116, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_wait_seconds_bucket[1m])) by (le, job,req))", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{req}}", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 113, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_handle_seconds_sum{req=\"select\"}[1m])) / sum(rate(tikv_coprocessor_request_handle_seconds_count{req=\"select\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table handle duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 114, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_handle_seconds_sum{req=\"index\"}[1m])) / sum(rate(tikv_coprocessor_request_handle_seconds_count{req=\"index\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index handle duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 117, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_handle_seconds_bucket[1m])) by (le, job,req))", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{req}}", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor handle duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 52, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_keys_bucket[1m])) by (req)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{req}}", + "metric": "tikv_coprocessor_scan_keys_bucket", + "refId": "A", + 
"step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor scan keys", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 551, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_executor_count[1m])) by (type)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor executor count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 74, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_request_error[1m])) by (reason)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{reason}}", + "metric": "tikv_coprocessor_request_error", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor request errors", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 550, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_coprocessor_pending_request[1m])) by (req, priority)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{ req }} - {{priority}}", + "metric": "tikv_coprocessor_request_error", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor pending requests", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 552, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table scan details", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 122, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "repeat": null, + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"lock\", req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table scan details [lock]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 555, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + 
"values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"write\", req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table scan details [write]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 556, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"default\", req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": 
null, + "title": "coprocessor table scan details [default]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 553, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + 
"decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 554, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "repeat": "cf", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"lock\", req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details - [lock]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 557, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + 
"expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"write\", req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details - [write]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 558, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"default\", req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details - [default]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { 
+ "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Coprocessor", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 26, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1.0, sum(rate(tikv_storage_mvcc_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " max", + "metric": "", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_mvcc_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_mvcc_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 95%", + "metric": "", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_mvcc_versions_sum[1m])) / sum(rate(tikv_storage_mvcc_versions_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "MVCC Versions", + "tooltip": { + "msResolution": false, + 
"shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 559, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1.0, sum(rate(tikv_storage_mvcc_gc_delete_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " max", + "metric": "", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_mvcc_gc_delete_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_mvcc_gc_delete_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 95%", + "metric": "", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_mvcc_gc_delete_versions_sum[1m])) / sum(rate(tikv_storage_mvcc_gc_delete_versions_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "MVCC Delete Versions", + 
"tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 121, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_storage_command_total{type=\"gc\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "total", + "metric": "tikv_storage_command_total", + "refId": "A", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_gc_skipped_counter[1m]))", + "intervalFactor": 2, + "legendFormat": "skipped", + "metric": "tikv_storage_gc_skipped_counter", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "GC Commands", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 966, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_gc_worker_actions_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "GC Worker Actions", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 0, + "editable": true, + "error": false, + "format": "s", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 27, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": 
"", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "max(tidb_tikvclient_gc_config{type=\"tikv_gc_life_time\"})", + "interval": "", + "intervalFactor": 2, + "refId": "A", + "step": 60 + } + ], + "thresholds": "", + "title": "GC LifeTime", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 0, + "editable": true, + "error": false, + "format": "s", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 28, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "max(tidb_tikvclient_gc_config{type=\"tikv_gc_run_interval\"})", + "intervalFactor": 2, + "refId": "A", + "step": 60 + } + ], + "thresholds": "", + "title": "GC interval", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], 
+ "valueName": "current" + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "GC", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 35, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(delta(tikv_raftstore_raft_sent_message_total{type=\"snapshot\"}[1m]))", + "intervalFactor": 2, + "legendFormat": " ", + "refId": "A", + "step": 60 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "rate snapshot message", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "opm", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 36, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + 
"percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_server_send_snapshot_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "send", + "refId": "A", + "step": 60 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_snapshot_duration_seconds_bucket{type=\"apply\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "apply", + "refId": "B", + "step": 60 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_snapshot_duration_seconds_bucket{type=\"generate\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "generate", + "refId": "C", + "step": 60 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% handle snapshot duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 38, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": true, + "targets": [ + { + "expr": 
"sum(tikv_raftstore_snapshot_traffic_total) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "", + "refId": "A", + "step": 60 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "snapshot state count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 44, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_snapshot_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "size", + "metric": "tikv_snapshot_size_bucket", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% snapshot size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 43, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_snapshot_kv_count_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "count", + "metric": "tikv_snapshot_kv_count_bucket", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% snapshot kv count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Snapshot", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 59, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": 
false, + "rightSide": true, + "show": true, + "sideWidth": 400, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_worker_handled_task_total[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Worker Handled Tasks", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1395, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 400, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_worker_pending_task_total[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 4 + } + ], + 
"thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Worker Pending Tasks", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Task", + "titleSize": "h6" + }, + { + "collapse": true, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.8 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"raftstore_.*\"}[1m])) by (job, name)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 20 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "max" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV raftstore thread CPU usage is high", + "name": "TiKV raft store CPU alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 61, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": 
false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"raftstore_.*\"}[1m])) by (job, name)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.8 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "raft store CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 79, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=\"apply_worker\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "async apply CPU", + "tooltip": 
{ + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 63, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"storage_schedul.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 64, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"sched_worker.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler worker CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 78, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_thread_cpu_seconds_total{name=~\"endpoint.*\"}[1m])) by (job)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 67, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"snapshot_worker.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "snapshot worker CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": 
null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 68, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"split_check.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "split check CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 69, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, 
+ "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "max(rate(tikv_thread_cpu_seconds_total{name=~\"rocksdb.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "warning", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + }, + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 4 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "rocksdb CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 105, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"grpc.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + 
"timeFrom": null, + "timeShift": null, + "title": "grpc poll CPU", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Thread CPU", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 138, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_hit\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "memtable", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=~\"block_cache_data_hit|block_cache_filter_hit\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "block_cache", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_get_served{db=\"$db\", type=\"get_hit_l0\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "l0", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_get_served{db=\"$db\", type=\"get_hit_l1\"}[1m]))", + "intervalFactor": 2, + 
"legendFormat": "l1", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_get_served{db=\"$db\", type=\"get_hit_l2_and_up\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "l2_and_up", + "refId": "F", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Get Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 82, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": 
"D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Get Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 129, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_seek\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "seek", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_seek_found\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "seek_found", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_next\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "next", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_next_found\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "next_found", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_prev\"}[1m]))", + "intervalFactor": 2, + 
"legendFormat": "prev", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_prev_found\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "prev_found", + "metric": "", + "refId": "F", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Seek Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 125, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_average\"})", + 
"intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Seek Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 139, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_write_served{db=\"$db\", type=~\"write_done_by_self|write_done_by_other\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "done", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_write_served{db=\"$db\", type=\"write_timeout\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "timeout", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_write_served{db=\"$db\", type=\"write_with_wal\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "with_wal", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + 
"values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 126, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + 
"max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 137, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_wal_file_synced{db=\"$db\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "sync", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "WAL Sync Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 135, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "WAL Sync Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 128, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_event_total{db=\"$db\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": 
"tikv_engine_event_total", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 136, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction 
Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 140, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_max\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_percentile99\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_percentile95\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_average\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "SST Read Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": 
"individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 87, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_max\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_percentile99\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_percentile95\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_average\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Stall Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": 
[] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 103, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_memory_bytes{db=\"$db\", type=\"mem-tables\"}) by (cf)", + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memtable Size", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 88, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + 
"renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_hit\"}[1m])) / (sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_hit\"}[1m])) + sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "hit", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memtable Hit", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 102, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_block_cache_size_bytes{db=\"$db\"}) by(cf)", + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Size", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": 
true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 80, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "all", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "data", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "filter", + "metric": "", + "refId": "B", + "step": 
10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "index", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_bloom_efficiency{db=\"$db\", type=\"bloom_prefix_useful\"}[1m])) / sum(rate(tikv_engine_bloom_efficiency{db=\"$db\", type=\"bloom_prefix_checked\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "bloom prefix", + "metric": "", + "refId": "E", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Hit", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "height": "", + "id": 467, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"block_cache_byte_read\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + 
"legendFormat": "total_read", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"block_cache_byte_write\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "total_written", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_bytes_insert\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "data_insert", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_bytes_insert\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "filter_insert", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_bytes_evict\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "filter_evict", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_bytes_insert\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "index_insert", + "metric": "", + "refId": "F", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_bytes_evict\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "index_evict", + "metric": "", + "refId": "G", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "none", + 
"label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 468, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "total_add", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "data_add", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "filter_add", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "index_add", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_add_failures\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "add_failures", + "metric": "", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + 
"yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "height": "", + "id": 132, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"keys_read\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "read", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"keys_written\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "written", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_compaction_num_corrupt_keys{db=\"$db\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "corrupt", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Keys Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + 
"bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 131, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_engine_estimate_num_keys{db=\"$db\"}) by (cf)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "metric": "tikv_engine_estimate_num_keys", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Total Keys", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "height": "", + "id": 85, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"bytes_read\"}[1m]))", + "hide": false, + "interval": "", + 
"intervalFactor": 2, + "legendFormat": "get", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"iter_bytes_read\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "scan", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Read Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 133, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": 
"avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Bytes / Read", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "height": "", + "id": 86, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"wal_file_bytes\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "wal", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"bytes_written\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "write", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 1, + 
"max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 134, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Bytes / Write", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + 
"aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 90, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_compaction_flow_bytes{db=\"$db\", type=\"bytes_read\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "read", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_compaction_flow_bytes{db=\"$db\", type=\"bytes_written\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "written", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"flush_write_bytes\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "flushed", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 127, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": 
true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_pending_compaction_bytes{db=\"$db\"}[1m])) by (cf)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "metric": "tikv_engine_pending_compaction_bytes", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction Pending Bytes", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 518, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_read_amp_flow_bytes{db=\"$db\", type=\"read_amp_total_read_bytes\"}[1m])) by (job) / sum(rate(tikv_engine_read_amp_flow_bytes{db=\"$db\", type=\"read_amp_estimate_useful_bytes\"}[1m])) by (job)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": 
null, + "timeShift": null, + "title": "Read Amplification", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 863, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_compression_ratio{db=\"$db\"}) by (level)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "level - {{level}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compression Ratio", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 516, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, 
+ "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "tikv_engine_num_snapshots{db=\"$db\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Number of Snapshots", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 517, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "tikv_engine_oldest_snapshot_duration{db=\"$db\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_engine_oldest_snapshot_duration", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Oldest Snapshots Duration", + "tooltip": { + "shared": true, + "sort": 0, + 
"value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + } + ], + "repeat": "db", + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Rocksdb - $db", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 95, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_grpc_msg_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_grpc_msg_duration_seconds_bucket", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "grpc message count", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 
107, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_grpc_msg_fail_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_grpc_msg_fail_total", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "grpc message failed", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 97, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.8, sum(rate(tikv_grpc_msg_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + 
"timeShift": null, + "title": "80% grpc message duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 98, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_grpc_msg_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% grpc message duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Grpc", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": 
false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1069, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_request_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ type }}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD requests", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1070, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_request_duration_seconds_sum[1m])) by (type) / sum(rate(tikv_pd_request_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ 
type }}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD request duration (average)", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1215, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_heartbeat_message_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ type }}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD heartbeats", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1396, + "legend": { + "alignAsTable": true, + "avg": false, + 
"current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_validate_peer_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ type }}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD validate peers", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "PD", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "allValue": null, + "current": {}, + "datasource": "${DS_TEST-CLUSTER}", + "hide": 0, + "includeAll": true, + "label": "db", + "multi": true, + "name": "db", + "options": [], + "query": "label_values(tikv_engine_block_cache_size_bytes, db)", + "refresh": 1, + "regex": "", + "sort": 1, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "allValue": null, + "current": {}, + "datasource": "${DS_TEST-CLUSTER}", + "hide": 0, + "includeAll": true, + "label": "command", + "multi": true, + "name": "command", + "options": [], + "query": "label_values(tikv_storage_command_total, type)", + "refresh": 1, + "regex": "", + "sort": 
1, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Test-Cluster-TiKV", + "version": 2 +} diff --git a/v1.0/media/architecture.jpeg b/v1.0/media/architecture.jpeg new file mode 100755 index 0000000000000..f37b6cc495bbd Binary files /dev/null and b/v1.0/media/architecture.jpeg differ diff --git a/v1.0/media/explain_dot.png b/v1.0/media/explain_dot.png new file mode 100755 index 0000000000000..9ec5d1e566dc0 Binary files /dev/null and b/v1.0/media/explain_dot.png differ diff --git a/v1.0/media/grafana-screenshot.png b/v1.0/media/grafana-screenshot.png new file mode 100755 index 0000000000000..2e442f4e5cb13 Binary files /dev/null and b/v1.0/media/grafana-screenshot.png differ diff --git a/v1.0/media/monitor-architecture.png b/v1.0/media/monitor-architecture.png new file mode 100755 index 0000000000000..22b6f9ef07ab1 Binary files /dev/null and b/v1.0/media/monitor-architecture.png differ diff --git a/v1.0/media/pingcap-logo-1.png b/v1.0/media/pingcap-logo-1.png new file mode 100755 index 0000000000000..adc261932c354 Binary files /dev/null and b/v1.0/media/pingcap-logo-1.png differ diff --git a/v1.0/media/pingcap-logo.png b/v1.0/media/pingcap-logo.png new file mode 100755 index 0000000000000..3cc65eec06d21 Binary files /dev/null and b/v1.0/media/pingcap-logo.png differ diff --git a/v1.0/media/prometheus-in-tidb.png b/v1.0/media/prometheus-in-tidb.png new file mode 100755 index 0000000000000..757c5a6d2e474 Binary files /dev/null and b/v1.0/media/prometheus-in-tidb.png differ diff --git a/v1.0/media/syncer_architecture.png b/v1.0/media/syncer_architecture.png new file mode 100755 index 
0000000000000..a221aecadd379 Binary files /dev/null and b/v1.0/media/syncer_architecture.png differ diff --git a/v1.0/media/syncer_monitor_scheme.png b/v1.0/media/syncer_monitor_scheme.png new file mode 100755 index 0000000000000..c965622e5a88c Binary files /dev/null and b/v1.0/media/syncer_monitor_scheme.png differ diff --git a/v1.0/media/syncer_sharding.png b/v1.0/media/syncer_sharding.png new file mode 100755 index 0000000000000..a9f50f9abba55 Binary files /dev/null and b/v1.0/media/syncer_sharding.png differ diff --git a/v1.0/media/sysbench-01.png b/v1.0/media/sysbench-01.png new file mode 100755 index 0000000000000..ca256377e4f1a Binary files /dev/null and b/v1.0/media/sysbench-01.png differ diff --git a/v1.0/media/sysbench-02.png b/v1.0/media/sysbench-02.png new file mode 100755 index 0000000000000..9e708370271b0 Binary files /dev/null and b/v1.0/media/sysbench-02.png differ diff --git a/v1.0/media/sysbench-03.png b/v1.0/media/sysbench-03.png new file mode 100755 index 0000000000000..04eb0b36bf741 Binary files /dev/null and b/v1.0/media/sysbench-03.png differ diff --git a/v1.0/media/sysbench-04.png b/v1.0/media/sysbench-04.png new file mode 100755 index 0000000000000..cadd75e9831e8 Binary files /dev/null and b/v1.0/media/sysbench-04.png differ diff --git a/v1.0/media/sysbench-05.png b/v1.0/media/sysbench-05.png new file mode 100755 index 0000000000000..7842f60a4f0d8 Binary files /dev/null and b/v1.0/media/sysbench-05.png differ diff --git a/v1.0/media/sysbench-06.png b/v1.0/media/sysbench-06.png new file mode 100755 index 0000000000000..14bb2196ab72a Binary files /dev/null and b/v1.0/media/sysbench-06.png differ diff --git a/v1.0/media/sysbench-07.png b/v1.0/media/sysbench-07.png new file mode 100755 index 0000000000000..bd3313a11b744 Binary files /dev/null and b/v1.0/media/sysbench-07.png differ diff --git a/v1.0/media/sysbench-08.png b/v1.0/media/sysbench-08.png new file mode 100755 index 0000000000000..c3c218af4ab7c Binary files /dev/null and 
b/v1.0/media/sysbench-08.png differ diff --git a/v1.0/media/sysbench-09.png b/v1.0/media/sysbench-09.png new file mode 100755 index 0000000000000..fce27b6b59dcd Binary files /dev/null and b/v1.0/media/sysbench-09.png differ diff --git a/v1.0/media/tidb-architecture.png b/v1.0/media/tidb-architecture.png new file mode 100755 index 0000000000000..b0fa6767259b3 Binary files /dev/null and b/v1.0/media/tidb-architecture.png differ diff --git a/v1.0/media/tidb_binlog_kafka_architecture.png b/v1.0/media/tidb_binlog_kafka_architecture.png new file mode 100755 index 0000000000000..79790eb436466 Binary files /dev/null and b/v1.0/media/tidb_binlog_kafka_architecture.png differ diff --git a/v1.0/media/tidb_pump_deployment.jpeg b/v1.0/media/tidb_pump_deployment.jpeg new file mode 100755 index 0000000000000..177a72f6253eb Binary files /dev/null and b/v1.0/media/tidb_pump_deployment.jpeg differ diff --git a/v1.0/media/tispark-architecture.png b/v1.0/media/tispark-architecture.png new file mode 100755 index 0000000000000..6d8f0849fa90c Binary files /dev/null and b/v1.0/media/tispark-architecture.png differ diff --git a/v1.0/op-guide/ansible-deployment.md b/v1.0/op-guide/ansible-deployment.md new file mode 100755 index 0000000000000..2393231545857 --- /dev/null +++ b/v1.0/op-guide/ansible-deployment.md @@ -0,0 +1,703 @@ +--- +title: Ansible Deployment +category: operations +--- + +# Ansible Deployment + +## Overview + +Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. + +[TiDB-Ansible](https://github.com/pingcap/tidb-ansible) is a TiDB cluster deployment tool developed by PingCAP, based on Ansible playbook. TiDB-Ansible enables you to quickly deploy a new TiDB cluster which includes PD, TiDB, TiKV, and the cluster monitoring modules. 
+ +You can use the TiDB-Ansible configuration file to set up the cluster topology and complete all operation tasks with one click, including: + +- Initializing operating system parameters +- Deploying the components +- Rolling upgrade, including module survival detection +- Cleaning data +- Cleaning environment +- Configuring monitoring modules + + +## Prepare + +Before you start, make sure that you have: + +1. Several target machines with the following requirements: + + - 4 or more machines. At least 3 instances for TiKV. Do not deploy TiKV together with TiDB or PD on the same machine. See [Software and Hardware Requirements](recommendation.md). + + - Recommended operating system: + + - CentOS 7.3 or later Linux + - x86_64 architecture (AMD64) + - ext4 filesystem + + Use the ext4 filesystem for your data disks and mount it with the `nodelalloc` mount option. See [Mount the data disk ext4 filesystem with options](#mount-the-data-disk-ext4-filesystem-with-options). + + - Network connectivity between machines. Turn off the firewalls and iptables when deploying, and turn them back on after the deployment. + + - The same time and time zone on all machines, with the NTP service enabled to keep the time synchronized. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal). + + - A normal `tidb` user account that runs the service and can sudo to the root user without a password. See [How to configure SSH mutual trust and sudo without password](#how-to-configure-ssh-mutual-trust-and-sudo-without-password). + +   > **Note:** When you deploy TiDB using Ansible, use SSD disks for the data directory of TiKV and PD nodes. + +2. A Control Machine with the following requirements: + + - The Control Machine can be one of the managed nodes. + - It is recommended to install CentOS 7.3 or a later Linux version (Python 2.7 is included by default).
+ - The Control Machine must have access to the Internet in order to download TiDB and related packages. + - Configure mutual trust of `ssh authorized_key`. From the Control Machine, you can log in to the deployment target machine using the `tidb` user account without a password. See [How to configure SSH mutual trust and sudo without password](#how-to-configure-ssh-mutual-trust-and-sudo-without-password). + +## Install Ansible and dependencies in the Control Machine + +Use the following method to install Ansible on a CentOS 7 Control Machine. Installation from the EPEL source includes the Ansible dependencies automatically (such as `Jinja2==2.7.2 MarkupSafe==0.11`). After installation, you can view the version using `ansible --version`. + +> **Note:** Make sure that the Ansible version is **Ansible 2.4** or later; otherwise, a compatibility issue occurs. + +```bash + # yum install epel-release + # yum install ansible curl + # ansible --version + ansible 2.4.2.0 +``` + +For other systems, see [Install Ansible](ansible-deployment.md#install-ansible). + +## Download TiDB-Ansible to the Control Machine + +Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory. Use the following command to download the corresponding version of TiDB-Ansible from the GitHub [TiDB-Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`. + +Download the 1.0 version: + +``` +cd /home/tidb +git clone -b release-1.0 https://github.com/pingcap/tidb-ansible.git +``` + +or + +Download the master version: + +``` +cd /home/tidb +git clone https://github.com/pingcap/tidb-ansible.git +``` + +> **Note:** For the production environment, download the 1.0 version to deploy TiDB.
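The 2.4 minimum version noted above can also be checked in a script rather than by eye. This is a sketch, not part of TiDB-Ansible; `version_ge` is a hypothetical helper name:

```shell
# Hypothetical helper: succeeds when version $1 >= version $2,
# using sort -V (GNU coreutils) for version-aware ordering.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# The sample output above reports 2.4.2.0, which meets the minimum:
version_ge "2.4.2.0" "2.4" && echo "Ansible version OK"

# Against a live installation, you would feed it the reported version:
#   installed="$(ansible --version | head -n1 | awk '{print $2}')"
#   version_ge "$installed" "2.4" || echo "upgrade Ansible first" >&2
```

`sort -V` is chosen over plain string comparison because `2.10` sorts after `2.4` numerically, not lexically.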
+ +## Orchestrate the TiDB cluster + +The file path of `inventory.ini`: `tidb-ansible/inventory.ini` + +The standard cluster has 6 machines: + +- 2 TiDB nodes; the first TiDB machine is also used as a monitor +- 3 PD nodes +- 3 TiKV nodes + +### The cluster topology of a single TiKV instance on a single machine + +| Name | Host IP | Services | +|:------|:------------|:-----------| +| node1 | 172.16.10.1 | PD1, TiDB1 | +| node2 | 172.16.10.2 | PD2, TiDB2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1 | +| node5 | 172.16.10.5 | TiKV2 | +| node6 | 172.16.10.6 | TiKV3 | + +```ini +[tidb_servers] +172.16.10.1 +172.16.10.2 + +[pd_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 + +[tikv_servers] +172.16.10.4 +172.16.10.5 +172.16.10.6 + +[monitoring_servers] +172.16.10.1 + +[grafana_servers] +172.16.10.1 + +[monitored_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 +172.16.10.4 +172.16.10.5 +172.16.10.6 +``` + + +### The cluster topology of multiple TiKV instances on a single machine + +Take three TiKV instances per machine as an example: + +| Name | Host IP | Services | +|:------|:------------|:-----------| +| node1 | 172.16.10.1 | PD1, TiDB1 | +| node2 | 172.16.10.2 | PD2, TiDB2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1-1, TiKV1-2, TiKV1-3 | +| node5 | 172.16.10.5 | TiKV2-1, TiKV2-2, TiKV2-3 | +| node6 | 172.16.10.6 | TiKV3-1, TiKV3-2, TiKV3-3 | + +```ini +[tidb_servers] +172.16.10.1 +172.16.10.2 + +[pd_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 + +[tikv_servers] +TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv1" +TiKV1-2 ansible_host=172.16.10.4 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv1" +TiKV1-3 ansible_host=172.16.10.4 deploy_dir=/data3/deploy tikv_port=20173 labels="host=tikv1" +TiKV2-1 ansible_host=172.16.10.5 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv2" +TiKV2-2 ansible_host=172.16.10.5 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv2" +TiKV2-3 ansible_host=172.16.10.5 
deploy_dir=/data3/deploy tikv_port=20173 labels="host=tikv2" +TiKV3-1 ansible_host=172.16.10.6 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv3" +TiKV3-2 ansible_host=172.16.10.6 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv3" +TiKV3-3 ansible_host=172.16.10.6 deploy_dir=/data3/deploy tikv_port=20173 labels="host=tikv3" + +[monitoring_servers] +172.16.10.1 + +[grafana_servers] +172.16.10.1 + +[monitored_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 +172.16.10.4 +172.16.10.5 +172.16.10.6 + +...... + +[pd_servers:vars] +location_labels = ["host"] +``` + +**Edit the parameters in the service configuration file:** + +1. For multiple TiKV instances, edit the `end-point-concurrency` and `block-cache-size` parameters in `tidb-ansible/conf/tikv.yml`: + + - `end-point-concurrency`: keep the number lower than the number of CPU vCores + - `rocksdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 30% + - `rocksdb writecf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 45% + - `rocksdb lockcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum) + - `raftdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum) + +2. If multiple TiKV instances are deployed on the same physical disk, edit the `capacity` parameter in `conf/tikv.yml`: + + - `capacity`: (DISK - log space) / TiKV instance number (the unit is GB) + +### Description of inventory.ini variables + +#### Description of the deployment directory + +You can configure the deployment directory using the `deploy_dir` variable. The global variable is set to `/home/tidb/deploy` by default, and it applies to all services. If the data disk is mounted on the `/data1` directory, you can set it to `/data1/deploy`. 
For example: + +``` +## Global variables +[all:vars] +deploy_dir = /data1/deploy +``` + +To set a deployment directory separately for a service, you can configure host variables when configuring the service host list. Take the TiKV node as an example; it is similar for other services. You must add an alias in the first column to avoid confusion when multiple services are deployed on the same machine. + +``` +TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy +``` + +#### Description of other variables + +| Variable | Description | +| ---- | ------- | +| cluster_name | the name of a cluster, adjustable | +| tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches | +| deployment_method | the method of deployment, binary by default, Docker optional | +| process_supervision | the process supervision method, systemd by default, supervise optional | +| timezone | the time zone of the managed node, adjustable, `Asia/Shanghai` by default, used with the `set_timezone` variable | +| set_timezone | whether to edit the time zone of the managed node, True by default; False means skipping it | +| enable_elk | currently not supported | +| enable_firewalld | whether to enable the firewall, disabled by default | +| enable_ntpd | whether to monitor the NTP service of the managed node, True by default; do not disable it | +| machine_benchmark | whether to monitor the disk IOPS of the managed node, True by default; do not disable it | +| set_hostname | whether to edit the hostname of the managed node based on the IP, False by default | +| enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable | +| zookeeper_addrs | the ZooKeeper address of the binlog Kafka cluster | +| enable_slow_query_log | to record the slow query log of TiDB into a single file: ({{ deploy_dir }}/log/tidb_slow_query.log). 
False by default; if disabled, the slow query log is recorded in the TiDB log | +| deploy_without_tidb | the Key-Value mode, deploy only PD, TiKV and the monitoring service, not TiDB; set the IP of the tidb_servers host group to null in the `inventory.ini` file | + +## Deploy the TiDB cluster + +When `ansible-playbook` runs a playbook, the default concurrency is 5. If there are many deployment target machines, you can add the `-f` parameter to increase the concurrency, such as `ansible-playbook deploy.yml -f 10`. + +The following example uses the `tidb` user account as the user who runs the service. + +To deploy TiDB using a normal user account, take the following steps: + +1. Edit the `tidb-ansible/inventory.ini` file to make sure `ansible_user = tidb`. + + ``` + ## Connection + # ssh via root: + # ansible_user = root + # ansible_become = true + # ansible_become_user = tidb + + # ssh via normal user + ansible_user = tidb + ``` + + Run the following command; if all servers return `tidb`, SSH mutual trust is successfully configured: + + ``` + ansible -i inventory.ini all -m shell -a 'whoami' + ``` + + Run the following command; if all servers return `root`, passwordless sudo for the `tidb` user is successfully configured: + + ``` + ansible -i inventory.ini all -m shell -a 'whoami' -b + ``` + +2. Run the `local_prepare.yml` playbook to connect to the Internet and download the TiDB binary to the Control Machine. + + ``` + ansible-playbook local_prepare.yml + ``` + +3. Initialize the system environment and modify the kernel parameters. + + ``` + ansible-playbook bootstrap.yml + ``` + +4. Deploy the TiDB cluster software. + + ``` + ansible-playbook deploy.yml + ``` + +5. Start the TiDB cluster. + + ``` + ansible-playbook start.yml + ``` + +> **Note:** If you want to deploy TiDB using the root user account, see [Ansible Deployment Using the Root User Account](op-guide/root-ansible-deployment.md).
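The block-cache-size formulas listed earlier for multiple TiKV instances (MEM * 80% / instance number * ratio, with a 128 MB floor for lockcf and raftdb) can be turned into a quick calculator. This is a sketch under the assumption that memory is given in GB; `cache_gb` is a hypothetical helper name, not part of TiDB-Ansible:

```shell
# cache_gb MEM_GB INSTANCES RATIO -> recommended block-cache-size in GB,
# i.e. MEM * 80% / instance number * ratio, floored at 128 MB as the
# lockcf/raftdb formulas require (the floor never triggers for the
# larger defaultcf/writecf ratios in practice).
cache_gb() {
  awk -v m="$1" -v n="$2" -v r="$3" \
    'BEGIN { v = m * 0.8 / n * r; if (v < 0.128) v = 0.128; printf "%.2f", v }'
}

# Example: 128 GB of RAM shared by two TiKV instances.
echo "rocksdb defaultcf block-cache-size: $(cache_gb 128 2 0.30) GB"
echo "rocksdb writecf   block-cache-size: $(cache_gb 128 2 0.45) GB"
echo "rocksdb lockcf    block-cache-size: $(cache_gb 128 2 0.025) GB"
```

The computed values are what you would enter manually in `tidb-ansible/conf/tikv.yml`.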
+ +## Test the cluster + +It is recommended to configure load balancing to provide a uniform SQL interface. + +1. Connect to the TiDB cluster using the MySQL client. + + ```shell + mysql -u root -h 172.16.10.1 -P 4000 + ``` + + > **Note:** The default port of the TiDB service is 4000. + +2. Access the monitoring platform using a web browser. + + ``` + http://172.16.10.1:3000 + ``` + + The default account and password: `admin`/`admin`. + +## Perform rolling update + +- The rolling update of the TiDB service does not impact ongoing business. Minimum requirements: `pd*3, tidb*2, tikv*3`. +- **If the `pump`/`drainer` services are running in the cluster, stop the `drainer` service before rolling update. The rolling update of the TiDB service automatically updates the `pump` service.** + +### Download the binary automatically + +1. Edit the value of the `tidb_version` parameter in `inventory.ini`, and specify the version number you need to update to. The following example specifies the version number as `v1.0.2`: + + ``` + tidb_version = v1.0.2 + ``` + +2. Delete the existing downloads directory `tidb-ansible/downloads/`. + + ``` + rm -rf downloads + ``` + +3. Run the `local_prepare.yml` playbook to download the TiDB 1.0 binary and replace the existing binary in `tidb-ansible/resource/bin/` automatically. + + ``` + ansible-playbook local_prepare.yml + ``` + +### Download the binary manually + +You can also download the binary manually. Use `wget` to download the binary and replace the existing binary in `tidb-ansible/resource/bin/` manually. + +``` +wget http://download.pingcap.org/tidb-v1.0.0-linux-amd64-unportable.tar.gz +``` + +> **Note:** Remember to replace the version number in the download link. + +### Use Ansible for rolling update + +- Apply rolling update to the TiKV node (only update the TiKV service). + + ``` + ansible-playbook rolling_update.yml --tags=tikv + ``` + +- Apply rolling update to the PD node (only update the PD service). 
+ + ``` + ansible-playbook rolling_update.yml --tags=pd + ``` + +- Apply rolling update to the TiDB node (only update the TiDB service). + + ``` + ansible-playbook rolling_update.yml --tags=tidb + ``` + +- Apply rolling update to all services. + + ``` + ansible-playbook rolling_update.yml + ``` + +## Summary of common operations + +| Job | Playbook | +|:----------------------------------|:-----------------------------------------| +| Start the cluster | `ansible-playbook start.yml` | +| Stop the cluster | `ansible-playbook stop.yml` | +| Destroy the cluster | `ansible-playbook unsafe_cleanup.yml` (If the deployment directory is a mount point, an error is reported, but the cleanup result is unaffected) | +| Clean data (for test) | `ansible-playbook unsafe_cleanup_data.yml` | +| Rolling upgrade | `ansible-playbook rolling_update.yml` | +| Rolling upgrade TiKV | `ansible-playbook rolling_update.yml --tags=tikv` | +| Rolling upgrade modules except PD | `ansible-playbook rolling_update.yml --skip-tags=pd` | +| Rolling upgrade the monitoring components | `ansible-playbook rolling_update_monitor.yml` | + +## FAQ + +### How to download and install a specified version of TiDB? + +To install TiDB 1.0.4, download the `TiDB-Ansible release-1.0` branch and make sure `tidb_version = v1.0.4` in the `inventory.ini` file. For installation procedures, see the above description in this document. + +Download the `TiDB-Ansible release-1.0` branch from GitHub: + +``` +git clone -b release-1.0 https://github.com/pingcap/tidb-ansible.git +``` + +### How to customize the port? 
+ +Edit the `inventory.ini` file and add the following host variable after the IP of the corresponding service: + +| Component | Variable Port | Default Port | Description | +|:--------------|:-------------------|:-------------|:-------------------------| +| TiDB | tidb_port | 4000 | the communication port for the application and DBA tools | +| TiDB | tidb_status_port | 10080 | the communication port to report TiDB status | +| TiKV | tikv_port | 20160 | the TiKV communication port | +| PD | pd_client_port | 2379 | the communication port between TiDB and PD | +| PD | pd_peer_port | 2380 | the inter-node communication port within the PD cluster | +| Pump | pump_port | 8250 | the pump communication port | +| Prometheus | prometheus_port | 9090 | the communication port for the Prometheus service | +| Pushgateway | pushgateway_port | 9091 | the aggregation and report port for TiDB, TiKV, and PD monitor | +| node_exporter | node_exporter_port | 9100 | the communication port to report the system information of every TiDB cluster node | +| Grafana | grafana_port | 3000 | the port for the external Web monitoring service and client (Browser) access | + +### How to customize the deployment directory? 
+ +| Component | Variable Directory | Default Directory | Description | +|:--------------|:----------------------|:------------------------------|:-----| +| Global | deploy_dir | /home/tidb/deploy | the deployment directory | +| TiDB | tidb_log_dir | {{ deploy_dir }}/log | the TiDB log directory | +| TiKV | tikv_log_dir | {{ deploy_dir }}/log | the TiKV log directory | +| TiKV | tikv_data_dir | {{ deploy_dir }}/data | the data directory | +| TiKV | wal_dir | "" | the rocksdb write-ahead log directory, consistent with the TiKV data directory when the value is null | +| TiKV | raftdb_path | "" | the raftdb directory, being tikv_data_dir/raft when the value is null | +| PD | pd_log_dir | {{ deploy_dir }}/log | the PD log directory | +| PD | pd_data_dir | {{ deploy_dir }}/data.pd | the PD data directory | +| Pump | pump_log_dir | {{ deploy_dir }}/log | the Pump log directory | +| Pump | pump_data_dir | {{ deploy_dir }}/data.pump | the Pump data directory | +| Prometheus | prometheus_log_dir | {{ deploy_dir }}/log | the Prometheus log directory | +| Prometheus | prometheus_data_dir | {{ deploy_dir }}/data.metrics | the Prometheus data directory | +| pushgateway | pushgateway_log_dir | {{ deploy_dir }}/log | the pushgateway log directory | +| node_exporter | node_exporter_log_dir | {{ deploy_dir }}/log | the node_exporter log directory | +| Grafana | grafana_log_dir | {{ deploy_dir }}/log | the Grafana log directory | +| Grafana | grafana_data_dir | {{ deploy_dir }}/data.grafana | the Grafana data directory | + +### How to check whether the NTP service is normal? + +Run the following command. If it returns `running`, then the NTP service is running: + +``` +$ sudo systemctl status ntpd.service +● ntpd.service - Network Time Service + Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled) + Active: active (running) since 一 2017-12-18 13:13:19 CST; 3s ago +``` + +Run the ntpstat command. 
If it returns `synchronised to NTP server` (synchronizing with the NTP server), then the synchronization process is normal. + +``` +$ ntpstat +synchronised to NTP server (85.199.214.101) at stratum 2 + time correct to within 91 ms + polling server every 1024 s +``` + +> **Note:** For the Ubuntu system, install the `ntpstat` package. + +The following condition indicates the NTP service is not synchronized normally: + +``` +$ ntpstat +unsynchronised +``` + +The following condition indicates the NTP service is not running normally: + +``` +$ ntpstat +Unable to talk to NTP daemon. Is it running? +``` + +Run the following commands to trigger NTP synchronization immediately. You can replace `pool.ntp.org` with another NTP server. + +``` +$ sudo systemctl stop ntpd.service +$ sudo ntpdate pool.ntp.org +$ sudo systemctl start ntpd.service +``` + +### How to deploy the NTP service using Ansible? + +Refer to [Download TiDB-Ansible to the Control Machine](#download-tidb-ansible-to-the-control-machine) and download TiDB-Ansible. Add the IP of the deployment target machine to `[servers]`. You can replace the `ntp_server` variable value `pool.ntp.org` with another NTP server. Before starting the NTP service, the playbook runs `ntpdate` to synchronize with the configured NTP server. The NTP service deployed by Ansible uses the default server list in the package. See the `server` parameter in the `/etc/ntp.conf` file. + +``` +$ vi hosts.ini +[servers] +172.16.10.49 +172.16.10.50 +172.16.10.61 +172.16.10.62 + +[all:vars] +username = tidb +ntp_server = pool.ntp.org +``` + +Run the following command, and enter the root password of the deployment target machine as prompted: + +``` +$ ansible-playbook -i hosts.ini deploy_ntp.yml -k +``` + +### How to install the NTP service manually? + +Run the following commands on the CentOS 7 system: + +``` +$ sudo yum install ntp ntpdate +$ sudo systemctl start ntpd.service +``` + +### How to deploy TiDB using Docker? 
+ +- Install Docker on the Control Machine and the managed node. The normal user (such as `ansible_user = tidb`) account in `inventory.ini` must have sudo privileges or [the privileges to run Docker](https://docs.docker.com/engine/installation/linux/linux-postinstall/). +- Install the `docker-py` module on the Control Machine and the managed node. + + ``` + sudo pip install docker-py + ``` + +- Edit the `inventory.ini` file: + + ``` + # deployment methods, [binary, docker] + deployment_method = docker + + # process supervision, [systemd, supervise] + process_supervision = systemd + ``` + +The Docker installation process is similar to the binary method. + +### How to adjust the supervision method of a process from supervise to systemd? + +For TiDB versions earlier than 1.0.4, the default process supervision method in TiDB-Ansible is supervise. A previously installed cluster can keep its supervision method unchanged. If you need to change the supervision method to systemd, edit the following setting, then stop the cluster and run the following commands: + +``` +# process supervision, [systemd, supervise] +process_supervision = systemd +``` + +``` +ansible-playbook stop.yml +ansible-playbook deploy.yml -D +ansible-playbook start.yml +``` + +### How to install Ansible? + +- For the CentOS system, install Ansible following the method described at the beginning of this document. +- For the Ubuntu system, install Ansible using the PPA source: + + ```bash + sudo add-apt-repository ppa:ansible/ansible + sudo apt-get update + sudo apt-get install ansible + ``` + +- For other systems, see the [official Ansible document](http://docs.ansible.com/ansible/intro_installation.html). + +### Mount the data disk ext4 filesystem with options + +Format your data disks to the ext4 filesystem and mount them with the `nodelalloc` and `noatime` options. Mounting with the `nodelalloc` option is required; otherwise, the Ansible deployment cannot pass the check. The `noatime` option is optional. 
+ +Take the `/dev/nvme0n1` data disk as an example: + +``` +# vi /etc/fstab +/dev/nvme0n1 /data1 ext4 defaults,nodelalloc,noatime 0 2 +``` + +### How to configure SSH mutual trust and sudo without password? + +#### Create the `tidb` user on the Control Machine and generate the SSH key. + +``` +# useradd tidb +# passwd tidb +# su - tidb +$ +$ ssh-keygen -t rsa +Generating public/private rsa key pair. +Enter file in which to save the key (/home/tidb/.ssh/id_rsa): +Created directory '/home/tidb/.ssh'. +Enter passphrase (empty for no passphrase): +Enter same passphrase again: +Your identification has been saved in /home/tidb/.ssh/id_rsa. +Your public key has been saved in /home/tidb/.ssh/id_rsa.pub. +The key fingerprint is: +SHA256:eIBykszR1KyECA/h0d7PRKz4fhAeli7IrVphhte7/So tidb@172.16.10.49 +The key's randomart image is: ++---[RSA 2048]----+ +|=+o+.o. | +|o=o+o.oo | +| .O.=.= | +| . B.B + | +|o B * B S | +| * + * + | +| o + . | +| o E+ . | +|o ..+o. | ++----[SHA256]-----+ +``` + +#### How to automatically configure SSH mutual trust and sudo without password using Ansible? + +Refer to [Download TiDB-Ansible to the Control Machine](#download-tidb-ansible-to-the-control-machine) and download TiDB-Ansible. Add the IP of the deployment target machine to `[servers]`. + +``` +$ vi hosts.ini +[servers] +172.16.10.49 +172.16.10.50 +172.16.10.61 +172.16.10.62 + +[all:vars] +username = tidb +``` + +Run the following command, and enter the `root` password of the deployment target machine as prompted: + +``` +$ ansible-playbook -i hosts.ini create_users.yml -k +``` + +#### How to manually configure SSH mutual trust and sudo without password? + +Use the `root` user to login to the deployment target machine, create the `tidb` user and set the login password. 
+
+```
+# useradd tidb
+# passwd tidb
+```
+
+To configure sudo without password, run the following command, and add `tidb ALL=(ALL) NOPASSWD: ALL` to the end of the file:
+
+```
+# visudo
+tidb ALL=(ALL) NOPASSWD: ALL
+```
+
+Use the `tidb` user to log in to the Control Machine, and run the following command. Replace `172.16.10.61` with the IP of your deployment target machine, and enter the `tidb` user password of the deployment target machine. Successful execution indicates that SSH mutual trust has been created. Repeat this step for the other machines.
+
+```
+[tidb@172.16.10.49 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.10.61
+```
+
+#### Authenticate SSH mutual trust and sudo without password
+
+Use the `tidb` user to log in to the Control Machine, and log in to the IP of the target machine using SSH. If you do not need to enter the password and can successfully log in, SSH mutual trust is successfully configured.
+
+```
+[tidb@172.16.10.49 ~]$ ssh 172.16.10.61
+[tidb@172.16.10.61 ~]$
+```
+
+After you log in to the deployment target machine using the `tidb` user, run the following command. If you do not need to enter the password and can switch to the `root` user, sudo without password of the `tidb` user is successfully configured.
+
+```
+[tidb@172.16.10.61 ~]$ sudo -su root
+[root@172.16.10.61 tidb]#
+```
+
+### Error: You need to install jmespath prior to running json_query filter
+
+See [Install Ansible and dependencies in the Control Machine](#install-ansible-and-dependencies-in-the-control-machine) and install Ansible 2.4 in the Control Machine. The dependent `python2-jmespath` package is installed by default.
+
+For the CentOS 7 system, you can install `jmespath` using the following command:
+
+```
+sudo yum install python2-jmespath
+```
+
+Enter `import jmespath` in the Python interactive window of the Control Machine.
+
+- If no error is displayed, the dependency is successfully installed.
+- If the `ImportError: No module named jmespath` error is displayed, the Python `jmespath` module is not successfully installed.
+
+```
+$ python
+Python 2.7.5 (default, Nov 6 2016, 00:28:07)
+[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
+Type "help", "copyright", "credits" or "license" for more information.
+>>> import jmespath
+```
\ No newline at end of file
diff --git a/v1.0/op-guide/backup-restore.md b/v1.0/op-guide/backup-restore.md
new file mode 100755
index 0000000000000..cdb87a09d52db
--- /dev/null
+++ b/v1.0/op-guide/backup-restore.md
@@ -0,0 +1,121 @@
+---
+title: Backup and Restore
+category: operations
+---
+
+# Backup and Restore
+
+## About
+
+This document describes how to back up and restore the data of TiDB. Currently, this document only covers full backup and restoration.
+
+Here we assume that the TiDB service information is as follows:
+
+|Name|Address|Port|User|Password|
+|:----:|:-------:|:----:|:----:|:------:|
+|TiDB|127.0.0.1|4000|root|*|
+
+Use the following tools for data backup and restoration:
+
+- `mydumper`: to export data from TiDB
+- `loader`: to import data into TiDB
+
+### Download TiDB toolset (Linux)
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz
+cd tidb-enterprise-tools-latest-linux-amd64
+```
+
+## Full backup and restoration using `mydumper`/`loader`
+
+You can use `mydumper` to export data from TiDB and `loader` to import the data into TiDB.
+
+> **Note**: Although TiDB also supports the official `mysqldump` tool from MySQL for data migration, it is not recommended to use it.
Its performance is much lower than that of `mydumper`/`loader`, and migrating large amounts of data takes much more time. `mydumper`/`loader` is more powerful. For more information, see https://github.com/maxbube/mydumper.
+
+### Best practices of full backup and restoration using `mydumper`/`loader`
+
+To quickly back up and restore data (especially large amounts of data), refer to the following recommendations:
+
+- Keep each exported data file as small as possible; it is recommended to keep it within 64M. You can use the `-F` parameter to set the value.
+- You can adjust the `-t` parameter of `loader` based on the number and the load of TiKV instances. For example, if there are three TiKV instances, `-t` can be set to 3 * (1 ~ n). If the load of TiKV is too high and the log `backoffer.maxSleep 15000ms is exceeded` is displayed many times, decrease the value of `-t`; otherwise, increase it.
+
+#### An example of restoring data and related configuration
+
+- The total size of the exported files is 214G. A single table has 8 columns and 2 billion rows.
+- The cluster topology:
+  - 12 TiKV instances: 4 nodes, 3 TiKV instances per node
+  - 4 TiDB instances
+  - 3 PD instances
+- The configuration of each node:
+  - CPU: Intel Xeon E5-2670 v3 @ 2.30GHz
+  - 48 vCPU [2 x 12 physical cores]
+  - Memory: 128G
+  - Disk: sda [RAID 10, 300G], sdb [RAID 5, 2T]
+  - Operating System: CentOS 7.3
+- The `-F` parameter of `mydumper` is set to 16 and the `-t` parameter of `loader` is set to 64.
+
+**Results**: It takes 11 hours to import all the data, which is 19.4G/hour.
+
+### Backup data from TiDB
+
+Use `mydumper` to back up data from TiDB.
+
+```bash
+./bin/mydumper -h 127.0.0.1 -P 4000 -u root -t 16 -F 64 -B test -T t1,t2 --skip-tz-utc -o ./var/test
+```
+
+In this command,
+
+- `-B test`: means the data is exported from the `test` database.
+- `-T t1,t2`: means only the `t1` and `t2` tables are exported.
+- `-t 16`: means 16 threads are used to export the data.
+- `-F 64`: means a table is partitioned into chunks and one chunk is 64MB.
+- `--skip-tz-utc`: adding this parameter ignores the inconsistency of time zone settings between MySQL and the data exporting machine and disables automatic time zone conversion.
+
+### Restore data into TiDB
+
+To restore data into TiDB, use `loader` to import the previously exported data. See [Loader instructions](../tools/loader.md) for more information.
+
+```bash
+./bin/loader -h 127.0.0.1 -u root -P 4000 -t 32 -d ./var/test
+```
+
+After the data is imported, you can view the data in TiDB using the MySQL client:
+
+```sql
+mysql -h127.0.0.1 -P4000 -uroot
+
+mysql> show tables;
++----------------+
+| Tables_in_test |
++----------------+
+| t1             |
+| t2             |
++----------------+
+
+mysql> select * from t1;
++----+------+
+| id | age  |
++----+------+
+| 1  | 1    |
+| 2  | 2    |
+| 3  | 3    |
++----+------+
+
+mysql> select * from t2;
++----+------+
+| id | name |
++----+------+
+| 1  | a    |
+| 2  | b    |
+| 3  | c    |
++----+------+
+```
\ No newline at end of file
diff --git a/v1.0/op-guide/binary-deployment.md b/v1.0/op-guide/binary-deployment.md
new file mode 100755
index 0000000000000..abc743265db32
--- /dev/null
+++ b/v1.0/op-guide/binary-deployment.md
@@ -0,0 +1,447 @@
+---
+title: Deploy TiDB Using the Binary
+category: operations
+---
+
+# Deploy TiDB Using the Binary
+
+## Overview
+
+A complete TiDB cluster contains PD, TiKV, and TiDB. To start the database service, follow the order of PD -> TiKV -> TiDB. To stop the database service, follow the order of stopping TiDB -> TiKV -> PD.
+
+Before you start, see [TiDB architecture](../overview.md#tidb-architecture) and [Software and Hardware Requirements](./recommendation.md).
+
+This document describes the binary deployment of three scenarios:
+
+- To quickly understand and try TiDB, see [Single node cluster deployment](#single-node-cluster-deployment).
+- To try TiDB out and explore the features, see [Multiple nodes cluster deployment for test](#multiple-nodes-cluster-deployment-for-test).
+- To deploy and use TiDB in production, see [Multiple nodes cluster deployment](#multiple-nodes-cluster-deployment).
+
+## TiDB components and default ports
+
+### TiDB database components (required)
+
+See the following table for the default ports for the TiDB components:
+
+| Component | Default Port | Protocol | Description |
+| :-- | :-- | :-- | :----------- |
+| ssh | 22 | TCP | sshd service |
+| TiDB | 4000 | TCP | the communication port for the application and DBA tools |
+| TiDB | 10080 | TCP | the communication port to report TiDB status |
+| TiKV | 20160 | TCP | the TiKV communication port |
+| PD | 2379 | TCP | the communication port between TiDB and PD |
+| PD | 2380 | TCP | the inter-node communication port within the PD cluster |
+
+### TiDB database components (optional)
+
+See the following table for the default ports for the optional TiDB components:
+
+| Component | Default Port | Protocol | Description |
+| :-- | :-- | :-- | :------------------------ |
+| Prometheus | 9090 | TCP | the communication port for the Prometheus service |
+| Pushgateway | 9091 | TCP | the aggregation and report port for the TiDB, TiKV, and PD monitoring data |
+| Node_exporter | 9100 | TCP | the communication port to report the system information of every TiDB cluster node |
+| Grafana | 3000 | TCP | the port for the external Web monitoring service and client (Browser) access |
+| alertmanager | 9093 | TCP | the port for the alert service |
+
+## Configure and check the system before installation
+
+### Operating system
+
+| Configuration | Description |
+| :-- | :-------------------- |
+| Supported Platform | See the [Software and Hardware Requirements](./recommendation.md) |
+| File System | The ext4 file system is recommended in TiDB deployment |
+| Swap Space | It is recommended to disable swap space in TiDB deployment |
+| Disk Block Size | Set
the size of the system disk `Block` to `4096` | + +### Network and firewall + +| Configuration | Description | +| :-- | :------------------- | +| Firewall / Port | Check whether the ports required by TiDB are accessible between the nodes | + +### Operating system parameters + +| Configuration | Description | +| :-- | :-------------------------- | +| Nice Limits | For system users, set the default value of `nice` in TiDB to `0` | +| min_free_kbytes | The setting for `vm.min_free_kbytes` in `sysctl.conf` needs to be high enough | +| User Open Files Limit | For database administrators, set the number of TiDB open files to `1000000` | +| System Open File Limits | Set the number of system open files to `1000000` | +| User Process Limits | For TiDB users, set the `nproc` value to `4096` in `limits.conf` | +| Address Space Limits | For TiDB users, set the space to `unlimited` in `limits.conf` | +| File Size Limits | For TiDB users, set the `fsize` value to `unlimited` in `limits.conf` | +| Disk Readahead | Set the value of the `readahead` data disk to `4096` at a minimum | +| NTP service | Configure the NTP time synchronization service for each node | +| SELinux | Turn off the SELinux service for each node | +| CPU Frequency Scaling | It is recommended to turn on CPU overclocking | +| Transparent Hugepages | For Red Hat 7+ and CentOS 7+ systems, it is required to set the Transparent Hugepages to `always` | +| I/O Scheduler | Set the I/O Scheduler of data disks to the `deadline` mode | +| vm.swappiness | Set `vm.swappiness = 0` | + + +> **Note**: To adjust the operating system parameters, contact your system administrator. 
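Several of the kernel parameters above can be inspected from `/proc` and `/sys` before deployment. The following is only a sketch; the `param_status` helper is a hypothetical name, not a TiDB tool:

```bash
#!/bin/bash
# Sketch: compare a kernel parameter against its recommended value.
# param_status is a hypothetical helper.
param_status() {
  # $1: actual value, $2: expected value
  if [ "$1" = "$2" ]; then
    echo "ok"
  else
    echo "got $1, expected $2"
  fi
}

# vm.swappiness should be 0 per the table above (the path is standard Linux):
if [ -r /proc/sys/vm/swappiness ]; then
  echo "vm.swappiness: $(param_status "$(cat /proc/sys/vm/swappiness)" 0)"
fi
```

The same pattern applies to the other values in the table, such as the I/O scheduler of the data disks.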
+
+### Database running user
+
+| Configuration | Description |
+| :-- | :---------------------------- |
+| LANG environment | Set `LANG = en_US.UTF8` |
+| TZ time zone | Set the TZ time zone of all nodes to the same value |
+
+## Create the database running user account
+
+In the Linux environment, create the `tidb` user on each installation node as the database running user, and set up SSH mutual trust between the cluster nodes. To create the running user and set up SSH mutual trust, contact your system administrator. Here is an example:
+
+```bash
+# useradd tidb
+# usermod -a -G tidb tidb
+# su - tidb
+Last login: Tue Aug 22 12:06:23 CST 2017 on pts/2
+-bash-4.2$ ssh-keygen -t rsa
+Generating public/private rsa key pair.
+Enter file in which to save the key (/home/tidb/.ssh/id_rsa):
+Created directory '/home/tidb/.ssh'.
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /home/tidb/.ssh/id_rsa.
+Your public key has been saved in /home/tidb/.ssh/id_rsa.pub.
+The key fingerprint is:
+5a:00:e6:df:9e:40:25:2c:2d:e2:6e:ee:74:c6:c3:c1 tidb@t001
+The key's randomart image is:
++--[ RSA 2048]----+
+|  oo. .          |
+| .oo.oo          |
+|  . ..oo         |
+|   .. o o        |
+|  .  E o S       |
+|   oo . = .      |
+|  o. * . o       |
+| ..o .           |
+|  ..             |
++-----------------+
+
+-bash-4.2$ cd .ssh
+-bash-4.2$ cat id_rsa.pub >> authorized_keys
+-bash-4.2$ chmod 644 authorized_keys
+-bash-4.2$ ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.1.100
+```
+
+## Download the official binary package
+
+TiDB provides the official binary installation package for Linux. For the operating system, it is recommended to use Red Hat 7.3 or later, or CentOS 7.3 or later.
+
+### Operating system: Linux (Redhat 7+, CentOS 7+)
+
+```
+# Download the package.
+wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-latest-linux-amd64.sha256 + +# Extract the package. +tar -xzf tidb-latest-linux-amd64.tar.gz +cd tidb-latest-linux-amd64 +``` + +## Single node cluster deployment + +After downloading the TiDB binary package, you can run and test the TiDB cluster on a standalone server. Follow the steps below to start PD, TiKV and TiDB: + +1. Start PD. + + ```bash + ./bin/pd-server --data-dir=pd \ + --log-file=pd.log + ``` + + +2. Start TiKV. + + ```bash + ./bin/tikv-server --pd="127.0.0.1:2379" \ + --data-dir=tikv \ + --log-file=tikv.log + ``` + +3. Start TiDB. + + ```bash + ./bin/tidb-server --store=tikv \ + --path="127.0.0.1:2379" \ + --log-file=tidb.log + ``` + +4. Use the official MySQL client to connect to TiDB. + + ```sh + mysql -h 127.0.0.1 -P 4000 -u root -D test + ``` + +## Multiple nodes cluster deployment for test + +If you want to test TiDB but have a limited number of nodes, you can use one PD instance to test the entire cluster. + +Assuming that you have four nodes, you can deploy 1 PD instance, 3 TiKV instances, and 1 TiDB instance. See the following table for details: + +| Name | Host IP | Services | +| :-- | :-- | :------------------- | +| Node1 | 192.168.199.113 | PD1, TiDB | +| Node2 | 192.168.199.114 | TiKV1 | +| Node3 | 192.168.199.115 | TiKV2 | +| Node4 | 192.168.199.116 | TiKV3 | + +Follow the steps below to start PD, TiKV and TiDB: + +1. Start PD on Node1. + + ```bash + ./bin/pd-server --name=pd1 \ + --data-dir=pd1 \ + --client-urls="http://192.168.199.113:2379" \ + --peer-urls="http://192.168.199.113:2380" \ + --initial-cluster="pd1=http://192.168.199.113:2380" \ + --log-file=pd.log + ``` + +2. Start TiKV on Node2, Node3 and Node4. 
+ + ```bash + ./bin/tikv-server --pd="192.168.199.113:2379" \ + --addr="192.168.199.114:20160" \ + --data-dir=tikv1 \ + --log-file=tikv.log + + ./bin/tikv-server --pd="192.168.199.113:2379" \ + --addr="192.168.199.115:20160" \ + --data-dir=tikv2 \ + --log-file=tikv.log + + ./bin/tikv-server --pd="192.168.199.113:2379" \ + --addr="192.168.199.116:20160" \ + --data-dir=tikv3 \ + --log-file=tikv.log + ``` + +3. Start TiDB on Node1. + + ```bash + ./bin/tidb-server --store=tikv \ + --path="192.168.199.113:2379" \ + --log-file=tidb.log + ``` + +4. Use the official MySQL client to connect to TiDB. + + ```sh + mysql -h 192.168.199.113 -P 4000 -u root -D test + ``` + +## Multiple nodes cluster deployment + +For the production environment, multiple nodes cluster deployment is recommended. Before you begin, see [Software and Hardware Requirements](./recommendation.md). + +Assuming that you have six nodes, you can deploy 3 PD instances, 3 TiKV instances, and 1 TiDB instance. See the following table for details: + +| Name | Host IP | Services | +| :-- | :-- | :-------------- | +| Node1 | 192.168.199.113| PD1, TiDB | +| Node2 | 192.168.199.114| PD2 | +| Node3 | 192.168.199.115| PD3 | +| Node4 | 192.168.199.116| TiKV1 | +| Node5 | 192.168.199.117| TiKV2 | +| Node6 | 192.168.199.118| TiKV3 | + +Follow the steps below to start PD, TiKV, and TiDB: + +1. Start PD on Node1, Node2, and Node3 in sequence. 
+
+    ```bash
+    ./bin/pd-server --name=pd1 \
+        --data-dir=pd1 \
+        --client-urls="http://192.168.199.113:2379" \
+        --peer-urls="http://192.168.199.113:2380" \
+        --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
+        -L "info" \
+        --log-file=pd.log
+
+    ./bin/pd-server --name=pd2 \
+        --data-dir=pd2 \
+        --client-urls="http://192.168.199.114:2379" \
+        --peer-urls="http://192.168.199.114:2380" \
+        --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
+        -L "info" \
+        --log-file=pd.log
+
+    ./bin/pd-server --name=pd3 \
+        --data-dir=pd3 \
+        --client-urls="http://192.168.199.115:2379" \
+        --peer-urls="http://192.168.199.115:2380" \
+        --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
+        -L "info" \
+        --log-file=pd.log
+    ```
+
+2. Start TiKV on Node4, Node5 and Node6.
+
+    ```bash
+    ./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
+        --addr="192.168.199.116:20160" \
+        --data-dir=tikv1 \
+        --log-file=tikv.log
+
+    ./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
+        --addr="192.168.199.117:20160" \
+        --data-dir=tikv2 \
+        --log-file=tikv.log
+
+    ./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
+        --addr="192.168.199.118:20160" \
+        --data-dir=tikv3 \
+        --log-file=tikv.log
+    ```
+
+3. Start TiDB on Node1.
+
+    ```bash
+    ./bin/tidb-server --store=tikv \
+        --path="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
+        --log-file=tidb.log
+    ```
+
+4. Use the official MySQL client to connect to TiDB.
+
+    ```sh
+    mysql -h 192.168.199.113 -P 4000 -u root -D test
+    ```
+
+> **Note**:
+>
+> - If you start TiKV or deploy PD in the production environment, it is highly recommended to specify the path for the configuration file using the `--config` parameter. If the parameter is not set, TiKV or PD does not read the configuration file.
+> - To tune TiKV, see [Performance Tuning for TiKV](./tune-TiKV.md).
+> - If you use `nohup` to start the cluster in the production environment, write the startup commands in a script and then run the script. Otherwise, the `nohup` process might abort because it receives an exception when the shell command exits. For more information, see [The TiDB/TiKV/PD process aborts unexpectedly](../trouble-shooting.md#the-tidbtikvpd-process-aborts-unexpectedly).
+
+## TiDB monitor and alarm deployment
+
+To install and deploy the TiDB monitoring and alarm services, see the following table for the node information:
+
+| Name | Host IP | Services |
+| :-- | :-- | :------------- |
+| Node1 | 192.168.199.113 | node_exporter, pushgateway, Prometheus, Grafana |
+| Node2 | 192.168.199.114 | node_exporter |
+| Node3 | 192.168.199.115 | node_exporter |
+| Node4 | 192.168.199.116 | node_exporter |
+
+### Download the binary package
+
+```
+# Download the package.
+wget https://github.com/prometheus/prometheus/releases/download/v1.5.2/prometheus-1.5.2.linux-amd64.tar.gz
+wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.2/node_exporter-0.14.0-rc.2.linux-amd64.tar.gz
+wget https://grafanarel.s3.amazonaws.com/builds/grafana-4.1.2-1486989747.linux-x64.tar.gz
+wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-amd64.tar.gz
+
+# Extract the package.
+tar -xzf prometheus-1.5.2.linux-amd64.tar.gz
+tar -xzf node_exporter-0.14.0-rc.2.linux-amd64.tar.gz
+tar -xzf grafana-4.1.2-1486989747.linux-x64.tar.gz
+tar -xzf pushgateway-0.3.1.linux-amd64.tar.gz
+```
+
+### Start the monitor service
+
+#### Start `node_exporter` on Node1, Node2, Node3 and Node4.
+
+```
+$ cd node_exporter-0.14.0-rc.2.linux-amd64
+
+# Start the node_exporter service.
+./node_exporter --web.listen-address=":9100" \
+    --log.level="info"
+```
+
+#### Start `pushgateway` on Node1.
+
+```
+$ cd pushgateway-0.3.1.linux-amd64
+
+# Start the pushgateway service.
+./pushgateway \
+    --log.level="info" \
+    --web.listen-address=":9091"
+```
+
+#### Start Prometheus on Node1.
+
+```
+$ cd prometheus-1.5.2.linux-amd64
+
+# Edit the configuration file:
+
+vi prometheus.yml
+
+...
+global:
+  scrape_interval: 15s # By default, scrape targets every 15 seconds.
+  evaluation_interval: 15s # Evaluate rules every 15 seconds.
+  # scrape_timeout is set to the global default (10s).
+  external_labels:
+    cluster: 'test-cluster'
+    monitor: "prometheus"
+
+scrape_configs:
+  - job_name: 'overwritten-cluster'
+    scrape_interval: 3s
+    honor_labels: true # don't overwrite job & instance labels
+    static_configs:
+      - targets: ['192.168.199.113:9091']
+
+  - job_name: "overwritten-nodes"
+    honor_labels: true # don't overwrite job & instance labels
+    static_configs:
+      - targets:
+        - '192.168.199.113:9100'
+        - '192.168.199.114:9100'
+        - '192.168.199.115:9100'
+        - '192.168.199.116:9100'
+...
+
+# Start Prometheus:
+./prometheus \
+    --config.file="/data1/tidb/deploy/conf/prometheus.yml" \
+    --web.listen-address=":9090" \
+    --web.external-url="http://192.168.199.113:9090/" \
+    --log.level="info" \
+    --storage.local.path="/data1/tidb/deploy/data.metrics" \
+    --storage.local.retention="360h0m0s"
+```
+
+#### Start Grafana on Node1.
+
+```
+cd grafana-4.1.2-1486989747.linux-x64
+
+# Edit the configuration file:
+
+vi grafana.ini
+
+...
+
+# The http port to use
+http_port = 3000
+
+# The public facing domain name used to access grafana from a browser
+domain = 192.168.199.113
+
+...
+
+# Start the Grafana service:
+./grafana-server \
+    --homepath="/data1/tidb/deploy/opt/grafana" \
+    --config="/data1/tidb/deploy/opt/grafana/conf/grafana.ini"
+```
diff --git a/v1.0/op-guide/configuration.md b/v1.0/op-guide/configuration.md
new file mode 100755
index 0000000000000..197508c013248
--- /dev/null
+++ b/v1.0/op-guide/configuration.md
@@ -0,0 +1,332 @@
+---
+title: Configuration Flags
+category: operations
+---
+
+# Configuration Flags
+
+TiDB, TiKV and PD are configurable using command-line flags and environment variables.
+
+## TiDB
+
+The default TiDB ports are 4000 for client requests and 10080 for status report.
+
+### `--binlog-socket`
+
+- The unix socket file that the TiDB services use for internal connections, such as the Pump service
+- Default: ``
+- For example, you can use "/tmp/pump.sock" to accept the communication of the Pump unix socket file.
+
+### `--cross-join`
+
+- To enable (true) or disable (false) the cross join without any equal conditions
+- Default: true
+- The value can be `true` or `false`. By default, `true` is to enable `join` without any equal conditions (the `Where` field). If you set the value to `false`, the server refuses to run the `join` statement.
+
+### `--host`
+
+- The host address that the TiDB server listens on
+- Default: "0.0.0.0"
+- The TiDB server listens on this address.
+- "0.0.0.0" listens on all network cards. If you have multiple network cards, specify the network card that provides service, such as 192.168.100.113.
+
+### `--join-concurrency int`
+
+- The number of goroutines used when `join` is executed concurrently
+- Default: 5
+- The number depends on the amount of data and data distribution; usually the larger the better, but a larger number means a larger CPU overhead.
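The flags in this section are combined on a single `tidb-server` command line. The sketch below only assembles an example argument string; `build_tidb_args` is a hypothetical helper and the addresses are examples, not required values:

```bash
#!/bin/bash
# Sketch: assemble a typical set of tidb-server arguments from the flags
# described in this document. build_tidb_args is a hypothetical helper.
build_tidb_args() {
  local host=$1 pd_list=$2
  echo "--store=tikv --path=$pd_list --host=$host -P 4000 --log-file=tidb.log"
}

# The assembled arguments would then be passed to the server binary:
#   ./bin/tidb-server $(build_tidb_args 192.168.100.113 "192.168.100.113:2379")
build_tidb_args 192.168.100.113 "192.168.100.113:2379"
```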
+
+### `-L`
+
++ The log level
++ Default: "info"
++ You can choose from debug, info, warn, error, or fatal.
+
+### `--lease`
+
++ The schema lease time in seconds
++ Default: "10"
++ This is the schema lease time that is used in online schema changes. The value will affect the DDL statement running time. Do not change it unless you understand the internal mechanism.
+
+### `--log-file`
+
++ The log file
++ Default: ""
++ If this flag is not set, logs will be written to stderr. Otherwise, logs will be stored in the log file which will be automatically rotated every day.
+
+### `--metrics-addr`
+
++ The Prometheus pushgateway address
++ Default: ""
++ Leaving it empty stops the Prometheus client from pushing.
++ The format is:
+
+  ```
+  --metrics-addr=192.168.100.115:9091
+  ```
+
+### `--metrics-interval`
+
++ The Prometheus client push interval in seconds
++ Default: 0
++ Setting the value to 0 stops the Prometheus client from pushing.
+
+### `-P`
+
++ The listening port of TiDB services
++ Default: "4000"
++ The TiDB server accepts MySQL client requests from this port.
+
+### `--path`
+
+- The path to the data directory for local storage engines like "goleveldb" and "BoltDB"
+- Do not set `--path` for the "memory" storage engine.
+- For a distributed storage engine like TiKV, `--path` specifies the actual PD address. Assuming that you deploy the PD server on 192.168.100.113:2379, 192.168.100.114:2379 and 192.168.100.115:2379, the value of `--path` is "192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379".
+- Default: "/tmp/tidb"
+
+### `--perfschema`
+
++ To enable (true) or disable (false) the performance schema
++ Default: false
++ The Performance Schema provides a way to inspect internal execution of the server at runtime. See [performance schema](http://dev.mysql.com/doc/refman/5.7/en/performance-schema.html) for more information. 
If you enable the performance schema, the performance is affected.
+
+### `--privilege`
+
++ To enable (true) or disable (false) the privilege check (for debugging)
++ Default: true
++ The value can be (true) or (false). (true) is to enable and (false) is to disable. This option is deprecated and will be removed.
+
+### `--proxy-protocol-networks`
+
++ The list of proxy server's IP addresses allowed by the PROXY protocol; if you need to configure multiple addresses, separate them using `,`
++ Default: "" (empty string)
++ Leaving it empty disables the PROXY protocol. The value can be an IP address (192.168.1.50) or CIDR (192.168.1.0/24). `*` means any IP address.
+
+### `--proxy-protocol-header-timeout`
+
++ Timeout for the PROXY protocol header read
++ Default: 5 (seconds)
++ Generally use the default value and do not set its value to 0. The unit is second.
+
+### `--query-log-max-len int`
+
+- The maximum length of SQL statements recorded in the log
+- Default: 2048
+- Overlong requests are truncated when output to the log.
+
+### `--report-status`
+
++ To enable (true) or disable (false) the status report and pprof tool
++ Default: true
++ The value can be (true) or (false). (true) is to enable metrics and pprof. (false) is to disable metrics and pprof.
+
+### `--retry-limit int`
+
+- The maximum number of retries when a transaction encounters conflicts
+- Default: 10
+- A large number of retries affects the TiDB cluster performance.
+
+### `--run-ddl`
+
+- Whether this `tidb-server` runs DDL statements; set this flag when the cluster runs two or more `tidb-server` instances
+- Default: true
+- The value can be (true) or (false). (true) indicates the `tidb-server` runs DDL itself. (false) indicates the `tidb-server` does not run DDL itself.
+
+### `--skip-grant-table`
+
++ To enable anyone to connect without a password and with all privileges
++ Default: false
++ The value can be (true) or (false). 
This option is usually used to reset passwords, and enabling it requires root privileges.
+
+### `--slow-threshold int`
+
+- SQL statements whose execution time exceeds this value are recorded in the log.
+- Default: 300
+- The value can only be an integer (int), and the unit is millisecond.
+
+### `--socket string`
+
++ The unix socket file that the TiDB services use for external connections.
++ Default: ""
++ You can use "/tmp/tidb.sock" to open the unix socket file.
+
+### `--ssl-ca`
+
++ The path to a file in PEM format that contains a list of trusted SSL certificate authorities.
++ Default: ""
++ When this option is specified along with `--ssl-cert` and `--ssl-key`, the server verifies the client's certificate via this CA list if the client provides its certificate accordingly.
++ The secure connection will be established without client verification if the client does not provide a certificate even when this option is set.
+
+### `--ssl-cert`
+
++ The path to an SSL certificate file in PEM format to use for establishing a secure connection.
++ Default: ""
++ When this option is specified along with `--ssl-key`, the server permits but does not require secure connections.
++ If the specified certificate or key is not valid, the server still starts normally but does not permit secure connections.
+
+### `--ssl-key`
+
++ The path to an SSL key file in PEM format to use for establishing a secure connection, namely the private key of the certificate you specified by `--ssl-cert`.
++ Default: ""
++ Currently TiDB does not support keys protected by a passphrase.
+
+### `--status`
+
++ The status report port for the TiDB server
++ Default: "10080"
++ This is used to get server internal data. The data includes [prometheus metrics](https://prometheus.io/) and [pprof](https://golang.org/pkg/net/http/pprof/).
++ Prometheus metrics can be obtained through "http://host:status_port/metrics".
++ Pprof data can be obtained through "http://host:status_port/debug/pprof".
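Once `tidb-server` is running, the status port can be probed like this. The sketch assumes `curl` is available; `probe_status` is a hypothetical helper and 127.0.0.1 is an example host:

```bash
#!/bin/bash
# Sketch: check whether the TiDB status port answers on /metrics.
# probe_status is a hypothetical helper.
probe_status() {
  local host=$1 port=$2
  if curl -s --max-time 2 "http://$host:$port/metrics" | grep -q .; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

probe_status 127.0.0.1 10080
```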
+
+### `--statsLease string`
+
+- The lease time for incrementally scanning full tables and analyzing information such as the data amount and indexes of tables
+- Default: 3s
+- Before you use `--statsLease string`, run `analyze table name` manually. The statistics are updated automatically and stored in TiKV, taking up some memory.
+
+### `--store`
+
++ The storage engine type
++ Default: "goleveldb"
++ You can choose from "memory", "goleveldb", "BoltDB" or "TiKV". The first three are all local storage engines. TiKV is a distributed storage engine.
+
+### `--tcp-keep-alive`
+
+- To enable `keepalive` in the TCP layer of TiDB
+- Default: false
+
+## Placement Driver (PD)
+
+### `--advertise-client-urls`
+
++ The advertise URL list for client traffic from outside
++ Default: ${client-urls}
++ If the client cannot connect to PD through the default listening client URLs, you must manually set the advertise client URLs explicitly.
++ For example, the internal IP address of Docker is 172.17.0.1, while the IP address of the host is 192.168.100.113 and the port mapping is set to `-p 2379:2379`. In this case, you can set `--advertise-client-urls` to "http://192.168.100.113:2379". The client can find this service through "http://192.168.100.113:2379".
+
+### `--advertise-peer-urls`
+
++ The advertise URL list for peer traffic from outside
++ Default: ${peer-urls}
++ If the peer cannot connect to PD through the default listening peer URLs, you must manually set the advertise peer URLs explicitly.
++ For example, the internal IP address of Docker is 172.17.0.1, while the IP address of the host is 192.168.100.113 and the port mapping is set to `-p 2380:2380`. In this case, you can set `--advertise-peer-urls` to "http://192.168.100.113:2380". The other PD nodes can find this service through "http://192.168.100.113:2380". 
+ +### `--client-urls` + ++ The listening URL list for client traffic ++ Default: "http://127.0.0.1:2379" ++ To deploy a cluster, you must use `--client-urls` to specify the IP address of the current host, such as "http://192.168.100.113:2379". If the cluster runs on Docker, specify "http://0.0.0.0:2379" to listen on all interfaces. + +### `--config` + ++ The config file ++ Default: "" ++ Settings specified on the command line override the same settings in the config file. + +### `--data-dir` + ++ The path to the data directory ++ Default: "default.${name}" + +### `--initial-cluster` + ++ The initial cluster configuration for bootstrapping ++ Default: "{name}=http://{advertise-peer-url}" ++ For example, if `name` is "pd", and `advertise-peer-urls` is "http://192.168.100.113:2380", the `initial-cluster` is "pd=http://192.168.100.113:2380". ++ If you need to start three PD servers, the `initial-cluster` might be: + + ``` + pd1=http://192.168.100.113:2380,pd2=http://192.168.100.114:2380,pd3=http://192.168.100.115:2380 + ``` + +### `--join` + ++ Join the cluster dynamically ++ Default: "" ++ If you want to join an existing cluster, you can use `--join="${advertise-client-urls}"`, where `advertise-client-urls` is that of any existing PD; multiple advertise client URLs are separated by commas. + +### `-L` + ++ The log level ++ Default: "info" ++ You can choose from debug, info, warn, error, or fatal. + +### `--log-file` + ++ The log file ++ Default: "" ++ If this flag is not set, logs will be written to stderr. Otherwise, logs will be stored in the log file which will be automatically rotated every day. + +### `--log-rotate` + +- To enable or disable log rotation +- Default: true +- When the value is true, follow the `[log.file]` in PD configuration files. + +### `--name` + ++ The human-readable unique name for this PD member ++ Default: "pd" ++ If you want to start multiple PD servers, you must use a different name for each one.
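Putting the PD bootstrap flags above together, a minimal sketch of assembling the `--initial-cluster` value for three members (the names and addresses are the example values used above, not required values):

```shell
# Each member contributes "name=advertise-peer-url" to --initial-cluster
PD1="pd1=http://192.168.100.113:2380"
PD2="pd2=http://192.168.100.114:2380"
PD3="pd3=http://192.168.100.115:2380"
INITIAL_CLUSTER="${PD1},${PD2},${PD3}"
echo "$INITIAL_CLUSTER"

# Every pd-server is then started with the same --initial-cluster value, e.g.:
# pd-server --name=pd1 \
#           --client-urls="http://192.168.100.113:2379" \
#           --peer-urls="http://192.168.100.113:2380" \
#           --initial-cluster="$INITIAL_CLUSTER"
```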
+ +### `--peer-urls` + ++ The listening URL list for peer traffic ++ Default: "http://127.0.0.1:2380" ++ To deploy a cluster, you must use `--peer-urls` to specify the IP address of the current host, such as "http://192.168.100.113:2380". If the cluster runs on Docker, specify "http://0.0.0.0:2380" to listen on all interfaces. + +## TiKV + +TiKV supports some readable unit conversions for command line parameters. + +- File size (based on byte): KB, MB, GB, TB, PB (or lowercase) +- Time (based on ms): ms, s, m, h + +### `-A, --addr` + ++ The address that the TiKV server listens on ++ Default: "127.0.0.1:20160" ++ To deploy a cluster, you must use `--addr` to specify the IP address of the current host, such as "192.168.100.113:20160". If the cluster runs on Docker, specify "0.0.0.0:20160" to listen on all interfaces. + +### `--advertise-addr` + ++ The server advertise address for client traffic from outside ++ Default: ${addr} ++ If the client cannot connect to TiKV through the default listening address because of Docker or NAT network, you must set the advertise address explicitly. ++ For example, the internal IP address of Docker is 172.17.0.1, while the IP address of the host is 192.168.100.113 and the port mapping is set to `-p 20160:20160`. In this case, you can set `--advertise-addr` to "192.168.100.113:20160". The client can find this service through 192.168.100.113:20160. + +### `-C, --config` + ++ The config file ++ Default: "" ++ Settings specified on the command line override the same settings in the config file. + +### `--capacity` + ++ The store capacity ++ Default: 0 (unlimited) ++ PD uses this flag to determine how to balance the TiKV servers. (Tip: you can use a readable unit such as 10GB instead of 10737418240) + +### `-L, --Log` + ++ The log level ++ Default: "info" ++ You can choose from trace, debug, info, warn, error, or off. + +### `--data-dir` + ++ The path to the data directory ++ Default: "/tmp/tikv/store"
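The readable file-size units listed at the top of this TiKV section can be illustrated with a small sketch; it assumes binary multiples (1KB = 1024 bytes), consistent with the `--capacity` tip above, and the helper `to_bytes` is hypothetical, not part of TiKV:

```shell
# Convert a human-readable size (KB/MB/GB suffix) to plain bytes,
# assuming binary multiples
to_bytes() {
  case "$1" in
    *GB) echo $(( ${1%GB} * 1024 * 1024 * 1024 )) ;;
    *MB) echo $(( ${1%MB} * 1024 * 1024 )) ;;
    *KB) echo $(( ${1%KB} * 1024 )) ;;
    *)   echo "$1" ;;   # already plain bytes
  esac
}

# "10GB" is what you could pass to --capacity instead of the raw byte count
to_bytes 10GB
```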
+ +### `--log-file` + ++ The log file ++ Default: "" ++ If this flag is not set, logs will be written to stderr. Otherwise, logs will be stored in the log file which will be automatically rotated every day. + +### `--pd` + +- The address list of PD servers +- Default: "" +- To make TiKV work, you must use the value of `--pd` to connect the TiKV server to the PD server. Separate multiple PD addresses using commas, for example "192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379". diff --git a/v1.0/op-guide/dashboard-overview-info.md b/v1.0/op-guide/dashboard-overview-info.md new file mode 100755 index 0000000000000..090a5f8938c63 --- /dev/null +++ b/v1.0/op-guide/dashboard-overview-info.md @@ -0,0 +1,41 @@ +--- +title: Key Metrics +category: operations +--- + +# Key Metrics + +If you use Ansible to deploy the TiDB cluster, you can deploy the monitoring system at the same time. See [Overview of the Monitoring Framework](monitor-overview.md) for more information. + +The Grafana dashboard is divided into four sub dashboards: node_exporter, PD, TiKV, and TiDB. There are many metrics there to help you diagnose issues. For routine operations, some of the key metrics are displayed on the Overview dashboard so that you can get an overview of the status of the components and the entire cluster. See the following section for their descriptions: + +## Key metrics description + +Service | Panel Name | Description | Normal Range +---- | ---------------- | ---------------------------------- | -------------- +PD | Storage Capacity | the total storage capacity of the TiDB cluster | +PD | Current Storage Size | the occupied storage capacity of the TiDB cluster | +PD | Store Status -- up store | the number of TiKV nodes that are up | +PD | Store Status -- down store | the number of TiKV nodes that are down | `0`. If the number is bigger than `0`, it means some node(s) are down.
+PD | Store Status -- offline store | the number of TiKV nodes that are manually offline| +PD | Store Status -- Tombstone store | the number of TiKV nodes that are Tombstone| +PD | Current storage usage | the storage occupancy rate of the TiKV cluster | If it exceeds 80%, you need to consider adding more TiKV nodes. +PD | 99% completed cmds duration seconds | the 99th percentile duration to complete a pd-server request| less than 5ms +PD | average completed cmds duration seconds | the average duration to complete a pd-server request | less than 50ms +PD | leader balance ratio | the leader ratio difference of the nodes with the biggest leader ratio and the smallest leader ratio | It is less than 5% for a balanced situation. It becomes bigger when a node is restarting. +PD | region balance ratio | the region ratio difference of the nodes with the biggest region ratio and the smallest region ratio | It is less than 5% for a balanced situation. It becomes bigger when adding or removing a node. +TiDB | handle requests duration seconds | the response time to get TSO from PD| less than 100ms +TiDB | tidb server QPS | the QPS of the cluster | application specific +TiDB | connection count | the number of connections from application servers to the database | Application specific. If the number of connections hops, you need to find out the reasons. If it drops to 0, you can check if the network is broken; if it surges, you need to check the application. +TiDB | statement count | the number of different types of statement within a given time | application specific +TiDB | Query Duration 99th percentile | the 99th percentile query time | +TiKV | 99% & 99.99% scheduler command duration | the 99th percentile and 99.99th percentile scheduler command duration| For 99%, it is less than 50ms; for 99.99%, it is less than 100ms. 
+TiKV | 95% & 99.99% storage async_request duration | the 95th percentile and 99.99th percentile Raft command duration | For 95%, it is less than 50ms; for 99.99%, it is less than 100ms. +TiKV | server report failure message | There might be an issue with the network or the message might not come from this cluster. | If there are many messages containing `unreachable`, there might be an issue with the network. If the message contains `store not match`, the message does not come from this cluster. +TiKV | Vote | the frequency of the Raft vote | Usually, the value only changes when there is a split. If the value of Vote remains high for a long time, the system might have a severe issue and some nodes are not working. +TiKV | 95% and 99% coprocessor request duration | the 95th percentile and the 99th percentile coprocessor request duration | Application specific. Usually, the value does not remain high. +TiKV | Pending task | the number of pending tasks | Except for PD worker, it is not normal if the value is too high. +TiKV | stall | RocksDB stall time | If the value is bigger than 0, it means that RocksDB is too busy, and you need to pay attention to IO and CPU usage. +TiKV | channel full | The channel is full and the threads are too busy. | If the value is bigger than 0, the threads are too busy. +TiKV | 95% send message duration seconds | the 95th percentile message sending time | less than 50ms +TiKV | leader/region | the number of leaders/regions per TiKV server | application specific \ No newline at end of file diff --git a/v1.0/op-guide/docker-compose.md b/v1.0/op-guide/docker-compose.md new file mode 100755 index 0000000000000..ae53741b01cb2 --- /dev/null +++ b/v1.0/op-guide/docker-compose.md @@ -0,0 +1,114 @@ +--- +title: TiDB Docker Compose Deployment +category: operations +--- + +# TiDB Docker Compose Deployment + +This document describes how to quickly deploy TiDB using Docker Compose.
+ +With [Docker Compose](https://docs.docker.com/compose/overview), you can use a YAML file to configure application services in multiple containers. Then, with a single command, you can create and start all the services from your configuration. + +You can use Docker Compose to deploy a TiDB test cluster with a single command. Docker 17.06.0 or later is required. + +## Quick start + +1. Download `tidb-docker-compose`. + + ```bash + git clone https://github.com/pingcap/tidb-docker-compose.git + ``` + +2. Create and start the cluster. + + ```bash + cd tidb-docker-compose && docker-compose up -d + ``` + +3. Access the cluster. + + ```bash + mysql -h 127.0.0.1 -P 4000 -u root + ``` + + Access the Grafana monitoring interface: + + - Default address: + - Default account name: admin + - Default password: admin + + Access the [cluster data visualization interface](https://github.com/pingcap/tidb-vision): + +## Customize the cluster + +In [Quick start](#quick-start), the following components are deployed by default: + +- 3 PD instances, 3 TiKV instances, 1 TiDB instance +- Monitoring components: Prometheus, Pushgateway, Grafana +- Data visualization component: tidb-vision + +To customize the cluster, you can edit the `docker-compose.yml` file directly. It is recommended to generate `docker-compose.yml` using the [Helm](https://helm.sh) template engine, because manual editing is tedious and error-prone. + +1. Install Helm. + + [Helm](https://helm.sh) can be used as a template rendering engine. To use Helm, you only need to download its binary file: + + ```bash + curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash + ``` + + On macOS, you can also install Helm using Homebrew: + + ``` + brew install kubernetes-helm + ``` + +2. Download `tidb-docker-compose`. + + ```bash + git clone https://github.com/pingcap/tidb-docker-compose.git + ``` + +3. Customize the cluster.
+ + ```bash + cd tidb-docker-compose + cp compose/values.yaml values.yaml + vim values.yaml + ``` + + Modify the configuration in `values.yaml`, such as the cluster size, TiDB image version, and so on. + + [tidb-vision](https://github.com/pingcap/tidb-vision) is the data visualization interface of the TiDB cluster, used to visually display the PD scheduling on TiKV data. If you do not need this component, leave `tidbVision` empty. + + For PD, TiKV, TiDB and tidb-vision, you can build Docker images from GitHub source code or local files for development and testing. + + - To build the image of a component from GitHub source code, you need to leave the `image` field empty and set `buildFrom` to `remote`. + - To build PD, TiKV or TiDB images from the locally compiled binary file, you need to leave the `image` field empty, set `buildFrom` to `local` and copy the compiled binary file to the corresponding `pd/bin/pd-server`, `tikv/bin/tikv-server`, `tidb/bin/tidb-server`. + - To build the tidb-vision image from local, you need to leave the `image` field empty, set `buildFrom` to `local` and copy the tidb-vision project to `tidb-vision/tidb-vision`. + +4. Generate the `docker-compose.yml` file. + + ```bash + helm template -f values.yaml compose > generated-docker-compose.yml + ``` + +5. Create and start the cluster using the generated `docker-compose.yml` file. + + ```bash + docker-compose -f generated-docker-compose.yml up -d + ``` + +6. Access the cluster. + + ```bash + mysql -h 127.0.0.1 -P 4000 -u root + ``` + + Access the Grafana monitoring interface: + + - Default address: + - Default account name: admin + - Default password: admin + + If tidb-vision is enabled, you can access the [cluster data visualization interface](https://github.com/tidb-vision): . 
\ No newline at end of file diff --git a/v1.0/op-guide/docker-deployment.md b/v1.0/op-guide/docker-deployment.md new file mode 100755 index 0000000000000..41f54583d1e2a --- /dev/null +++ b/v1.0/op-guide/docker-deployment.md @@ -0,0 +1,202 @@ +--- +title: TiDB Docker Deployment +category: operations +--- + +# Docker Deployment + +This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker. + +To learn more, see [TiDB architecture](../overview.md#tidb-architecture) and [Software and Hardware Requirements](recommendation.md). + +## Preparation + +Before you start, make sure that you have: + ++ Installed the latest version of [Docker](https://www.docker.com/products/docker) ++ Pulled the latest images of TiDB, TiKV and PD from [Docker Hub](https://hub.docker.com). If not, pull the images using the following commands: + + ```bash + docker pull pingcap/tidb:latest + docker pull pingcap/tikv:latest + docker pull pingcap/pd:latest + ``` + +## Multi-node deployment + +Assume we have 6 machines with the following details: + +| Host Name | IP | Services | Data Path | +| --------- | ------------- | ---------- | --------- | +| **host1** | 192.168.1.101 | PD1 & TiDB | /data | +| **host2** | 192.168.1.102 | PD2 | /data | +| **host3** | 192.168.1.103 | PD3 | /data | +| **host4** | 192.168.1.104 | TiKV1 | /data | +| **host5** | 192.168.1.105 | TiKV2 | /data | +| **host6** | 192.168.1.106 | TiKV3 | /data | + +### 1.
Start PD + +Start PD1 on the **host1** +```bash +docker run -d --name pd1 \ + -p 2379:2379 \ + -p 2380:2380 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/pd:latest \ + --name="pd1" \ + --data-dir="/data/pd1" \ + --client-urls="http://0.0.0.0:2379" \ + --advertise-client-urls="http://192.168.1.101:2379" \ + --peer-urls="http://0.0.0.0:2380" \ + --advertise-peer-urls="http://192.168.1.101:2380" \ + --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" +``` + +Start PD2 on the **host2** +```bash +docker run -d --name pd2 \ + -p 2379:2379 \ + -p 2380:2380 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/pd:latest \ + --name="pd2" \ + --data-dir="/data/pd2" \ + --client-urls="http://0.0.0.0:2379" \ + --advertise-client-urls="http://192.168.1.102:2379" \ + --peer-urls="http://0.0.0.0:2380" \ + --advertise-peer-urls="http://192.168.1.102:2380" \ + --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" +``` + +Start PD3 on the **host3** +```bash +docker run -d --name pd3 \ + -p 2379:2379 \ + -p 2380:2380 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/pd:latest \ + --name="pd3" \ + --data-dir="/data/pd3" \ + --client-urls="http://0.0.0.0:2379" \ + --advertise-client-urls="http://192.168.1.103:2379" \ + --peer-urls="http://0.0.0.0:2380" \ + --advertise-peer-urls="http://192.168.1.103:2380" \ + --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" +``` + +### 2. 
Start TiKV + +Start TiKV1 on the **host4** +```bash +docker run -d --name tikv1 \ + -p 20160:20160 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/tikv:latest \ + --addr="0.0.0.0:20160" \ + --advertise-addr="192.168.1.104:20160" \ + --data-dir="/data/tikv1" \ + --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" +``` + +Start TiKV2 on the **host5** +```bash +docker run -d --name tikv2 \ + -p 20160:20160 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/tikv:latest \ + --addr="0.0.0.0:20160" \ + --advertise-addr="192.168.1.105:20160" \ + --data-dir="/data/tikv2" \ + --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" +``` + +Start TiKV3 on the **host6** +```bash +docker run -d --name tikv3 \ + -p 20160:20160 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/tikv:latest \ + --addr="0.0.0.0:20160" \ + --advertise-addr="192.168.1.106:20160" \ + --data-dir="/data/tikv3" \ + --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" +``` + +### 3. Start TiDB + +Start TiDB on the **host1** + +```bash +docker run -d --name tidb \ + -p 4000:4000 \ + -p 10080:10080 \ + -v /etc/localtime:/etc/localtime:ro \ + pingcap/tidb:latest \ + --store=tikv \ + --path="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" +``` + +### 4. Use the MySQL client to connect to TiDB + +Install the [MySQL client](http://dev.mysql.com/downloads/mysql/) on **host1** and run: + +```bash +$ mysql -h 127.0.0.1 -P 4000 -u root -D test +mysql> show databases; ++--------------------+ +| Database | ++--------------------+ +| INFORMATION_SCHEMA | +| PERFORMANCE_SCHEMA | +| mysql | +| test | ++--------------------+ +4 rows in set (0.00 sec) +``` + +### How to customize the configuration file + +The TiKV and PD can be started with a specified configuration file, which includes some advanced parameters, for the performance tuning. 
+ +Assume that the path to configuration file of PD and TiKV on the host is `/path/to/config/pd.toml` and `/path/to/config/tikv.toml` + +You can start TiKV and PD as follows: + +```bash +docker run -d --name tikv1 \ + -p 20160:20160 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + -v /path/to/config/tikv.toml:/tikv.toml:ro \ + pingcap/tikv:latest \ + --addr="0.0.0.0:20160" \ + --advertise-addr="192.168.1.104:20160" \ + --data-dir="/data/tikv1" \ + --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" \ + --config="/tikv.toml" +``` + +```bash +docker run -d --name pd1 \ + -p 2379:2379 \ + -p 2380:2380 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + -v /path/to/config/pd.toml:/pd.toml:ro \ + pingcap/pd:latest \ + --name="pd1" \ + --data-dir="/data/pd1" \ + --client-urls="http://0.0.0.0:2379" \ + --advertise-client-urls="http://192.168.1.101:2379" \ + --peer-urls="http://0.0.0.0:2380" \ + --advertise-peer-urls="http://192.168.1.101:2380" \ + --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" \ + --config="/pd.toml" +``` diff --git a/v1.0/op-guide/generate-self-signed-certificates.md b/v1.0/op-guide/generate-self-signed-certificates.md new file mode 100755 index 0000000000000..d6f662e626c54 --- /dev/null +++ b/v1.0/op-guide/generate-self-signed-certificates.md @@ -0,0 +1,154 @@ +--- +title: Generate Self-signed Certificates +category: deployment +--- + +# Generate Self-signed Certificates + +## Overview + +This document describes how to generate self-signed certificates using `cfssl`. 
+ +Assume that the topology of the instance cluster is as follows: + +| Name | Host IP | Services | +| ----- | ----------- | ---------- | +| node1 | 172.16.10.1 | PD1, TiDB1 | +| node2 | 172.16.10.2 | PD2, TiDB2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1 | +| node5 | 172.16.10.5 | TiKV2 | +| node6 | 172.16.10.6 | TiKV3 | + +## Download `cfssl` + +Assume that the host is x86_64 Linux: + +```bash +mkdir ~/bin +curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 +curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 +chmod +x ~/bin/{cfssl,cfssljson} +export PATH=$PATH:~/bin +``` + +## Initialize the certificate authority + +To make it easy for modification later, generate the default configuration of `cfssl`: + +```bash +mkdir ~/cfssl +cd ~/cfssl +cfssl print-defaults config > ca-config.json +cfssl print-defaults csr > ca-csr.json +``` + +## Generate certificates + +### Certificates description + +- tidb-server certificate: used by TiDB to authenticate TiDB for other components and clients +- tikv-server certificate: used by TiKV to authenticate TiKV for other components and clients +- pd-server certificate: used by PD to authenticate PD for other components and clients +- client certificate: used to authenticate the clients from PD, TiKV and TiDB, such as `pd-ctl`, `tikv-ctl` and `pd-recover` + +### Configure the CA option + +Edit `ca-config.json` according to your need: + +```json +{ + "signing": { + "default": { + "expiry": "43800h" + }, + "profiles": { + "server": { + "expiry": "43800h", + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ] + }, + "client": { + "expiry": "43800h", + "usages": [ + "signing", + "key encipherment", + "client auth" + ] + } + } + } +} +``` + +Edit `ca-csr.json` according to your need: + +```json +{ + "CN": "My own CA", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [ + { + "C": "CN", + "L": "Beijing", + "O": "PingCAP", + "ST": 
"Beijing" + } + ] +} +``` + +### Generate the CA certificate + +```bash +cfssl gencert -initca ca-csr.json | cfssljson -bare ca - +``` + +The command above generates the following files: + +```bash +ca-key.pem +ca.csr +ca.pem +``` + +### Generate the server certificate + +The IP address of all components and `127.0.0.1` are included in `hostname`. + +```bash +echo '{"CN":"tidb-server","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="172.16.10.1,172.16.10.2,127.0.0.1" - | cfssljson -bare tidb-server + +echo '{"CN":"tikv-server","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="172.16.10.4,172.16.10.5,172.16.10.6,127.0.0.1" - | cfssljson -bare tikv-server + +echo '{"CN":"pd-server","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="172.16.10.1,172.16.10.2,172.16.10.3,127.0.0.1" - | cfssljson -bare pd-server +``` + +The command above generates the following files: + +```Bash +tidb-server-key.pem tikv-server-key.pem pd-server-key.pem +tidb-server.csr tikv-server.csr pd-server.csr +tidb-server.pem tikv-server.pem pd-server.pem +``` + +### Generate the client certificate + +```bash +echo '{"CN":"client","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client -hostname="" - | cfssljson -bare client +``` + +The command above generates the following files: + +```bash +client-key.pem +client.csr +client.pem +``` \ No newline at end of file diff --git a/v1.0/op-guide/history-read.md b/v1.0/op-guide/history-read.md new file mode 100755 index 0000000000000..95002ace05c92 --- /dev/null +++ b/v1.0/op-guide/history-read.md @@ -0,0 +1,164 @@ +--- +title: Reading Data from History Versions +category: advanced +--- + +# Reading 
Data from History Versions + +This document describes how TiDB reads data from history versions, how TiDB manages the data versions, as well as an example to show how to use the feature. + +## Feature description + +TiDB implements a feature to read history data using the standard SQL interface directly, without special clients or drivers. By using this feature, +- Even when data is updated or removed, its history versions can be read using the SQL interface. +- Even if the table structure changes after the data is updated, TiDB can use the old structure to read the history data. + +## How TiDB reads data from history versions + +The `tidb_snapshot` system variable is introduced to support reading history data. About the `tidb_snapshot` variable: + +- The variable is valid in the `Session` scope. +- Its value can be modified using the `Set` statement. +- The data type for the variable is text. +- The variable records time in the following format: "2016-10-08 16:45:26.999". Generally, the time can be set with second precision, as in "2016-10-08 16:45:26". +- When the variable is set, TiDB creates a snapshot using its value as the timestamp. Only the data structure is created, so there is no overhead. After that, all the `Select` operations will read data from this snapshot. + +> **Note:** Because the timestamp in TiDB transactions is allocated by Placement Driver (PD), the version of the stored data is also marked based on the timestamp allocated by PD. When a snapshot is created, the version number is based on the value of the `tidb_snapshot` variable. If there is a large difference between the local time of the TiDB server and the PD server, use the time of the PD server. + +After reading data from history versions, you can read data from the latest version by ending the current session or using the `Set` statement to set the value of the `tidb_snapshot` variable to "" (empty string).
+ +## How TiDB manages the data versions + +TiDB implements Multi-Version Concurrency Control (MVCC) to manage data versions. The history versions of data are kept because each update / removal creates a new version of the data object instead of updating / removing the data object in-place. But not all the versions are kept. If the versions are older than a specific time, they will be removed completely to reduce the storage occupancy and the performance overhead caused by too many history versions. + +In TiDB, Garbage Collection (GC) runs periodically to remove the obsolete data versions. GC is triggered in the following way: There is a `gc_worker` goroutine running in the background of each TiDB server. In a cluster with multiple TiDB servers, one of the `gc_worker` goroutines will be automatically selected to be the leader. The leader is responsible for maintaining the GC state and sends GC commands to each TiKV region leader. + +The running record of GC is recorded in the system table of `mysql.tidb` as follows and can be monitored and configured using the SQL statements: + +``` +mysql> select variable_name, variable_value from mysql.tidb; ++-----------------------+----------------------------+ +| variable_name | variable_value | ++-----------------------+----------------------------+ +| bootstrapped | True | +| tikv_gc_leader_uuid | 55daa0dfc9c0006 | +| tikv_gc_leader_desc | host:pingcap-pc5 pid:10549 | +| tikv_gc_leader_lease | 20160927-13:18:28 +0800 CST| +| tikv_gc_run_interval | 10m0s | +| tikv_gc_life_time | 10m0s | +| tikv_gc_last_run_time | 20160927-13:13:28 +0800 CST| +| tikv_gc_safe_point | 20160927-13:03:28 +0800 CST| ++-----------------------+----------------------------+ +7 rows in set (0.00 sec) +``` + +Pay special attention to the following two rows: + +- `tikv_gc_life_time`: This row is to configure the retention time of the history version and its default value is 10m. You can use SQL statements to configure it. 
For example, if you want all the data within one day to be readable, set this row to 24h by using the `update mysql.tidb set variable_value='24h' where variable_name='tikv_gc_life_time'` statement. The format is: "24h", "2h30m", "2.5h". The unit of time can be: "h", "m", "s". + +> **Note:** If your data is updated very frequently, the following issues might occur if the value of `tikv_gc_life_time` is set too large, such as days or months: +> +> - The more versions of the data, the more disk storage is occupied. +> - A large number of history versions might slow down queries, especially range queries like `select count(*) from t`. +> - If the value of the `tikv_gc_life_time` variable is suddenly changed to a smaller value while the database is running, it might lead to the removal of large amounts of history data and cause a huge I/O burden. + +- `tikv_gc_safe_point`: This row records the current safePoint. You can safely create a snapshot to read the history data using a timestamp that is later than the safePoint. The safePoint automatically updates every time GC runs. + +## Example + +1. At the initial stage, create a table and insert several rows of data: + + ```sql + mysql> create table t (c int); + Query OK, 0 rows affected (0.01 sec) + + mysql> insert into t values (1), (2), (3); + Query OK, 3 rows affected (0.00 sec) + ``` + +2. View the data in the table: + + ```sql + mysql> select * from t; + +------+ + | c | + +------+ + | 1 | + | 2 | + | 3 | + +------+ + 3 rows in set (0.00 sec) + ``` + +3. View the current time: + + ```sql + mysql> select now(); + +---------------------+ + | now() | + +---------------------+ + | 2016-10-08 16:45:26 | + +---------------------+ + 1 row in set (0.00 sec) + ``` + +4. Update the data in one row: + + ```sql + mysql> update t set c=22 where c=2; + Query OK, 1 row affected (0.00 sec) + ``` + +5.
Make sure the data is updated: + + ```sql + mysql> select * from t; + +------+ + | c | + +------+ + | 1 | + | 22 | + | 3 | + +------+ + 3 rows in set (0.00 sec) + ``` + +6. Set the session-scoped `tidb_snapshot` variable so that the latest version of the data before the specified time can be read. + + > **Note:** In this example, the value is set to the time before the update operation. + + ```sql + mysql> set @@tidb_snapshot="2016-10-08 16:45:26"; + Query OK, 0 rows affected (0.00 sec) + ``` + **Result:** The following statement reads the data as it was before the update operation, that is, the history data. + + ```sql + mysql> select * from t; + +------+ + | c | + +------+ + | 1 | + | 2 | + | 3 | + +------+ + 3 rows in set (0.00 sec) + ``` + +7. Set the `tidb_snapshot` variable to "" (empty string) and you can read the data from the latest version: + + ```sql + mysql> set @@tidb_snapshot=""; + Query OK, 0 rows affected (0.00 sec) + ``` + + ```sql + mysql> select * from t; + +------+ + | c | + +------+ + | 1 | + | 22 | + | 3 | + +------+ + 3 rows in set (0.00 sec) + ``` \ No newline at end of file diff --git a/v1.0/op-guide/horizontal-scale.md b/v1.0/op-guide/horizontal-scale.md new file mode 100755 index 0000000000000..28ceae77e763a --- /dev/null +++ b/v1.0/op-guide/horizontal-scale.md @@ -0,0 +1,120 @@ +--- +title: Scale a TiDB cluster +category: operations +--- + +# Scale a TiDB cluster + +## Overview + +The capacity of a TiDB cluster can be increased or reduced without affecting online services. + +The following part shows you how to add or delete PD, TiKV or TiDB nodes. + +For pd-ctl usage, see [PD Control User Guide](../tools/pd-control.md).
+ +## PD + +Assume we have three PD servers with the following details: + +| Name | ClientUrls | PeerUrls | +|:-----|:------------------|:------------------| +| pd1 | http://host1:2379 | http://host1:2380 | +| pd2 | http://host2:2379 | http://host2:2380 | +| pd3 | http://host3:2379 | http://host3:2380 | + +Get the information about the existing PD nodes through pd-ctl: + +```bash +./pd-ctl -u http://host1:2379 +>> member +``` + +### Add a node dynamically + +Add a new PD server to the current PD cluster by using the parameter `join`. +To add `pd4`, you just need to specify the client URL of any PD server in the PD cluster in the parameter `--join`, like: + +```bash +./bin/pd-server --name=pd4 \ + --client-urls="http://host4:2379" \ + --peer-urls="http://host4:2380" \ + --join="http://host1:2379" +``` + +### Delete a node dynamically + +Delete `pd4` through pd-ctl: + +```bash +./pd-ctl -u http://host1:2379 +>> member delete pd4 +``` + +### Migrate a node dynamically + +If you want to migrate a node to a new machine, you need to first add a node on the new machine and then delete the node on the old machine. +Because you can migrate only one node at a time, if you want to migrate multiple nodes, you need to repeat the above steps until you have migrated all nodes. After completing each step, you can verify the process by checking the information of all nodes. + +## TiKV + +Get the information about the existing TiKV nodes through pd-ctl: + +```bash +./pd-ctl -u http://host1:2379 +>> store +``` + +### Add a node dynamically + +It is very easy to add a new TiKV server dynamically. You just need to start a TiKV server on the new machine. +The newly started TiKV server automatically registers with the existing PD of the cluster. To reduce the pressure on the existing TiKV servers, PD balances the load automatically by gradually migrating some data to the new TiKV server.
+
+### Delete a node dynamically
+
+To safely delete (take offline) a TiKV server, you need to inform PD in advance. After that, PD can migrate the data on this TiKV server to other TiKV servers, ensuring that the data has enough replicas.
+
+Assume that you need to delete the TiKV server whose store id is 1. You can complete this through pd-ctl:
+
+```bash
+./pd-ctl -u http://host1:2379
+>> store delete 1
+```
+
+Then you can check the state of this TiKV:
+
+```bash
+./pd-ctl -u http://host1:2379
+>> store 1
+{
+  "store": {
+    "id": 1,
+    "address": "127.0.0.1:21060",
+    "state": 1,
+    "state_name": "Offline"
+  },
+  "status": {
+    ...
+  }
+}
+```
+
+You can verify the state of this store using `state_name`:
+
+ - `state_name=Up`: This store is in service.
+ - `state_name=Disconnected`: The heartbeats of this store cannot be detected currently, which might be caused by a failure or network interruption.
+ - `state_name=Down`: PD has not received heartbeats from the TiKV store for more than an hour (the time can be configured using `max-down-time`). At this time, PD adds a replica for the data on this store.
+ - `state_name=Offline`: This store is shutting down, but it is still in service.
+ - `state_name=Tombstone`: This store is shut down and has no data on it, so the instance can be deleted.
+
+### Migrate a node dynamically
+
+To migrate TiKV servers to a new machine, you also need to add nodes on the new machine and then take all the nodes on the old machine offline.
+In the process of migration, you can first add all the machines of the new cluster to the existing cluster, and then take the old nodes offline one by one.
+To verify whether a node has been taken offline, check the state information of the node in process. After verifying, take the next node offline.
+
+## TiDB
+
+TiDB is a stateless server, which means it can be added or deleted directly.
+Note that if you deploy a proxy (such as HAProxy) in front of TiDB, you need to update the proxy configuration and reload it.
diff --git a/v1.0/op-guide/location-awareness.md b/v1.0/op-guide/location-awareness.md
new file mode 100755
index 0000000000000..2752f4d8f0ba9
--- /dev/null
+++ b/v1.0/op-guide/location-awareness.md
@@ -0,0 +1,87 @@
+---
+title: Cross-Region Deployment
+category: operations
+---
+
+# Cross-Region Deployment
+
+## Overview
+
+PD schedules according to the topology of the TiKV cluster to maximize the TiKV cluster's capability for disaster recovery.
+
+Before you begin, see [Ansible Deployment (Recommended)](ansible-deployment.md) and [Docker Deployment](docker-deployment.md).
+
+## TiKV reports the topological information
+
+TiKV reports the topological information to PD according to the startup parameter or configuration of TiKV.
+
+Assuming that the topology has three levels: zone > rack > host, use labels to specify the following information:
+
+Startup parameter:
+
+```
+tikv-server --labels zone=<zone>,rack=<rack>,host=<host>
+```
+
+Configuration:
+
+``` toml
+[server]
+labels = "zone=<zone>,rack=<rack>,host=<host>"
+```
+
+## PD understands the TiKV topology
+
+PD gets the topology of the TiKV cluster through the PD configuration.
+
+``` toml
+[replication]
+max-replicas = 3
+location-labels = ["zone", "rack", "host"]
+```
+
+`location-labels` needs to correspond to the TiKV `labels` names so that PD can understand that these `labels` represent the TiKV topology.
+
+## PD schedules based on the TiKV topology
+
+PD makes optimal scheduling decisions according to the topological information. You just need to decide what kind of topology achieves the desired effect.
+
+If you use 3 replicas and hope that everything still works well when a data zone goes down, you need at least 4 data zones.
+(Theoretically, three data zones are feasible, but the current implementation cannot guarantee it.)
+
+Assume that we have 4 data zones, each zone has 2 racks, and each rack has 2 hosts.
+We can start one TiKV instance on each host:
+
+```
+# zone=z1
+tikv-server --labels zone=z1,rack=r1,host=h1
+tikv-server --labels zone=z1,rack=r1,host=h2
+tikv-server --labels zone=z1,rack=r2,host=h1
+tikv-server --labels zone=z1,rack=r2,host=h2
+
+# zone=z2
+tikv-server --labels zone=z2,rack=r1,host=h1
+tikv-server --labels zone=z2,rack=r1,host=h2
+tikv-server --labels zone=z2,rack=r2,host=h1
+tikv-server --labels zone=z2,rack=r2,host=h2
+
+# zone=z3
+tikv-server --labels zone=z3,rack=r1,host=h1
+tikv-server --labels zone=z3,rack=r1,host=h2
+tikv-server --labels zone=z3,rack=r2,host=h1
+tikv-server --labels zone=z3,rack=r2,host=h2
+
+# zone=z4
+tikv-server --labels zone=z4,rack=r1,host=h1
+tikv-server --labels zone=z4,rack=r1,host=h2
+tikv-server --labels zone=z4,rack=r2,host=h1
+tikv-server --labels zone=z4,rack=r2,host=h2
+```
+
+In other words, 16 TiKV instances are distributed across 4 data zones, 8 racks, and 16 machines.
+
+In this case, PD schedules different replicas of each piece of data to different data zones.
+
+- If one of the data zones goes down, everything still works well.
+- If the data zone cannot recover within a period of time, PD removes the replicas from this data zone.
+
+To sum up, PD maximizes the disaster recovery of the cluster according to the current topology. Therefore, if you want to reach a certain level of disaster recovery, deploy machines in different sites according to the topology. The number of machines must be more than the number of `max-replicas`.
diff --git a/v1.0/op-guide/migration-overview.md b/v1.0/op-guide/migration-overview.md
new file mode 100755
index 0000000000000..6ac961dca7f90
--- /dev/null
+++ b/v1.0/op-guide/migration-overview.md
@@ -0,0 +1,140 @@
+---
+title: Migration Overview
+category: operations
+---
+
+# Migration Overview
+
+## Overview
+
+This document describes how to migrate data from MySQL to TiDB in detail.
+
+See the following for the assumed MySQL and TiDB server information:
+
+|Name|Address|Port|User|Password|
+|----|-------|----|----|--------|
+|MySQL|127.0.0.1|3306|root|*|
+|TiDB|127.0.0.1|4000|root|*|
+
+## Scenarios
+
++ To import all the historical data. This needs the following tools:
+    - `Checker`: to check if the schema is compatible with TiDB.
+    - `Mydumper`: to export data from MySQL.
+    - `Loader`: to import data to TiDB.
+
++ To incrementally synchronise data after all the historical data is imported. This needs the following tools:
+    - `Checker`: to check if the schema is compatible with TiDB.
+    - `Mydumper`: to export data from MySQL.
+    - `Loader`: to import data to TiDB.
+    - `Syncer`: to incrementally synchronise data from MySQL to TiDB.
+
+    > **Note:** To incrementally synchronise data from MySQL to TiDB, the binary logging (binlog) must be enabled in MySQL and must use the `row` format.
+
+### Enable binary logging (binlog) in MySQL
+
+Before using the `syncer` tool, make sure:
+
++ Binlog is enabled in MySQL. See [Setting the Replication Master Configuration](http://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterbaseconfig.html).
+
++ Binlog must use the `row` format, which is the recommended binlog format in MySQL 5.7. It can be configured using the following statement:
+
+    ```sql
+    SET GLOBAL binlog_format = ROW;
+    ```
+
+## Use the `checker` tool to check the schema
+
+Before migrating, you can use the `checker` tool in TiDB to check if TiDB supports the table schema of the data to be migrated. If the `checker` fails to check a certain table schema, it means that the table is not currently supported by TiDB and therefore the data in the table cannot be migrated.
+
+See [Download the TiDB toolset](#download-the-tidb-toolset-linux) to download the `checker` tool.
+
+### Download the TiDB toolset (Linux)
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz
+cd tidb-enterprise-tools-latest-linux-amd64
+```
+
+### A sample of using the `checker` tool
+
+1. Create several tables in the `test` database in MySQL and insert data.
+
+    ```sql
+    USE test;
+    CREATE TABLE t1 (id INT, age INT, PRIMARY KEY(id)) ENGINE=InnoDB;
+    CREATE TABLE t2 (id INT, name VARCHAR(256), PRIMARY KEY(id)) ENGINE=InnoDB;
+
+    INSERT INTO t1 VALUES (1, 1), (2, 2), (3, 3);
+    INSERT INTO t2 VALUES (1, "a"), (2, "b"), (3, "c");
+    ```
+
+2. Use the `checker` tool to check all the tables in the `test` database.
+
+    ```bash
+    ./bin/checker -host 127.0.0.1 -port 3306 -user root test
+    2016/10/27 13:11:49 checker.go:48: [info] Checking database test
+    2016/10/27 13:11:49 main.go:37: [info] Database DSN: root:@tcp(127.0.0.1:3306)/test?charset=utf8
+    2016/10/27 13:11:49 checker.go:63: [info] Checking table t1
+    2016/10/27 13:11:49 checker.go:69: [info] Check table t1 succ
+    2016/10/27 13:11:49 checker.go:63: [info] Checking table t2
+    2016/10/27 13:11:49 checker.go:69: [info] Check table t2 succ
+    ```
+
+3. Use the `checker` tool to check one of the tables in the `test` database.
+
+    **Note:** Assume that you need to migrate only the `t1` table in this sample.
+
+    ```bash
+    ./bin/checker -host 127.0.0.1 -port 3306 -user root test t1
+    2016/10/27 13:13:56 checker.go:48: [info] Checking database test
+    2016/10/27 13:13:56 main.go:37: [info] Database DSN: root:@tcp(127.0.0.1:3306)/test?charset=utf8
+    2016/10/27 13:13:56 checker.go:63: [info] Checking table t1
+    2016/10/27 13:13:56 checker.go:69: [info] Check table t1 succ
+    Check database succ!
+ ``` + +### A sample of a table that cannot be migrated + +1. Create the following `t_error` table in MySQL: + + ```sql + CREATE TABLE t_error ( a INT NOT NULL, PRIMARY KEY (a)) + ENGINE=InnoDB TABLESPACE ts1 + PARTITION BY RANGE (a) PARTITIONS 3 ( + PARTITION P1 VALUES LESS THAN (2), + PARTITION P2 VALUES LESS THAN (4) TABLESPACE ts2, + PARTITION P3 VALUES LESS THAN (6) TABLESPACE ts3); + ``` +2. Use the `checker` tool to check the table. If the following error is displayed, the `t_error` table cannot be migrated. + + ```bash + ./bin/checker -host 127.0.0.1 -port 3306 -user root test t_error + 2017/08/04 11:14:35 checker.go:48: [info] Checking database test + 2017/08/04 11:14:35 main.go:39: [info] Database DSN: root:@tcp(127.0.0.1:3306)/test?charset=utf8 + 2017/08/04 11:14:35 checker.go:63: [info] Checking table t1 + 2017/08/04 11:14:35 checker.go:67: [error] Check table t1 failed with err: line 3 column 29 near " ENGINE=InnoDB DEFAULT CHARSET=latin1 + /*!50100 PARTITION BY RANGE (a) + (PARTITION P1 VALUES LESS THAN (2) ENGINE = InnoDB, + PARTITION P2 VALUES LESS THAN (4) TABLESPACE = ts2 ENGINE = InnoDB, + PARTITION P3 VALUES LESS THAN (6) TABLESPACE = ts3 ENGINE = InnoDB) */" (total length 354) + github.com/pingcap/tidb/parser/yy_parser.go:96: + github.com/pingcap/tidb/parser/yy_parser.go:109: + /home/jenkins/workspace/build_tidb_tools_master/go/src/github.com/pingcap/tidb-tools/checker/checker.go:122: parse CREATE TABLE `t1` ( + `a` int(11) NOT NULL, + PRIMARY KEY (`a`) + ) /*!50100 TABLESPACE ts1 */ ENGINE=InnoDB DEFAULT CHARSET=latin1 + /*!50100 PARTITION BY RANGE (a) + (PARTITION P1 VALUES LESS THAN (2) ENGINE = InnoDB, + PARTITION P2 VALUES LESS THAN (4) TABLESPACE = ts2 ENGINE = InnoDB, + PARTITION P3 VALUES LESS THAN (6) TABLESPACE = ts3 ENGINE = InnoDB) */ error + /home/jenkins/workspace/build_tidb_tools_master/go/src/github.com/pingcap/tidb-tools/checker/checker.go:114: + 2017/08/04 11:14:35 main.go:83: [error] Check database test with 1 errors and 0 
warnings.
+    ```
diff --git a/v1.0/op-guide/migration.md b/v1.0/op-guide/migration.md
new file mode 100755
index 0000000000000..7c6373f66884a
--- /dev/null
+++ b/v1.0/op-guide/migration.md
@@ -0,0 +1,254 @@
+---
+title: Migrate Data from MySQL to TiDB
+category: operations
+---
+
+# Migrate Data from MySQL to TiDB
+
+## Use the `mydumper` / `loader` tool to export and import all the data
+
+You can use `mydumper` to export data from MySQL and `loader` to import the data into TiDB.
+
+> **Note:** Although TiDB also supports the official `mysqldump` tool from MySQL for data migration, it is not recommended: its performance is much lower than that of `mydumper` / `loader`, and it takes much more time to migrate large amounts of data. `mydumper`/`loader` is more powerful. For more information, see [https://github.com/maxbube/mydumper](https://github.com/maxbube/mydumper).
+
+### Export data from MySQL
+
+Use the `mydumper` tool to export data from MySQL by using the following command:
+
+```bash
+./bin/mydumper -h 127.0.0.1 -P 3306 -u root -t 16 -F 64 -B test -T t1,t2 --skip-tz-utc -o ./var/test
+```
+
+In this command,
+
+- `-B test`: means the data is exported from the `test` database.
+- `-T t1,t2`: means only the `t1` and `t2` tables are exported.
+- `-t 16`: means 16 threads are used to export the data.
+- `-F 64`: means a table is partitioned into chunks and one chunk is 64MB.
+- `--skip-tz-utc`: means to ignore any inconsistency of time zone settings between MySQL and the exporting machine, and to disable automatic time zone conversion.
+
+> **Note**: On cloud platforms that restrict the `super privilege`, such as Aliyun, add the `--no-locks` parameter to the command. Otherwise, you might get an error message saying that you do not have the privilege.
+
+### Import data to TiDB
+
+Use `loader` to import the data from MySQL to TiDB. See [Loader instructions](./tools/loader.md) for more information.
+
+```bash
+./bin/loader -h 127.0.0.1 -u root -P 4000 -t 32 -d ./var/test
+```
+
+After the data is imported, you can view the data in TiDB using the MySQL client:
+
+```sql
+mysql -h127.0.0.1 -P4000 -uroot
+
+mysql> show tables;
++----------------+
+| Tables_in_test |
++----------------+
+| t1             |
+| t2             |
++----------------+
+
+mysql> select * from t1;
++----+------+
+| id | age  |
++----+------+
+| 1  | 1    |
+| 2  | 2    |
+| 3  | 3    |
++----+------+
+
+mysql> select * from t2;
++----+------+
+| id | name |
++----+------+
+| 1  | a    |
+| 2  | b    |
+| 3  | c    |
++----+------+
+```
+
+### Best practice
+
+To migrate data quickly, especially for a huge amount of data, you can refer to the following recommendations.
+
+- Keep each exported data file as small as possible; it is recommended to keep it within 64M. You can use the `-F` parameter to set the value.
+- You can adjust the `-t` parameter of `loader` based on the number and the load of TiKV instances. For example, if there are three TiKV instances, `-t` can be set to 3 * (1 ~ n). If the load of TiKV is too high and the log `backoffer.maxSleep 15000ms is exceeded` is displayed many times, decrease the value of `-t`; otherwise, increase it.
+
+### A sample and the configuration
+
+- The total size of the exported files is 214G. A single table has 8 columns and 2 billion rows.
+- The cluster topology:
+    - 12 TiKV instances: 4 nodes, 3 TiKV instances per node
+    - 4 TiDB instances
+    - 3 PD instances
+- The configuration of each node:
+    - CPU: Intel Xeon E5-2670 v3 @ 2.30GHz
+    - 48 vCPU [2 x 12 physical cores]
+    - Memory: 128G
+    - Disk: sda [RAID 10, 300G], sdb [RAID 5, 2T]
+    - Operating System: CentOS 7.3
+- The `-F` parameter of `mydumper` is set to 16 and the `-t` parameter of `loader` is set to 64.
+
+**Results**: It takes 11 hours to import all the data, which is 19.4G/hour.
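The `-t` rule of thumb above is simple arithmetic; as a sketch (the helper name and the multiplier argument are illustrative, not part of `loader` itself):

```shell
# Suggested loader -t value: number of TiKV instances * k, where k = 1..n.
# Decrease the result if TiKV repeatedly logs
# "backoffer.maxSleep 15000ms is exceeded"; otherwise increase it.
suggest_loader_threads() {
    local tikv_instances="$1" k="$2"
    echo $(( tikv_instances * k ))
}

suggest_loader_threads 3 2   # 3 TiKV instances, k=2 -> prints 6
```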
+
+## Use the `syncer` tool to import data incrementally (optional)
+
+The previous section introduces how to import all the history data from MySQL to TiDB using `mydumper`/`loader`. However, this is not applicable if the data in MySQL is updated after the migration and you want to import the updates quickly.
+
+Therefore, TiDB provides the `syncer` tool for incremental data import from MySQL to TiDB.
+
+See [Download the TiDB enterprise toolset](#download-the-tidb-enterprise-toolset-linux) to download the `syncer` tool.
+
+### Download the TiDB enterprise toolset (Linux)
+
+```bash
+# Download the enterprise tool package.
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz
+cd tidb-enterprise-tools-latest-linux-amd64
+```
+
+Assume that the data from `t1` and `t2` has already been imported to TiDB using `mydumper`/`loader`, and that we now want any updates to these two tables to be synchronised to TiDB in real time.
+
+### Obtain the position to synchronise
+
+The data exported from MySQL contains a metadata file which includes the position information. Take the following metadata information as an example:
+
+```
+Started dump at: 2017-04-28 10:48:10
+SHOW MASTER STATUS:
+    Log: mysql-bin.000003
+    Pos: 930143241
+    GTID:
+
+Finished dump at: 2017-04-28 10:48:11
+```
+
+The binlog position information (`Log` and `Pos`) needs to be stored in the `syncer.meta` file for `syncer` to synchronise:
+
+```bash
+# cat syncer.meta
+binlog-name = "mysql-bin.000003"
+binlog-pos = 930143241
+```
+
+> **Note:** The `syncer.meta` file only needs to be configured once when it is first used. The position is updated automatically as the binlog is synchronised.
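Since both fields come straight from the `metadata` file, producing `syncer.meta` can be scripted; a minimal sketch, assuming the `Log:` / `Pos:` layout shown above (the helper name is illustrative):

```shell
# Build syncer.meta contents from the metadata file written by mydumper.
# awk splits each line on whitespace, so "Log:" and "Pos:" are field 1
# and the values are field 2 regardless of indentation.
make_syncer_meta() {
    awk '$1 == "Log:" { printf "binlog-name = \"%s\"\n", $2 }
         $1 == "Pos:" { printf "binlog-pos = %s\n", $2 }' "$1"
}

# Usage (the path is an example):
# make_syncer_meta ./var/test/metadata > syncer.meta
```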
+
+### Start `syncer`
+
+The `config.toml` file for `syncer`:
+
+```toml
+log-level = "info"
+
+server-id = 101
+
+# The file path for meta:
+meta = "./syncer.meta"
+worker-count = 16
+batch = 10
+
+# The testing address for pprof. It can also be used by Prometheus to pull the syncer metrics.
+status-addr = ":10081"
+
+skip-sqls = ["ALTER USER", "CREATE USER"]
+
+# Support whitelist filter. You can specify the databases and tables to be synchronised. For example:
+# Synchronise all the tables of db1 and db2:
+replicate-do-db = ["db1","db2"]
+
+# Synchronise db1.table1.
+[[replicate-do-table]]
+db-name ="db1"
+tbl-name = "table1"
+
+# Synchronise db3.table2.
+[[replicate-do-table]]
+db-name ="db3"
+tbl-name = "table2"
+
+# Regular expressions are supported. Start with '~' to use a regular expression.
+# For example, to synchronise all the databases that start with `test`, use the
+# following instead of the `replicate-do-db` list above (the key can appear only
+# once in a valid TOML file):
+# replicate-do-db = ["~^test.*"]
+
+# The sharding synchronisation rules support wildcard characters.
+# 1. The asterisk character (*, also called "star") matches zero or more characters,
+#    for example, "doc*" matches "doc" and "document" but not "dodo";
+#    the asterisk must be at the end of the wildcard word,
+#    and there can be only one asterisk in one wildcard word.
+# 2. The question mark ? matches exactly one character.
+#[[route-rules]]
+#pattern-schema = "route_*"
+#pattern-table = "abc_*"
+#target-schema = "route"
+#target-table = "abc"
+
+#[[route-rules]]
+#pattern-schema = "route_*"
+#pattern-table = "xyz_*"
+#target-schema = "route"
+#target-table = "xyz"
+
+[from]
+host = "127.0.0.1"
+user = "root"
+password = ""
+port = 3306
+
+[to]
+host = "127.0.0.1"
+user = "root"
+password = ""
+port = 4000
+```
+
+Start `syncer`:
+
+```bash
+./bin/syncer -config config.toml
+2016/10/27 15:22:01 binlogsyncer.go:226: [info] begin to sync binlog from position (mysql-bin.000003, 1280)
+2016/10/27 15:22:01 binlogsyncer.go:130: [info] register slave for master server 127.0.0.1:3306
+2016/10/27 15:22:01 binlogsyncer.go:552: [info] rotate to (mysql-bin.000003, 1280)
+2016/10/27 15:22:01 syncer.go:549: [info] rotate binlog to (mysql-bin.000003, 1280)
+```
+
+### Insert data into MySQL
+
+```sql
+INSERT INTO t1 VALUES (4, 4), (5, 5);
+```
+
+### Log in to TiDB and view the data
+
+```sql
+mysql -h127.0.0.1 -P4000 -uroot -p
+mysql> select * from t1;
++----+------+
+| id | age  |
++----+------+
+| 1  | 1    |
+| 2  | 2    |
+| 3  | 3    |
+| 4  | 4    |
+| 5  | 5    |
++----+------+
+```
+
+`syncer` outputs the current synchronised data statistics every 30 seconds:
+
+```bash
+2017/06/08 01:18:51 syncer.go:934: [info] [syncer]total events = 15, total tps = 130, recent tps = 4,
+master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74,
+syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-17
+2017/06/08 01:19:21 syncer.go:934: [info] [syncer]total events = 15, total tps = 191, recent tps = 2,
+master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74,
+syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-35
+```
+
+You can see that by using `syncer`, the updates in MySQL are automatically synchronised to TiDB.
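The wildcard semantics described in the route-rules comments above (`*` matches zero or more characters and may appear only at the end; `?` matches exactly one character) happen to coincide with POSIX shell `case` patterns, so a rule can be tried out before it goes into `config.toml`. A sketch (the function is illustrative, not part of `syncer`):

```shell
# Return success if the name matches a route-rules style pattern.
# In a case pattern, '*' matches zero or more characters and '?' matches
# exactly one, mirroring the wildcard rules documented for syncer.
matches_rule() {
    case "$1" in
        $2) return 0 ;;
        *)  return 1 ;;
    esac
}

matches_rule doc      'doc*' && echo 'doc matches doc*'       # matches
matches_rule document 'doc*' && echo 'document matches doc*'  # matches
matches_rule dodo     'doc*' || echo 'dodo does not match'    # no match
```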
\ No newline at end of file
diff --git a/v1.0/op-guide/monitor-overview.md b/v1.0/op-guide/monitor-overview.md
new file mode 100755
index 0000000000000..92028cf1b3d92
--- /dev/null
+++ b/v1.0/op-guide/monitor-overview.md
@@ -0,0 +1,28 @@
+---
+title: Overview of the TiDB Monitoring Framework
+category: operations
+---
+
+# Overview of the Monitoring Framework
+
+The TiDB monitoring framework adopts two open source projects: Prometheus and Grafana. TiDB uses Prometheus to store the monitoring and performance metrics and Grafana to visualize these metrics.
+
+## About Prometheus in TiDB
+
+As a time series database, Prometheus has a multi-dimensional data model and a flexible query language. As one of the most popular open source projects, Prometheus has been adopted by many companies and organizations, and it has a very active community. PingCAP is one of the active developers and adopters of Prometheus for monitoring and alerting in TiDB, TiKV and PD.
+
+Prometheus consists of multiple components. Currently, TiDB uses the following of them:
+
+- The Prometheus Server to scrape and store time series data.
+- The client libraries to customize necessary metrics in the application.
+- A Pushgateway to receive the data pushed by clients for the main Prometheus server.
+- An AlertManager for the alerting mechanism.
+
+The diagram is as follows:
+
+
+
+## About Grafana in TiDB
+
+Grafana is an open source project for analysing and visualizing metrics.
TiDB uses Grafana to display the performance metrics as follows:
+
+![screenshot](../media/grafana-screenshot.png)
diff --git a/v1.0/op-guide/monitor.md b/v1.0/op-guide/monitor.md
new file mode 100755
index 0000000000000..317850954d920
--- /dev/null
+++ b/v1.0/op-guide/monitor.md
@@ -0,0 +1,244 @@
+---
+title: Monitor a TiDB Cluster
+category: operations
+---
+
+# Monitor a TiDB Cluster
+
+Currently there are two types of interfaces to monitor the state of the TiDB cluster:
+
+- Using the HTTP interface to get the internal information of a component, which is called the component state interface.
+- Using Prometheus to record the detailed information of the various operations in the components, which is called the Metrics interface.
+
+## The component state interface
+
+You can use this type of interface to monitor the basic information of a component. This interface can act as the interface to monitor Keepalive. In addition, the interface of the Placement Driver (PD) can get the details of the entire TiKV cluster.
+
+### TiDB server
+
+The HTTP interface of TiDB is: `http://host:port/status`
+
+The default port number is 10080, which can be set using the `--status` flag.
+
+The interface can be used to get the current TiDB server state and to determine whether the server is alive. The result is returned in the following JSON format:
+
+```bash
+curl http://127.0.0.1:10080/status
+{
+    connections: 0,
+    version: "5.5.31-TiDB-1.0",
+    git_hash: "b99521846ff6f71f06e2d49a3f98fa1c1d93d91b"
+}
+```
+
+In this example,
+
+- connections: the current number of clients connected to the TiDB server
+- version: the TiDB version number
+- git_hash: the Git hash of the current TiDB code
+
+### PD server
+
+The API address of PD is: `http://${host}:${port}/pd/api/v1/${api_name}`
+
+The default port number is 2379.
+
+See [PD API doc](https://cdn.rawgit.com/pingcap/docs/master/op-guide/pd-api-v1.html) for detailed information about various API names.
+
+The interface can be used to get the state of all the TiKV servers and the information about load balancing. It is the most important and most frequently used interface to get the state information of all the TiKV nodes. See the following example for the information about a single-node TiKV cluster:
+
+```bash
+curl http://127.0.0.1:2379/pd/api/v1/stores
+{
+  "count": 1,            // the number of TiKV nodes
+  "stores": [            // the list of TiKV nodes
+    // the detailed information about the single TiKV node
+    {
+      "store": {
+        "id": 1,
+        "address": "127.0.0.1:22161",
+        "state": 0
+      },
+      "status": {
+        "store_id": 1,                // the ID of the node
+        "capacity": 1968874332160,    // the total capacity
+        "available": 1264847716352,   // the available capacity
+        "region_count": 1,            // the count of Regions in this node
+        "sending_snap_count": 0,
+        "receiving_snap_count": 0,
+        "start_ts": "2016-10-24T19:54:00.110728339+08:00",          // the starting timestamp
+        "last_heartbeat_ts": "2016-10-25T10:52:54.973669928+08:00", // the timestamp of the last heartbeat
+        "total_region_count": 1,      // the count of the total Regions
+        "leader_region_count": 1,     // the count of the Leader Regions
+        "uptime": "14h58m54.862941589s"
+      },
+      "scores": [
+        100,
+        35
+      ]
+    }
+  ]
+}
+```
+
+## The metrics interface
+
+You can use this type of interface to monitor the state and performance of the entire cluster. The metrics data is displayed in Prometheus and Grafana. See [Use Prometheus and Grafana](#use-prometheus-and-grafana) for how to set up the monitoring system.
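The `stores` response shown above can also be consumed from a script, for example for a quick capacity check. A sketch (hedged: real responses are plain JSON without the `//` annotations, the helper name is illustrative, and a JSON-aware tool such as `jq` is preferable to text extraction when available):

```shell
# Crude extraction of the capacity usage of the first store from a saved
# `stores` response; adequate only for the fixed field names shown above.
available_ratio() {
    local avail cap
    avail=$(sed -n 's/.*"available": *\([0-9]*\).*/\1/p' "$1" | head -n 1)
    cap=$(sed -n 's/.*"capacity": *\([0-9]*\).*/\1/p' "$1" | head -n 1)
    awk -v a="$avail" -v c="$cap" 'BEGIN { printf "%.1f%% available\n", 100 * a / c }'
}

# Usage (addresses are examples):
# curl -s http://127.0.0.1:2379/pd/api/v1/stores > stores.json
# available_ratio stores.json
```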
+ +You can get the following metrics for each component: + +### TiDB server + +- query processing time to monitor the latency and throughput + +- the DDL process monitoring + +- TiKV client related monitoring + +- PD client related monitoring + +### PD server + +- the total number of times that the command executes + +- the total number of times that a certain command fails + +- the duration that a command succeeds + +- the duration that a command fails + +- the duration that a command finishes and returns result + +### TiKV server + +- Garbage Collection (GC) monitoring + +- the total number of times that the TiKV command executes + +- the duration that Scheduler executes commands + +- the total number of times of the Raft propose command + +- the duration that Raft executes commands + +- the total number of times that Raft commands fail + +- the total number of times that Raft processes the ready state + +## Use Prometheus and Grafana + +### The deployment architecture + +See the following diagram for the deployment architecture: + +![image alt text](../media/monitor-architecture.png) + +> **Note:** You must add the Prometheus Pushgateway addresses to the startup parameters of the TiDB, PD and TiKV components. + +### Set up the monitoring system + +See the following links for your reference: + +- Prometheus Push Gateway: [https://github.com/prometheus/pushgateway](https://github.com/prometheus/pushgateway) + +- Prometheus Server: [https://github.com/prometheus/prometheus#install](https://github.com/prometheus/prometheus#install) + +- Grafana: [http://docs.grafana.org](http://docs.grafana.org/) + +## Configuration + +### Configure TiDB, PD and TiKV + ++ TiDB: Set the two parameters: `--metrics-addr` and `--metrics-interval`. + + - Set the Push Gateway address as the `--metrics-addr` parameter. + - Set the push frequency as the `--metrics-interval` parameter. The unit is s, and the default value is 15. 
+
++ PD: update the toml configuration file with the Push Gateway address and the push frequency:
+
+    ```toml
+    [metric]
+    # prometheus client push interval, set "0s" to disable prometheus.
+    interval = "15s"
+    # prometheus pushgateway address. Leave it empty to disable prometheus push.
+    address = "host:port"
+    ```
+
++ TiKV: update the toml configuration file with the Push Gateway address and the push frequency. Set the job field to "tikv".
+
+    ```toml
+    [metric]
+    # the Prometheus client push interval. Setting the value to 0s stops the Prometheus client from pushing.
+    interval = "15s"
+    # the Prometheus pushgateway address. Leaving it empty stops the Prometheus client from pushing.
+    address = "host:port"
+    # the Prometheus client push job name. Note: a node id is automatically appended, e.g., "tikv_1".
+    job = "tikv"
+    ```
+
+### Configure Pushgateway
+
+Generally, it does not need extra configuration. You can use the default port: 9091.
+
+### Configure Prometheus
+
+Add the Push Gateway address to the yaml configuration file:
+
+```yaml
+scrape_configs:
+  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
+  - job_name: 'TiDB'
+
+    # Override the global default and scrape targets from this job every 5 seconds.
+    scrape_interval: 5s
+
+    honor_labels: true
+
+    static_configs:
+      - targets: ['host:port'] # use the Push Gateway address
+        labels:
+          group: 'production'
+```
+
+### Configure Grafana
+
+#### Create a Prometheus data source
+
+1. Log in to the Grafana Web interface.
+
+    - The default address is: [http://localhost:3000](http://localhost:3000)
+
+    - The default account name: admin
+
+    - The password for the default account: admin
+
+2. Click the Grafana logo to open the sidebar menu.
+
+3. Click "Data Sources" in the sidebar.
+
+4. Click "Add data source".
+
+5. Specify the data source information:
+
+    - Specify the name for the data source.
+
+    - For Type, select Prometheus.
+
+    - For Url, specify the Prometheus address.
+
+    - Specify other fields as needed.
+
+6. Click "Add" to save the new data source.
+
+#### Create a Grafana dashboard
+
+1. Click the Grafana logo to open the sidebar menu.
+
+2. On the sidebar menu, click "Dashboards" -> "Import" to open the "Import Dashboard" window.
+
+3. Click "Upload .json File" to upload a JSON file (download [TiDB Grafana Config](https://grafana.com/tidb)).
+
+4. Click "Save & Open".
+
+5. A Prometheus dashboard is created.
+
diff --git a/v1.0/op-guide/offline-ansible-deployment.md b/v1.0/op-guide/offline-ansible-deployment.md
new file mode 100755
index 0000000000000..9244481a536dd
--- /dev/null
+++ b/v1.0/op-guide/offline-ansible-deployment.md
@@ -0,0 +1,99 @@
+---
+title: Offline Deployment Using Ansible
+category: operations
+---
+
+# Offline Deployment Using Ansible
+
+## Prepare
+
+Before you start, make sure that you have:
+
+1. A download machine
+
+    - The machine must have access to the Internet in order to download TiDB-Ansible, TiDB and related packages.
+    - For the Linux operating system, it is recommended to install CentOS 7.3 or later.
+
+2. Several target machines and one Control Machine
+
+    - For system requirements and configuration, see [Prepare the environment](ansible-deployment.md#prepare).
+    - These machines do not need access to the Internet.
+
+## Install Ansible and dependencies in the Control Machine
+
+1. Install Ansible offline on the CentOS 7 system:
+
+    > Download the [Ansible](http://download.pingcap.org/ansible-2.4-rpms.el7.tar.gz) offline installation package to the Control Machine.
+
+    ```bash
+    # tar -xzvf ansible-2.4-rpms.el7.tar.gz
+
+    # cd ansible-2.4-rpms.el7
+
+    # rpm -ivh PyYAML*rpm libyaml*rpm python-babel*rpm python-backports*rpm python-backports-ssl_match_hostname*rpm python-cffi*rpm python-enum34*rpm python-httplib2*rpm python-idna*rpm python-ipaddress*rpm python-jinja2*rpm python-markupsafe*rpm python-paramiko*rpm python-passlib*rpm python-ply*rpm python-pycparser*rpm python-setuptools*rpm python-six*rpm python2-cryptography*rpm python2-jmespath*rpm python2-pyasn1*rpm sshpass*rpm
+
+    # rpm -ivh ansible-2.4.2.0-2.el7.noarch.rpm
+    ```
+
+2. After Ansible is installed, you can view the version using `ansible --version`.
+
+    ```bash
+    # ansible --version
+    ansible 2.4.2.0
+    ```
+
+## Download TiDB-Ansible and TiDB packages on the download machine
+
+1. Install Ansible on the download machine.
+
+    Use the following method to install Ansible online on a download machine running CentOS 7. Installing from the EPEL source automatically installs the related Ansible dependencies (such as `Jinja2==2.7.2 MarkupSafe==0.11`). After Ansible is installed, you can view the version using `ansible --version`.
+
+    ```bash
+    # yum install epel-release
+    # yum install ansible curl
+    # ansible --version
+
+    ansible 2.4.2.0
+    ```
+
+    > **Note:** Make sure that the version of Ansible is 2.4 or later, otherwise compatibility problems might occur.
+
+2. Download TiDB-Ansible.
+
+    Use the following command to download the corresponding version of TiDB-Ansible from the GitHub [TiDB-Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`.
+
+    Download the 1.0 (GA) version:
+
+    ```
+    git clone -b release-1.0 https://github.com/pingcap/tidb-ansible.git
+    ```
+
+    OR
+
+    Download the master version:
+
+    ```
+    git clone https://github.com/pingcap/tidb-ansible.git
+    ```
+
+    > **Note:** For the production environment, download TiDB-Ansible 1.0 to deploy TiDB.
+
+3. 
Run the `local_prepare.yml` playbook to download the TiDB binary to the download machine.
+
+    ```
+    cd tidb-ansible
+    ansible-playbook local_prepare.yml
+    ```
+
+4. After running the above command, copy the `tidb-ansible` folder to the `/home/tidb` directory of the Control Machine. The files must be owned by the `tidb` user.
+
+## Orchestrate the TiDB cluster
+
+See [Orchestrate the TiDB cluster](ansible-deployment.md#orchestrate-the-tidb-cluster).
+
+## Deploy the TiDB cluster
+
+1. See [Deploy the TiDB cluster](ansible-deployment.md#deploy-the-tidb-cluster).
+2. You do not need to run the `ansible-playbook local_prepare.yml` playbook again.
+
+## Test the cluster
+
+See [Test the cluster](ansible-deployment.md#test-the-cluster). \ No newline at end of file diff --git a/v1.0/op-guide/pd-api-v1.html b/v1.0/op-guide/pd-api-v1.html new file mode 100755 index 0000000000000..4e67e907278d9 --- /dev/null +++ b/v1.0/op-guide/pd-api-v1.html @@ -0,0 +1,6704 @@ + + + + + Placement Driver API + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+
+
+

Placement Driver API

+
+
+
+ +
+ + +
+

Default

+ + + + + + + +
+ +
+
+

pdApiV1BalancersGet

+
+
+ +
+
+ +

+

Get all PD balancers.

+

+
+ +
/pd/api/v1/balancers
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/balancers"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Balancers result = apiInstance.pdApiV1BalancersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1BalancersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Balancers result = apiInstance.pdApiV1BalancersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1BalancersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1BalancersGetWithCompletionHandler: 
+              ^(Balancers output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1BalancersGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1BalancersGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Balancers result = apiInstance.pdApiV1BalancersGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1BalancersGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1BalancersGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1BalancersGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A balancers object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+
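Beyond the generated SDK samples above, the endpoint can be exercised from any plain HTTP client. A minimal Python sketch — the address `127.0.0.1:2379` (PD's default client port) and the helper names are illustrative assumptions, not part of any SDK:

```python
import json
import urllib.request

def pd_v1_url(pd_addr, resource):
    # Build the URL for a PD v1 API resource, e.g. "balancers" as documented above.
    return "http://%s/pd/api/v1/%s" % (pd_addr, resource)

def pd_v1_get(pd_addr, resource):
    # GET the resource and decode the JSON response body.
    with urllib.request.urlopen(pd_v1_url(pd_addr, resource)) as resp:
        return json.load(resp)

# Example (requires a running PD server):
# balancers = pd_v1_get("127.0.0.1:2379", "balancers")
```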

pdApiV1ConfigGet

+
+
+ +
+
+ +

+

Get the PD config.

+

+
+ +
/pd/api/v1/config
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/config"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Config result = apiInstance.pdApiV1ConfigGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1ConfigGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Config result = apiInstance.pdApiV1ConfigGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1ConfigGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1ConfigGetWithCompletionHandler: 
+              ^(Config output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1ConfigGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1ConfigGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Config result = apiInstance.pdApiV1ConfigGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1ConfigGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1ConfigGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1ConfigGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A config object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1EventsGet

+
+
+ +
+
+ +

+

Get all PD events.

+

+
+ +
/pd/api/v1/events
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/events"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<LogEvent> result = apiInstance.pdApiV1EventsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1EventsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<LogEvent> result = apiInstance.pdApiV1EventsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1EventsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1EventsGetWithCompletionHandler: 
+              ^(NSArray* output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1EventsGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1EventsGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                List<LogEvent> result = apiInstance.pdApiV1EventsGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1EventsGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1EventsGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1EventsGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - An array of event objects.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1LeaderGet

+
+
+ +
+
+ +

+

Get the PD leader.

+

+
+ +
/pd/api/v1/leader
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/leader"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Leader result = apiInstance.pdApiV1LeaderGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1LeaderGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Leader result = apiInstance.pdApiV1LeaderGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1LeaderGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1LeaderGetWithCompletionHandler: 
+              ^(Leader output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1LeaderGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1LeaderGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Leader result = apiInstance.pdApiV1LeaderGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1LeaderGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1LeaderGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1LeaderGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A leader object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1MembersGet

+
+
+ +
+
+ +

+

Get all PD members.

+

+
+ +
/pd/api/v1/members
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/members"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<Member> result = apiInstance.pdApiV1MembersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<Member> result = apiInstance.pdApiV1MembersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1MembersGetWithCompletionHandler: 
+              ^(NSArray* output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1MembersGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1MembersGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                List<Member> result = apiInstance.pdApiV1MembersGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1MembersGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1MembersGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1MembersGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - An array of member objects.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+
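For comparison with the SDK samples above, the members endpoint can also be queried with a plain HTTP request. A minimal Python sketch — the PD address and helper names are illustrative assumptions; per the 200 response above, the body is an array of member objects:

```python
import json
import urllib.request

def members_url(pd_addr):
    # URL for GET /pd/api/v1/members, documented above.
    return "http://%s/pd/api/v1/members" % pd_addr

def list_member_names(pd_addr="127.0.0.1:2379"):
    # Fetch the member array and return just the member names.
    with urllib.request.urlopen(members_url(pd_addr)) as resp:
        members = json.load(resp)
    return [m.get("name") for m in members]
```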

pdApiV1MembersNameDelete

+
+
+ +
+
+ +

+

Delete a PD member.

+

+
+ +
/pd/api/v1/members/{name}
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X DELETE -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/members/{name}"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        String name = "name_example"; // String | The name of the member to delete.
+        try {
+            apiInstance.pdApiV1MembersNameDelete(name);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersNameDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        String name = "name_example"; // String | The name of the member to delete.
+        try {
+            apiInstance.pdApiV1MembersNameDelete(name);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersNameDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+NSString *name = @"name_example"; // The name of the member to delete.
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1MembersNameDeleteWith:name
+              completionHandler: ^(NSError* error) {
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var name = "name_example"; // {String} The name of the member to delete.
+
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully.');
+  }
+};
+api.pdApiV1MembersNameDelete(name, callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1MembersNameDeleteExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+            var name = "name_example";  // String | The name of the member to delete.
+
+            try
+            {
+                apiInstance.pdApiV1MembersNameDelete(name);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1MembersNameDelete: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+$name = "name_example"; // String | The name of the member to delete.
+
+try {
+    $api_instance->pdApiV1MembersNameDelete($name);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1MembersNameDelete: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + +
Path parameters
+ + + + + + + + + + +
NameDescription
name* + + + +
+
+ + + + + +

Responses

+ +

Status: 200 - Member deleted

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 404 - Member not found

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1RegionIdGet

+
+
+ +
+
+ +

+

Get a TiKV region.

+

+
+ +
/pd/api/v1/region/{id}
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/region/{id}"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the region to get.
+        try {
+            Region result = apiInstance.pdApiV1RegionIdGet(id);
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionIdGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the region to get.
+        try {
+            Region result = apiInstance.pdApiV1RegionIdGet(id);
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionIdGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+Integer *id = 56; // The id of the region to get.
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1RegionIdGetWith:id
+              completionHandler: ^(Region output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var id = 56; // {Integer} The id of the region to get.
+
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1RegionIdGet(id, callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1RegionIdGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+            var id = 56;  // Integer | The id of the region to get.
+
+            try
+            {
+                Region result = apiInstance.pdApiV1RegionIdGet(id);
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1RegionIdGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+$id = 56; // Integer | The id of the region to get.
+
+try {
+    $result = $api_instance->pdApiV1RegionIdGet($id);
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1RegionIdGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + +
Path parameters
+ + + + + + + + + + +
NameDescription
id* + + + +
+
+ + + + + +

Responses

+ +

Status: 200 - A region object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1RegionsGet

+
+
+ +
+
+ +

+

Get all TiKV regions.

+

+
+ +
/pd/api/v1/regions
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/regions"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Regions result = apiInstance.pdApiV1RegionsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Regions result = apiInstance.pdApiV1RegionsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1RegionsGetWithCompletionHandler: 
+              ^(Regions output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1RegionsGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1RegionsGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Regions result = apiInstance.pdApiV1RegionsGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1RegionsGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+$api_instance = new Swagger\Client\Api\DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1RegionsGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1RegionsGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A regions object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1StoreIdDelete

+
+
+ +
+
+ +

+

Delete a TiKV store.

+

+
+ +
/pd/api/v1/store/{id}
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X DELETE -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/store/{id}"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the store to delete.
+        try {
+            apiInstance.pdApiV1StoreIdDelete(id);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the store to delete.
+        try {
+            apiInstance.pdApiV1StoreIdDelete(id);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

Integer *id = 56; // The id of the store to delete.

DefaultApi *apiInstance = [[DefaultApi alloc] init];

[apiInstance pdApiV1StoreIdDeleteWith:id
              completionHandler: ^(NSError* error) {
                            if (error) {
                                NSLog(@"Error: %@", error);
                            }
                        }];

JavaScript

// The generated package name was stripped from this page; 'pd_api' is a placeholder.
var PdApi = require('pd_api');

var api = new PdApi.DefaultApi();

var id = 56; // {Integer} The id of the store to delete.

var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully.');
  }
};
api.pdApiV1StoreIdDelete(id, callback);

C#

using System;
using System.Diagnostics;
using IO.Swagger.Api;      // assumed generated client namespaces
using IO.Swagger.Client;
using IO.Swagger.Model;

namespace Example
{
    public class pdApiV1StoreIdDeleteExample
    {
        public void main()
        {
            var apiInstance = new DefaultApi();
            var id = 56;  // Integer | The id of the store to delete.

            try
            {
                apiInstance.pdApiV1StoreIdDelete(id);
            }
            catch (Exception e)
            {
                Debug.Print("Exception when calling DefaultApi.pdApiV1StoreIdDelete: " + e.Message);
            }
        }
    }
}

PHP

<?php
require_once(__DIR__ . '/vendor/autoload.php'); // assumed Composer autoload path

$api_instance = new Swagger\Client\Api\DefaultApi(); // assumed generated namespace
$id = 56; // Integer | The id of the store to delete.

try {
    $api_instance->pdApiV1StoreIdDelete($id);
} catch (Exception $e) {
    echo 'Exception when calling DefaultApi->pdApiV1StoreIdDelete: ', $e->getMessage(), PHP_EOL;
}
?>

Parameters

Path parameters

- id (required): Integer. The id of the store to delete.

Responses

Status: 200 - Store deleted

Status: 500 - unexpected error
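Outside the generated SDKs, the endpoint can be driven directly from Python's standard library. A minimal sketch, assuming PD listens on its default client port 2379 on localhost and reusing the sample store id 56; the request is only constructed here, since actually sending it removes the store:

```python
import urllib.request

PD_BASE = "http://127.0.0.1:2379"  # assumed PD client URL

def delete_store_request(store_id):
    # Build (but do not send) the DELETE /pd/api/v1/store/{id} request.
    return urllib.request.Request(
        "%s/pd/api/v1/store/%d" % (PD_BASE, store_id), method="DELETE")

req = delete_store_request(56)
# urllib.request.urlopen(req) would submit it against a reachable PD server.
```

Because DELETE is destructive, keeping request construction separate from sending makes the call easy to log or dry-run first.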


pdApiV1StoreIdGet


Get a TiKV store.

GET /pd/api/v1/store/{id}

Usage and SDK Samples

Curl

curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/store/{id}"

Java

import io.swagger.client.*;              // assumed generated client packages
import io.swagger.client.auth.*;
import io.swagger.client.model.*;
import io.swagger.client.api.DefaultApi;

import java.util.*;

public class DefaultApiExample {

    public static void main(String[] args) {
        DefaultApi apiInstance = new DefaultApi();
        Integer id = 56; // Integer | The id of the store to get.
        try {
            Store result = apiInstance.pdApiV1StoreIdGet(id);
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdGet");
            e.printStackTrace();
        }
    }
}

Android

import io.swagger.client.ApiException;   // assumed generated client package
import io.swagger.client.api.DefaultApi;

public class DefaultApiExample {

    public static void main(String[] args) {
        DefaultApi apiInstance = new DefaultApi();
        Integer id = 56; // Integer | The id of the store to get.
        try {
            Store result = apiInstance.pdApiV1StoreIdGet(id);
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdGet");
            e.printStackTrace();
        }
    }
}

Objective-C

Integer *id = 56; // The id of the store to get.

DefaultApi *apiInstance = [[DefaultApi alloc] init];

[apiInstance pdApiV1StoreIdGetWith:id
              completionHandler: ^(Store output, NSError* error) {
                            if (output) {
                                NSLog(@"%@", output);
                            }
                            if (error) {
                                NSLog(@"Error: %@", error);
                            }
                        }];

JavaScript

// The generated package name was stripped from this page; 'pd_api' is a placeholder.
var PdApi = require('pd_api');

var api = new PdApi.DefaultApi();

var id = 56; // {Integer} The id of the store to get.

var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};
api.pdApiV1StoreIdGet(id, callback);

C#

using System;
using System.Diagnostics;
using IO.Swagger.Api;      // assumed generated client namespaces
using IO.Swagger.Client;
using IO.Swagger.Model;

namespace Example
{
    public class pdApiV1StoreIdGetExample
    {
        public void main()
        {
            var apiInstance = new DefaultApi();
            var id = 56;  // Integer | The id of the store to get.

            try
            {
                Store result = apiInstance.pdApiV1StoreIdGet(id);
                Debug.WriteLine(result);
            }
            catch (Exception e)
            {
                Debug.Print("Exception when calling DefaultApi.pdApiV1StoreIdGet: " + e.Message);
            }
        }
    }
}

PHP

<?php
require_once(__DIR__ . '/vendor/autoload.php'); // assumed Composer autoload path

$api_instance = new Swagger\Client\Api\DefaultApi(); // assumed generated namespace
$id = 56; // Integer | The id of the store to get.

try {
    $result = $api_instance->pdApiV1StoreIdGet($id);
    print_r($result);
} catch (Exception $e) {
    echo 'Exception when calling DefaultApi->pdApiV1StoreIdGet: ', $e->getMessage(), PHP_EOL;
}
?>

Parameters

Path parameters

- id (required): Integer. The id of the store to get.

Responses

Status: 200 - A store object.

Status: 500 - unexpected error
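The same endpoint is easy to call without a generated client. A sketch using only Python's standard library; the PD address is an assumption (2379 is PD's default client port), and a reachable PD server is needed for the live call:

```python
import json
import urllib.request

def store_url(store_id, base="http://127.0.0.1:2379"):
    # URL for GET /pd/api/v1/store/{id}; base is an assumed PD address.
    return "%s/pd/api/v1/store/%d" % (base, store_id)

def get_store(store_id, base="http://127.0.0.1:2379"):
    # Fetch and decode one store object from PD.
    with urllib.request.urlopen(store_url(store_id, base)) as resp:
        return json.load(resp)
```

Splitting URL construction from the fetch keeps the address easy to unit-test and to point at another PD endpoint.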


pdApiV1StoresGet


Get all TiKV stores.

GET /pd/api/v1/stores

Usage and SDK Samples

Curl

curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/stores"

Java

import io.swagger.client.*;              // assumed generated client packages
import io.swagger.client.auth.*;
import io.swagger.client.model.*;
import io.swagger.client.api.DefaultApi;

import java.util.*;

public class DefaultApiExample {

    public static void main(String[] args) {
        DefaultApi apiInstance = new DefaultApi();
        try {
            Stores result = apiInstance.pdApiV1StoresGet();
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling DefaultApi#pdApiV1StoresGet");
            e.printStackTrace();
        }
    }
}

Android

import io.swagger.client.ApiException;   // assumed generated client package
import io.swagger.client.api.DefaultApi;

public class DefaultApiExample {

    public static void main(String[] args) {
        DefaultApi apiInstance = new DefaultApi();
        try {
            Stores result = apiInstance.pdApiV1StoresGet();
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling DefaultApi#pdApiV1StoresGet");
            e.printStackTrace();
        }
    }
}

Objective-C

DefaultApi *apiInstance = [[DefaultApi alloc] init];

[apiInstance pdApiV1StoresGetWithCompletionHandler:
              ^(Stores output, NSError* error) {
                            if (output) {
                                NSLog(@"%@", output);
                            }
                            if (error) {
                                NSLog(@"Error: %@", error);
                            }
                        }];

JavaScript

// The generated package name was stripped from this page; 'pd_api' is a placeholder.
var PdApi = require('pd_api');

var api = new PdApi.DefaultApi();

var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};
api.pdApiV1StoresGet(callback);

C#

using System;
using System.Diagnostics;
using IO.Swagger.Api;      // assumed generated client namespaces
using IO.Swagger.Client;
using IO.Swagger.Model;

namespace Example
{
    public class pdApiV1StoresGetExample
    {
        public void main()
        {
            var apiInstance = new DefaultApi();

            try
            {
                Stores result = apiInstance.pdApiV1StoresGet();
                Debug.WriteLine(result);
            }
            catch (Exception e)
            {
                Debug.Print("Exception when calling DefaultApi.pdApiV1StoresGet: " + e.Message);
            }
        }
    }
}

PHP

<?php
require_once(__DIR__ . '/vendor/autoload.php'); // assumed Composer autoload path

$api_instance = new Swagger\Client\Api\DefaultApi(); // assumed generated namespace

try {
    $result = $api_instance->pdApiV1StoresGet();
    print_r($result);
} catch (Exception $e) {
    echo 'Exception when calling DefaultApi->pdApiV1StoresGet: ', $e->getMessage(), PHP_EOL;
}
?>

Parameters

None.

Responses

Status: 200 - A stores object.

Status: 500 - unexpected error
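A stores response can also be post-processed without any SDK. A hedged sketch in stdlib Python: the PD address and the response shape (`{"count": N, "stores": [{"store": {...}}]}`) are assumptions, so verify them against your cluster's actual output:

```python
import json
import urllib.request

STORES_URL = "http://127.0.0.1:2379/pd/api/v1/stores"  # assumed PD address

def store_addresses(stores_doc):
    # Extract every TiKV address, assuming the {"stores": [{"store": {...}}]} shape.
    return [item["store"]["address"] for item in stores_doc.get("stores", [])]

def fetch_store_addresses(url=STORES_URL):
    # Live call; requires a reachable PD server.
    with urllib.request.urlopen(url) as resp:
        return store_addresses(json.load(resp))

sample = {"count": 1, "stores": [{"store": {"id": 1, "address": "127.0.0.1:20160"}}]}
print(store_addresses(sample))  # ['127.0.0.1:20160']
```

Keeping the parsing step as a pure function makes it testable against a canned response like `sample` above.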


pdApiV1VersionGet


Get the PD version.

GET /pd/api/v1/version

Usage and SDK Samples

Curl

curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/version"

Java

import io.swagger.client.*;              // assumed generated client packages
import io.swagger.client.auth.*;
import io.swagger.client.model.*;
import io.swagger.client.api.DefaultApi;

import java.util.*;

public class DefaultApiExample {

    public static void main(String[] args) {
        DefaultApi apiInstance = new DefaultApi();
        try {
            Version result = apiInstance.pdApiV1VersionGet();
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling DefaultApi#pdApiV1VersionGet");
            e.printStackTrace();
        }
    }
}

Android

import io.swagger.client.ApiException;   // assumed generated client package
import io.swagger.client.api.DefaultApi;

public class DefaultApiExample {

    public static void main(String[] args) {
        DefaultApi apiInstance = new DefaultApi();
        try {
            Version result = apiInstance.pdApiV1VersionGet();
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling DefaultApi#pdApiV1VersionGet");
            e.printStackTrace();
        }
    }
}

Objective-C

DefaultApi *apiInstance = [[DefaultApi alloc] init];

[apiInstance pdApiV1VersionGetWithCompletionHandler:
              ^(Version output, NSError* error) {
                            if (output) {
                                NSLog(@"%@", output);
                            }
                            if (error) {
                                NSLog(@"Error: %@", error);
                            }
                        }];

JavaScript

// The generated package name was stripped from this page; 'pd_api' is a placeholder.
var PdApi = require('pd_api');

var api = new PdApi.DefaultApi();

var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};
api.pdApiV1VersionGet(callback);

C#

using System;
using System.Diagnostics;
using IO.Swagger.Api;      // assumed generated client namespaces
using IO.Swagger.Client;
using IO.Swagger.Model;

namespace Example
{
    public class pdApiV1VersionGetExample
    {
        public void main()
        {
            var apiInstance = new DefaultApi();

            try
            {
                Version result = apiInstance.pdApiV1VersionGet();
                Debug.WriteLine(result);
            }
            catch (Exception e)
            {
                Debug.Print("Exception when calling DefaultApi.pdApiV1VersionGet: " + e.Message);
            }
        }
    }
}

PHP

<?php
require_once(__DIR__ . '/vendor/autoload.php'); // assumed Composer autoload path

$api_instance = new Swagger\Client\Api\DefaultApi(); // assumed generated namespace

try {
    $result = $api_instance->pdApiV1VersionGet();
    print_r($result);
} catch (Exception $e) {
    echo 'Exception when calling DefaultApi->pdApiV1VersionGet: ', $e->getMessage(), PHP_EOL;
}
?>

Parameters

None.

Responses


Status: 200 - A version object.
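When scripting against PD, the version endpoint is a convenient health probe. A stdlib-Python sketch; the PD address and the `"v1.0.0"`-style version string are assumptions:

```python
import json
import urllib.request

def pd_version(base="http://127.0.0.1:2379"):
    # Fetch and decode the version object; requires a reachable PD server.
    with urllib.request.urlopen(base + "/pd/api/v1/version") as resp:
        return json.load(resp)

def version_tuple(version_string):
    # "v1.0.0" -> (1, 0, 0), handy for comparisons in upgrade scripts.
    return tuple(int(part) for part in version_string.lstrip("v").split("."))

print(version_tuple("v1.0.0"))  # (1, 0, 0)
```

Comparing tuples rather than raw strings avoids the classic "v1.10" < "v1.9" string-ordering mistake.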

Generated 2016-09-14T04:08:53.357Z
+ + + + + + + + + + + + + + + + + + + + diff --git a/v1.0/op-guide/recommendation.md b/v1.0/op-guide/recommendation.md new file mode 100755 index 0000000000000..652870904662d --- /dev/null +++ b/v1.0/op-guide/recommendation.md @@ -0,0 +1,78 @@ +--- +title: Software and Hardware Requirements +category: operations +--- + +# Software and Hardware Requirements + +## About + +As an open source distributed NewSQL database with high performance, TiDB can be deployed in the Intel architecture server and major virtualization environments and runs well. TiDB supports most of the major hardware networks and Linux operating systems. + +## Linux OS version requirements + +| Linux OS Platform | Version | +| :-----------------------:| :----------: | +| Red Hat Enterprise Linux | 7.3 and above| +| CentOS | 7.3 and above| +| Oracle Enterprise Linux | 7.3 and above| +| Ubuntu LTS | 16.04 and above| + +> **Note**: +> +> - For Oracle Enterprise Linux, TiDB supports the Red Hat Compatible Kernel (RHCK) and does not support the Unbreakable Enterprise Kernel provided by Oracle Enterprise Linux. +> - The support for the Linux operating systems above include the deployment and operation in physical servers as well as in major virtualized environments like VMware, KVM and XEM. + +## Server requirements + +You can deploy and run TiDB on the 64-bit generic hardware server platform in the Intel x86-64 architecture. 
The requirements and recommendations about server hardware configuration for development, testing and production environments are as follows: + +### Development and testing environments + +| Component | CPU | Memory | Local Storage | Network | Instance Number (Minimum Requirement) | +| :------: | :-----: | :-----: | :----------: | :------: | :----------------: | +| TiDB | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with PD) | +| PD | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB) | +| TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3 | +| | | | | Total Server Number | 4 | + +> **Note**: +> +> - In the test environment, the TiDB and PD can be deployed on the same server. +> - For performance-related testing, do not use low-performance storage and network hardware configuration, in order to guarantee the correctness of the test result. + +### Production environment + +| Component | CPU | Memory | Hard Disk Type | Network | Instance Number (Minimum Requirement) | +| :-----: | :------: | :------: | :------: | :------: | :-----: | +| TiDB | 16 core+ | 48 GB+ | SAS | 10 Gigabit network card (2 preferred) | 2 | +| PD | 8 core+ | 16 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 | +| TiKV | 16 core+ | 48 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 | +| Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1 | +| | | | | Total Server Number | 9 | + +> **Note**: +> +> - In the production environment, you can deploy and run TiDB and PD on the same server. If you have a higher requirement for performance and reliability, try to deploy them separately. +> - It is strongly recommended to use higher configuration in the production environment. +> - It is recommended to keep the size of TiKV hard disk within 800G in case it takes too long to restore data when the hard disk is damaged. 
+ +## Network requirements + +As an open source distributed NewSQL database, TiDB requires the following network port configuration to run. Based on the TiDB deployment in actual environments, the administrator can enable relevant ports in the network side and host side. + +| Component | Default Port | Description | +| :--:| :--: | :-- | +| TiDB | 4000 | the communication port for the application and DBA tools| +| TiDB | 10080 | the communication port to report TiDB status| +| TiKV | 20160 | the TiKV communication port | +| PD | 2379 | the communication port between TiDB and PD | +| PD | 2380 | the inter-node communication port within the PD cluster | +| Prometheus | 9090| the communication port for the Prometheus service| +| Pushgateway | 9091| the aggregation and report port for TiDB, TiKV, and PD monitor | +| Node_exporter | 9100| the communication port to report the system information of every TiDB cluster node | +| Grafana | 3000 | the port for the external Web monitoring service and client (Browser) access| + +## Web browser requirements + +Based on the Prometheus and Grafana platform, TiDB provides a visual data monitoring solution to monitor the TiDB cluster status. To visit the Grafana monitor interface, it is recommended to use a higher version of Microsoft IE, Google Chrome or Mozilla Firefox. diff --git a/v1.0/op-guide/root-ansible-deployment.md b/v1.0/op-guide/root-ansible-deployment.md new file mode 100755 index 0000000000000..26fbd2c2c323d --- /dev/null +++ b/v1.0/op-guide/root-ansible-deployment.md @@ -0,0 +1,59 @@ +--- +title: Ansible Deployment Using the Root User Account +category: operations +--- + +# Ansible Deployment Using the Root User Account + +> **Note:** The remote Ansible user (the `ansible_user` in the `incentory.ini` file) can use the root user account to deploy TiDB, but it is not recommended. + +The following example uses the `tidb` user account as the user running the service. 
+ +To deploy TiDB using a root user account, take the following steps: + +1. Edit `inventory.ini` as follows. + + Remove the code comments for `ansible_user = root`, `ansible_become = true` and `ansible_become_user`. Add comments for `ansible_user = tidb`. + + ``` + ## Connection + # ssh via root: + ansible_user = root + ansible_become = true + ansible_become_user = tidb + + # ssh via normal user + # ansible_user = tidb + ``` + +2. Connect to the network and download TiDB binary to the Control Machine. + + ``` + ansible-playbook local_prepare.yml + ``` + +3. Initialize the system environment and edit the kernel parameters. + + ``` + ansible-playbook bootstrap.yml + ``` + + > **Note**: If the service user does not exist, the initialization operation will automatically create the user. + + If the remote connection using the root user requires a password, use the `-k` (lower case) parameter. This applies to other playbooks as well: + + ``` + ansible-playbook bootstrap.yml -k + ``` + +4. Deploy the TiDB cluster. + + ``` + ansible-playbook deploy.yml -k + ``` + +5. Start the TiDB cluster. + + ``` + ansible-playbook start.yml -k + ``` \ No newline at end of file diff --git a/v1.0/op-guide/security.md b/v1.0/op-guide/security.md new file mode 100755 index 0000000000000..6650e86e1bf73 --- /dev/null +++ b/v1.0/op-guide/security.md @@ -0,0 +1,127 @@ +--- +title: Enable TLS Authentication +category: deployment +--- + +# Enable TLS Authentication + +## Overview + +This document describes how to enable TLS authentication in the TiDB cluster. The TLS authentication includes the following two conditions: + +- The mutual authentication between TiDB components, including the authentication among TiDB, TiKV and PD, between TiKV Control and TiKV, between PD Control and PD, between TiKV peers, and between PD peers. Once enabled, the mutual authentication applies to all components, and it does not support applying to only part of the components. 
+- The one-way and mutual authentication between the TiDB server and the MySQL Client. + +> **Note:** The authentication between the MySQL Client and the TiDB server uses one set of certificates, while the authentication among TiDB components uses another set of certificates. + +## Enable mutual TLS authentication among TiDB components + +### Prepare certificates + +It is recommended to prepare a separate server certificate for TiDB, TiKV and PD, and make sure that they can authenticate each other. The clients of TiDB, TiKV and PD share one client certificate. + +You can use multiple tools to generate self-signed certificates, such as `openssl`, `easy-rsa ` and `cfssl`. + +See an example of [generating self-signed certificates](generate-self-signed-certificates.md) using `cfssl`. + +### Configure certificates + +To enable mutual authentication among TiDB components, configure the certificates of TiDB, TiKV and PD as follows. + +#### TiDB + +Configure in the configuration file or command line arguments: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs for connection with cluster components. +cluster-ssl-ca = "/path/to/ca.pem" +# Path of file that contains X509 certificate in PEM format for connection with cluster components. +cluster-ssl-cert = "/path/to/tidb-server.pem" +# Path of file that contains X509 key in PEM format for connection with cluster components. +cluster-ssl-key = "/path/to/tidb-server-key.pem" +``` + +#### TiKV + +Configure in the configuration file or command line arguments, and set the corresponding URL to https: + +```toml +[security] +# set the path for certificates. Empty string means disabling secure connections. +ca-path = "/path/to/ca.pem" +cert-path = "/path/to/client.pem" +key-path = "/path/to/client-key.pem" +``` + +#### PD + +Configure in the configuration file or command line arguments, and set the corresponding URL to https: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs. 
If set, following four settings shouldn't be empty +cacert-path = "/path/to/ca.pem" +# Path of file that contains X509 certificate in PEM format. +cert-path = "/path/to/server.pem" +# Path of file that contains X509 key in PEM format. +key-path = "/path/to/server-key.pem" +``` + +Now mutual authentication among TiDB components is enabled. + +When you connect the server using the client, it is required to specify the client certificate. For example: + +```bash +./pd-ctl -u https://127.0.0.1:2379 --cacert /path/to/ca.pem --cert /path/to/pd-client.pem --key /path/to/pd-client-key.pem + +./tikv-ctl --host="127.0.0.1:20160" --ca-path="/path/to/ca.pem" --cert-path="/path/to/client.pem" --key-path="/path/to/clinet-key.pem" +``` + +## Enable TLS authentication between the MySQL client and TiDB server + +### Prepare certificates + +```bash +mysql_ssl_rsa_setup --datadir=certs +``` + +### Configure one-way authentication + +Configure in the configuration file or command line arguments of TiDB: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs. +ssl-ca = "" +# Path of file that contains X509 certificate in PEM format. +ssl-cert = "/path/to/certs/server.pem" +# Path of file that contains X509 key in PEM format. +ssl-key = "/path/to/certs/server-key.pem" +``` + +Configure in the MySQL client: + +```bash +mysql -u root --host 127.0.0.1 --port 4000 --ssl-mode=REQUIRED +``` + +### Configure mutual authentication + +Configure in the configuration file or command line arguments of TiDB: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs for connection with mysql client. +ssl-ca = "/path/to/certs/ca.pem" +# Path of file that contains X509 certificate in PEM format for connection with mysql client. +ssl-cert = "/path/to/certs/server.pem" +# Path of file that contains X509 key in PEM format for connection with mysql client. 
+ssl-key = "/path/to/certs/server-key.pem" +``` + +Specify the client certificate in the client: + +```bash +mysql -u root --host 127.0.0.1 --port 4000 --ssl-cert=/path/to/certs/client-cert.pem --ssl-key=/path/to/certs/client-key.pem --ssl-ca=/path/to/certs/ca.pem --ssl-mode=VERIFY_IDENTITY +``` diff --git a/v1.0/op-guide/tune-tikv.md b/v1.0/op-guide/tune-tikv.md new file mode 100755 index 0000000000000..21c53047d4f56 --- /dev/null +++ b/v1.0/op-guide/tune-tikv.md @@ -0,0 +1,257 @@ +--- +title: Tune TiKV Performance +category: tuning +--- + +# Tune TiKV Performance + +This document describes how to tune the TiKV parameters for optimal performance. + +TiKV uses RocksDB for persistent storage at the bottom level of the TiKV architecture. Therefore, many of the performance parameters are related to RocksDB. +TiKV uses two RocksDB instances: the default RocksDB instance stores KV data, the Raft RocksDB instance (RaftDB) stores Raft logs. + +TiKV implements `Column Families` (CF) from RocksDB. + +The default RocksDB instance stores KV data in the `default`, `write` and `lock` CFs. ++ The `default` CF stores the actual data. The corresponding parameters are in `[rocksdb.defaultcf]`. ++ The `write` CF stores the version information in Multi-Version Concurrency Control (MVCC) and index-related data. The corresponding parameters are in `[rocksdb.writecf]`. ++ The `lock` CF stores the lock information. The system uses the default parameters. + +The Raft RocksDB (RaftDB) instance stores Raft logs. ++ The `default` CF stores the Raft log. The corresponding parameters are in `[raftdb.defaultcf]`. + +Each CF has a separate `block cache` to cache data blocks to accelerate the data reading speed in RocksDB. You can configure the size of the `block cache` by setting the `block-cache-size` parameter. The bigger the `block-cache-size`, the more hot data can be cached, and the easier to read data, in the meantime, the more system memory will be occupied. 
+ +Each CF also has a separate `write buffer`. You can configure the size by setting the `write-buffer-size` parameter. + +## Parameter specification + +``` +# Log level: trace, debug, info, warn, error, off. +log-level = "info" + +[server] +# Set listening address +# addr = "127.0.0.1:20160" + +# It is recommended to use the default value. +# notify-capacity = 40960 +# messages-per-tick = 4096 + +# Size of thread pool for gRPC +# grpc-concurrency = 4 +# The number of gRPC connections between each TiKV instance +# grpc-raft-conn-num = 10 + +# Most read requests from TiDB are sent to the coprocessor of TiKV. This parameter is used to set the number of threads +# of the coprocessor. If many read requests exist, add the number of threads and keep the number within that of the +# system CPU cores. For example, for a 32-core machine deployed with TiKV, you can even set this parameter to 30 in +# repeatable read scenarios. If this parameter is not set, TiKV automatically sets it to CPU cores * 0.8. +# end-point-concurrency = 8 + +# Tag the TiKV instances to schedule replicas. +# labels = {zone = "cn-east-1", host = "118", disk = "ssd"} + +[storage] +# The data directory +# data-dir = "/tmp/tikv/store" + +# In most cases, you can use the default value. When importing data, it is recommended to set the parameter to 1024000. +# scheduler-concurrency = 102400 +# This parameter controls the number of write threads. When write operations occur frequently, set this parameter value +# higher. Run `top -H -p tikv-pid` and if the threads named `sched-worker-pool` are busy, set the value of parameter +# `scheduler-worker-pool-size` higher and increase the number of write threads. 
+# scheduler-worker-pool-size = 4 + +[pd] +# PD address +# endpoints = ["127.0.0.1:2379","127.0.0.2:2379","127.0.0.3:2379"] + +[metric] +# The interval of pushing metrics to Prometheus pushgateway +interval = "15s" +# Prometheus pushgateway adress +address = "" +job = "tikv" + +[raftstore] +# The default value is true,which means writing the data on the disk compulsorily. If it is not in a business scenario +# of the financial security level, it is recommended to set the value to false to achieve better performance. +sync-log = true + +# Raft RocksDB directory. The default value is Raft subdirectory of [storage.data-dir]. +# If there are multiple disks on the machine, store the data of Raft RocksDB on different disks to improve TiKV performance. +# raftdb-dir = "/tmp/tikv/store/raft" + +region-max-size = "384MB" +# The threshold value of Region split +region-split-size = "256MB" +# When the data size in a Region is larger than the threshold value, TiKV checks whether this Region needs split. +# To reduce the costs of scanning data in the checking process,set the value to 32MB during checking and set it to +# the default value in normal operation. +region-split-check-diff = "32MB" + +[rocksdb] +# The maximum number of threads of RocksDB background tasks. The background tasks include compaction and flush. +# For detailed information why RocksDB needs to implement compaction, see RocksDB-related materials. When write +# traffic (like the importing data size) is big,it is recommended to enable more threads. But set the number of the enabled +# threads smaller than that of CPU cores. For example, when importing data, for a machine with a 32-core CPU, +# set the value to 28. +# max-background-jobs = 8 + +# The maximum number of file handles RocksDB can open +# max-open-files = 40960 + +# The file size limit of RocksDB MANIFEST. 
For more details, see https://github.com/facebook/rocksdb/wiki/MANIFEST +max-manifest-file-size = "20MB" + +# The directory of RocksDB write-ahead logs. If there are two disks on the machine, store the RocksDB data and WAL logs +# on different disks to improve TiKV performance. +# wal-dir = "/tmp/tikv/store" + +# Use the following two parameters to deal with RocksDB archiving WAL. +# For more details, see https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database%3F +# wal-ttl-seconds = 0 +# wal-size-limit = 0 + +# In most cases, set the maximum total size of RocksDB WAL logs to the default value. +# max-total-wal-size = "4GB" + +# Use this parameter to enable or disable the statistics of RocksDB. +# enable-statistics = true + +# Use this parameter to enable the readahead feature during RocksDB compaction. If you are using mechanical disks, it is recommended to set the value to 2MB at least. +# compaction-readahead-size = "2MB" + +[rocksdb.defaultcf] +# The data block size. RocksDB compresses data based on the unit of block. +# Similar to page in other databases, block is the smallest unit cached in block-cache. +block-size = "64KB" + +# The compaction mode of each layer of RocksDB data. The optional values include no, snappy, zlib, +# bzip2, lz4, lz4hc, and zstd. +# "no:no:lz4:lz4:lz4:zstd:zstd" indicates there is no compaction of level0 and level1; lz4 compaction algorithm is used +# from level2 to level4; zstd compaction algorithm is used from level5 to level6. +# "no" means no compaction. "lz4" is a compaction algorithm with moderate speed and compaction ratio. The +# compaction ratio of zlib is high. It is friendly to the storage space, but its compaction speed is slow. This +# compaction occupies many CPU resources. Different machines deploy compaction modes according to CPU and I/O resources. 
+# For example, if you use the compaction mode of "no:no:lz4:lz4:lz4:zstd:zstd" and find much I/O pressure of the +# system (run the iostat command to find %util lasts 100%, or run the top command to find many iowaits) when writing +# (importing) a lot of data while the CPU resources are adequate, you can compress level0 and level1 and exchange CPU +# resources for I/O resources. If you use the compaction mode of "no:no:lz4:lz4:lz4:zstd:zstd" and you find the I/O +# pressure of the system is not big when writing a lot of data, but CPU resources are inadequate. Then run the top +# command and choose the -H option. If you find a lot of bg threads (namely the compaction thread of RocksDB) are +# running, you can exchange I/O resources for CPU resources and change the compaction mode to "no:no:no:lz4:lz4:zstd:zstd". +# In a word, it aims at making full use of the existing resources of the system and improving TiKV performance +# in terms of the current resources. +compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] + +# The RocksDB memtable size +write-buffer-size = "128MB" + +# The maximum number of the memtables. The data written into RocksDB is first recorded in the WAL log, and then inserted +# into memtables. When the memtable reaches the size limit of `write-buffer-size`, it turns into read only and generates +# a new memtable receiving new write operations. The flush threads of RocksDB will flush the read only memtable to the +# disks to become an sst file of level0. `max-background-flushes` controls the maximum number of flush threads. When the +# flush threads are busy, resulting in the number of the memtables waiting to be flushed to the disks reaching the limit +# of `max-write-buffer-number`, RocksDB stalls the new operation. +# "Stall" is a flow control mechanism of RocksDB. When importing data, you can set the `max-write-buffer-number` value +# higher, like 10. 
+max-write-buffer-number = 5 + +# When the number of sst files of level0 reaches the limit of `level0-slowdown-writes-trigger`, RocksDB +# tries to slow down the write operations, because too many sst files of level0 can increase the read pressure of +# RocksDB. `level0-slowdown-writes-trigger` and `level0-stop-writes-trigger` are for the flow control of RocksDB. +# When the number of sst files of level0 reaches 4 (the default value), the sst files of level0 and the sst files +# of level1 which overlap those of level0 are compacted to relieve the read pressure. +level0-slowdown-writes-trigger = 20 + +# When the number of sst files of level0 reaches the limit of `level0-stop-writes-trigger`, RocksDB stalls new +# write operations. +level0-stop-writes-trigger = 36 + +# When the level1 data size reaches the limit value of `max-bytes-for-level-base`, the sst files of level1 +# and their overlapping sst files of level2 are compacted. The rule of thumb for setting +# `max-bytes-for-level-base` is to keep its value roughly equal to the data volume of level0, +# which reduces unnecessary compaction. For example, if the compression mode is +# "no:no:lz4:lz4:lz4:lz4:lz4", the `max-bytes-for-level-base` value should be write-buffer-size * 4, because there is no +# compression for level0 and level1 and the trigger condition of compaction for level0 is that the number of the +# sst files reaches 4 (the default value). When both level0 and level1 adopt compression, it is necessary to analyze +# RocksDB logs to know the size of an sst file compressed from a memtable. For example, if the file size is 32MB, +# the proposed value of `max-bytes-for-level-base` is 32MB * 4 = 128MB. +max-bytes-for-level-base = "512MB" + +# The sst file size. The level0 sst file size is influenced by `write-buffer-size` and the compression algorithm +# of level0.
`target-file-size-base` is used to control the size of a single sst file of level1-level6. +target-file-size-base = "32MB" + +# When the parameter is not configured, TiKV sets the value to 40% of the system memory size. To deploy multiple +# TiKV nodes on one physical machine, configure this parameter explicitly. Otherwise, the OOM problem might occur +# in TiKV. +# block-cache-size = "1GB" + +[rocksdb.writecf] +# Set it the same as `rocksdb.defaultcf.compression-per-level`. +compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] + +# Set it the same as `rocksdb.defaultcf.write-buffer-size`. +write-buffer-size = "128MB" +max-write-buffer-number = 5 +min-write-buffer-number-to-merge = 1 + +# Set it the same as `rocksdb.defaultcf.max-bytes-for-level-base`. +max-bytes-for-level-base = "512MB" +target-file-size-base = "32MB" + +# When this parameter is not configured, TiKV sets this parameter value to 15% of the system memory size. To +# deploy multiple TiKV nodes on a single physical machine, configure this parameter explicitly. The related data +# of the version information (MVCC) and the index-related data are recorded in write CF. In scenarios that +# include many single table indexes, set this parameter value higher. +# block-cache-size = "256MB" + +[raftdb] +# The maximum number of the file handles RaftDB can open +# max-open-files = 40960 + +# Configure this parameter to enable or disable the RaftDB statistics information. +# enable-statistics = true + +# Enable the readahead feature in RaftDB compaction. If you are using mechanical disks, it is recommended to set +# this value to 2MB at least. +# compaction-readahead-size = "2MB" + +[raftdb.defaultcf] +# Set it the same as `rocksdb.defaultcf.compression-per-level`. +compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] + +# Set it the same as `rocksdb.defaultcf.write-buffer-size`. 
+write-buffer-size = "128MB" +max-write-buffer-number = 5 +min-write-buffer-number-to-merge = 1 + +# Set it the same as `rocksdb.defaultcf.max-bytes-for-level-base`. +max-bytes-for-level-base = "512MB" +target-file-size-base = "32MB" + +# Generally, you can set it from 256MB to 2GB. In most cases, you can use the default value. But if the system +# resources are adequate, you can set it higher. +block-cache-size = "256MB" +``` + + +## TiKV memory usage + +Besides the `block cache` and `write buffer`, which occupy system memory, the system memory is also occupied in the +following scenarios: + ++ Some of the memory is reserved as the system's page cache. + ++ When TiKV processes large queries such as `select * from ...`, it reads data, generates the corresponding data structure in the memory, and returns this structure to TiDB. During this process, TiKV occupies some of the memory. + +## Recommended configuration of TiKV + ++ In production environments, it is not recommended to deploy TiKV on a machine with fewer than 8 CPU cores or less than 32GB of memory. + ++ If you demand high write throughput, it is recommended to use a disk with good throughput capacity. + ++ If you demand very low read-write latency, it is recommended to use an SSD with high IOPS. \ No newline at end of file diff --git a/v1.0/overview.md b/v1.0/overview.md new file mode 100755 index 0000000000000..0d025e07e91af --- /dev/null +++ b/v1.0/overview.md @@ -0,0 +1,99 @@ +--- +title: About TiDB +category: introduction +--- + +# About TiDB + +## TiDB introduction + +TiDB (pronounced /'taɪdiːbi:/, tai-D-B; etymology: titanium) is a Hybrid Transactional/Analytical Processing (HTAP) database. Inspired by the design of Google F1 and Google Spanner, TiDB features infinite horizontal scalability, strong consistency, and high availability. The goal of TiDB is to serve as a one-stop solution for online transactions and analyses.
+ +- __Horizontal and linear scalability__ +- __Compatible with MySQL protocol__ +- __Automatic failover and high availability__ +- __Consistent distributed transactions__ +- __Online DDL__ +- __Multiple storage engine support__ +- __Highly concurrent, real-time writing and querying of large volumes of data (HTAP)__ + +TiDB is designed to support both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) scenarios. For complex OLAP scenarios, use [TiSpark](tispark/tispark-user-guide.md). + +Read the following three articles to understand the key techniques of TiDB: + +- [Data Storage](https://pingcap.github.io/blog/2017/07/11/tidbinternal1/) +- [Computing](https://pingcap.github.io/blog/2017/07/11/tidbinternal2/) +- [Scheduling](https://pingcap.github.io/blog/2017/07/20/tidbinternal3/) + +## Roadmap + +Read the [Roadmap](https://github.com/pingcap/docs/blob/master/ROADMAP.md). + +## Connect with us + +- **Twitter**: [@PingCAP](https://twitter.com/PingCAP) +- **Reddit**: https://www.reddit.com/r/TiDB/ +- **Stack Overflow**: https://stackoverflow.com/questions/tagged/tidb +- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user) + +## TiDB architecture + +To better understand TiDB’s features, you need to understand the TiDB architecture. + +![image alt text](media/tidb-architecture.png) + +The TiDB cluster has three components: the TiDB server, the PD server, and the TiKV server. + +### TiDB server + +The TiDB server is in charge of the following operations: + +1. Receiving SQL requests + +2. Processing the SQL-related logic + +3. Locating the TiKV address for storing and computing data through Placement Driver (PD) + +4. Exchanging data with TiKV + +5. Returning the results + +The TiDB server is stateless. It does not store data and it is for computing only.
TiDB scales horizontally and provides a unified interface to the outside through load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5. + +### Placement Driver server + +The Placement Driver (PD) server is the managing component of the entire cluster and is in charge of the following three operations: + +1. Storing the metadata of the cluster, such as the Region location of a specific key. + +2. Scheduling and load balancing Regions in the TiKV cluster, including but not limited to data migration and Raft group leader transfer. + +3. Allocating the transaction ID that is globally unique and monotonically increasing. + +As a cluster, PD needs to be deployed to an odd number of nodes. It is usually recommended to deploy at least 3 online nodes. + +### TiKV server + +The TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range, which is a left-closed and right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes is scheduled by PD. Region is also the basic unit for scheduling load balancing. + +## Features + +### Horizontal scalability + +Horizontal scalability is the most important feature of TiDB. The scalability includes two aspects: the computing capability and the storage capacity. The TiDB server processes the SQL requests. As the business grows, the overall processing capability and throughput can be increased by simply adding more TiDB server nodes. Data is stored in TiKV. As the size of the data grows, the storage capacity can be expanded by simply adding more TiKV server nodes.
+PD schedules data in Regions among the TiKV nodes and migrates part of the data to a newly added node. So in the early stage, you can deploy only a few service instances. For example, it is recommended to deploy at least 3 TiKV nodes, 3 PD nodes and 2 TiDB nodes. As the business grows, more TiDB and TiKV instances can be added on demand. + +### High availability + +High availability is another important feature of TiDB. All of the three components, TiDB, TiKV and PD, can tolerate the failure of some instances without impacting the availability of the entire cluster. For each component, see the following for more details about its availability, the consequence of a single instance failure, and how to recover. + +#### TiDB + +TiDB is stateless and it is recommended to deploy at least two instances. The front-end provides services to the outside through load balancing components. If one of the instances is down, the sessions on that instance are impacted. From the application’s point of view, it is a single request failure, but the service can be regained by reconnecting to the TiDB server. If a single instance is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### PD + +PD is a cluster and the data consistency is ensured using the Raft protocol. If an instance is down but the instance is not a Raft Leader, there is no impact on the service at all. If the instance is a Raft Leader, a new Leader is elected to recover the service. During the election, which takes approximately 3 seconds, PD cannot provide services. It is recommended to deploy three instances. If one of the instances is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### TiKV + +TiKV is a cluster and the data consistency is ensured using the Raft protocol. The number of replicas is configurable, and the default is 3 replicas. The load of TiKV servers is balanced through PD.
If one of the nodes is down, all the Regions on the node will be impacted. If the failed node is the Leader of a Region, the service will be interrupted and a new election will be initiated. If the failed node is a Follower of a Region, the service will not be impacted. If a TiKV node is down for a period of time (the default value is 10 minutes), PD moves its data to other TiKV nodes. diff --git a/v1.0/releases/101.md b/v1.0/releases/101.md new file mode 100755 index 0000000000000..e9a8fba415763 --- /dev/null +++ b/v1.0/releases/101.md @@ -0,0 +1,23 @@ +--- +title: TiDB 1.0.1 Release Notes +category: Releases +--- + +# TiDB 1.0.1 Release Notes + +On November 1, 2017, TiDB 1.0.1 is released with the following updates: + +## TiDB: + + - Support canceling DDL jobs. + - Optimize the `IN` expression. + - Correct the result type of the `Show` statement. + - Support logging slow queries into a separate log file. + - Fix bugs. + +## TiKV: + + - Support flow control with write bytes. + - Reduce Raft allocation. + - Increase the coprocessor stack size to 10MB. + - Remove useless logs from the coprocessor.
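The failover behavior described for PD and TiKV above follows from Raft's majority quorum: a group stays available as long as a majority of replicas survive. A minimal sketch of the arithmetic (illustrative only, not TiDB code; the function name is an assumption):

```python
# Illustrative sketch: availability of a Raft group under the majority rule
# described in the overview above. Not part of TiDB/TiKV.

def tolerable_failures(replicas: int) -> int:
    """Number of replica failures a Raft group can survive while a
    majority (more than half of the replicas) is still alive."""
    return (replicas - 1) // 2

# TiKV's default of 3 replicas tolerates 1 failed node; 5 replicas tolerate 2.
print(tolerable_failures(3))  # 1
print(tolerable_failures(5))  # 2
```

This is why the default of 3 replicas lets a single TiKV node fail without losing any Region, and why PD is recommended to run with three instances.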
diff --git a/v1.0/releases/102.md b/v1.0/releases/102.md new file mode 100755 index 0000000000000..2a0a4865d2c69 --- /dev/null +++ b/v1.0/releases/102.md @@ -0,0 +1,29 @@ +--- +title: TiDB 1.0.2 Release Notes +category: Releases +--- + +# TiDB 1.0.2 Release Notes + +On November 13, 2017, TiDB 1.0.2 is released with the following updates: + +## TiDB: + + - Optimize the cost estimation of index point queries + - Support the `Alter Table Add Column (ColumnDef ColumnPosition)` syntax + - Optimize the queries whose `where` conditions are contradictory + - Optimize the `Add Index` operation to rectify the progress and reduce repetitive operations + - Optimize the `Index Lookup Join` operator to accelerate the query speed for small data sizes + - Fix the issue with prefix index judgment + +## Placement Driver (PD): + + - Improve the stability of scheduling under exceptional situations + +## TiKV: + + - Support splitting tables to ensure one Region does not contain data from multiple tables + - Limit the length of a key to be no more than 4 KB + - More accurate read traffic statistics + - Implement deep protection on the coprocessor stack + - Fix the `LIKE` behavior and the `do_div_mod` bug diff --git a/v1.0/releases/103.md b/v1.0/releases/103.md new file mode 100755 index 0000000000000..c1924e388bd98 --- /dev/null +++ b/v1.0/releases/103.md @@ -0,0 +1,33 @@ +--- +title: TiDB 1.0.3 Release Notes +category: Releases +--- + +# TiDB 1.0.3 Release Notes + +On November 28, 2017, TiDB 1.0.3 is released with the following updates: + +## TiDB + +- [Optimize the performance in the transaction conflict scenario](https://github.com/pingcap/tidb/pull/5051) +- [Add the `TokenLimit` option in the config file](https://github.com/pingcap/tidb/pull/5107) +- [Output the default database in slow query logs](https://github.com/pingcap/tidb/pull/5107) +- [Remove the DDL statement from query duration metrics](https://github.com/pingcap/tidb/pull/5107) +- [Optimize the query cost
estimation](https://github.com/pingcap/tidb/pull/5140) +- [Fix the index prefix issue when creating tables](https://github.com/pingcap/tidb/pull/5149) +- [Support pushing down the expressions for the Float type to TiKV](https://github.com/pingcap/tidb/pull/5153) +- [Fix the issue that it is slow to add indexes to tables with a discrete integer primary key](https://github.com/pingcap/tidb/pull/5155) +- [Reduce the unnecessary statistics updates](https://github.com/pingcap/tidb/pull/5164) +- [Fix a potential issue during the transaction retry](https://github.com/pingcap/tidb/pull/5219) + +## PD + +- Support adding more types of schedulers using the API + +## TiKV + +- Fix the deadlock issue with the PD client +- Fix the issue that the wrong leader value is prompted for `NotLeader` +- Fix the issue that the chunk size is too large in the coprocessor + +To upgrade from 1.0.2 to 1.0.3, follow the rolling upgrade order of PD -> TiKV -> TiDB. diff --git a/v1.0/releases/104.md b/v1.0/releases/104.md new file mode 100755 index 0000000000000..ed509c9194ed5 --- /dev/null +++ b/v1.0/releases/104.md @@ -0,0 +1,24 @@ +--- +title: TiDB 1.0.4 Release Notes +category: Releases +--- + +# TiDB 1.0.4 Release Notes + +On December 11, 2017, TiDB 1.0.4 is released with the following updates: + +## TiDB + +- [Speed up the loading of the statistics when starting the `tidb-server`](https://github.com/pingcap/tidb/pull/5362) +- [Improve the performance of the `show variables` statement](https://github.com/pingcap/tidb/pull/5363) +- [Fix a potential issue when using the `Add Index` statement to handle combined indexes](https://github.com/pingcap/tidb/pull/5323) +- [Fix a potential issue when using the `Rename Table` statement to move a table to another database](https://github.com/pingcap/tidb/pull/5314) +- [Make the `Alter/Drop User` statement take effect faster](https://github.com/pingcap/tidb/pull/5226) + +## TiKV + +- [Fix a possible performance issue when a snapshot is applied
](https://github.com/pingcap/tikv/pull/2559) +- [Fix the performance issue of reverse scan after removing a lot of data](https://github.com/pingcap/tikv/pull/2559) +- [Fix the wrong encoded result for the Decimal type under special circumstances](https://github.com/pingcap/tikv/pull/2571) + +To upgrade from 1.0.3 to 1.0.4, follow the rolling upgrade order of PD -> TiKV -> TiDB. diff --git a/v1.0/releases/105.md b/v1.0/releases/105.md new file mode 100755 index 0000000000000..dc2a7917f7400 --- /dev/null +++ b/v1.0/releases/105.md @@ -0,0 +1,33 @@ +--- +title: TiDB 1.0.5 Release Notes +category: Releases +--- + +# TiDB 1.0.5 Release Notes + +On December 26, 2017, TiDB 1.0.5 is released with the following updates: + +## TiDB + +- [Add the max value for the current Auto_Increment ID in the `Show Create Table` statement.](https://github.com/pingcap/tidb/pull/5489) +- [Fix a potential goroutine leak.](https://github.com/pingcap/tidb/pull/5486) +- [Support outputting slow queries into a separate file.](https://github.com/pingcap/tidb/pull/5484) +- [Load the `TimeZone` variable from TiKV when creating a new session.](https://github.com/pingcap/tidb/pull/5479) +- [Support the schema state check so that the `Show Create Table` and `Analyze` statements process public tables/indexes only.](https://github.com/pingcap/tidb/pull/5474) +- [Make the `set transaction read only` statement affect the `tx_read_only` variable.](https://github.com/pingcap/tidb/pull/5491) +- [Clean up incremental statistic data when rolling back.](https://github.com/pingcap/tidb/pull/5391) +- [Fix the issue of missing index length in the `Show Create Table` statement.](https://github.com/pingcap/tidb/pull/5421) + +## PD + +- Fix the issue that the leaders stop balancing under some circumstances.
+ - [869](https://github.com/pingcap/pd/pull/869) + - [874](https://github.com/pingcap/pd/pull/874) +- [Fix a potential panic during bootstrapping.](https://github.com/pingcap/pd/pull/889) + +## TiKV + +- Fix the issue that it is slow to get the CPU ID using the [`get_cpuid`](https://github.com/pingcap/tikv/pull/2611) function. +- Support the [`dynamic-level-bytes`](https://github.com/pingcap/tikv/pull/2605) parameter to improve space reclamation. + +To upgrade from 1.0.4 to 1.0.5, follow the rolling upgrade order of PD -> TiKV -> TiDB. diff --git a/v1.0/releases/106.md b/v1.0/releases/106.md new file mode 100755 index 0000000000000..04f15323f341b --- /dev/null +++ b/v1.0/releases/106.md @@ -0,0 +1,27 @@ +--- +title: TiDB 1.0.6 Release Notes +category: Releases +--- + +# TiDB 1.0.6 Release Notes + +On January 8, 2018, TiDB 1.0.6 is released with the following updates: + +## TiDB: + +- [Support the `Alter Table Auto_Increment` syntax](https://github.com/pingcap/tidb/pull/5511) +- [Fix the bug in cost-based computation and the `Null Json` issue in statistics](https://github.com/pingcap/tidb/pull/5556) +- [Support the extension syntax to shard the implicit row ID to avoid write hot spots for a single table](https://github.com/pingcap/tidb/pull/5559) +- [Fix a potential DDL issue](https://github.com/pingcap/tidb/pull/5562) +- [Consider the time zone setting in the `curtime`, `sysdate` and `curdate` functions](https://github.com/pingcap/tidb/pull/5564) +- [Support the `SEPARATOR` syntax in the `GROUP_CONCAT` function](https://github.com/pingcap/tidb/pull/5569) +- [Fix the wrong return type issue of the `GROUP_CONCAT` function](https://github.com/pingcap/tidb/pull/5582) + +## PD: +- [Fix the store selection problem of the hot-region scheduler](https://github.com/pingcap/pd/pull/898) + +## TiKV: + +None. + +To upgrade from 1.0.5 to 1.0.6, follow the rolling upgrade order of PD -> TiKV -> TiDB.
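These release notes all repeat the same rolling upgrade order: PD first, then TiKV, then TiDB. As an illustrative sketch only (not a TiDB tool; `upgrade_plan` and the component names are assumptions for this example), the ordering constraint can be encoded in a few lines:

```python
# Illustrative sketch: encodes the documented rolling-upgrade order
# PD -> TiKV -> TiDB. Not part of any TiDB tooling.
UPGRADE_ORDER = ("pd", "tikv", "tidb")

def upgrade_plan(components):
    """Return the given components sorted into the documented upgrade order."""
    unknown = [c for c in components if c not in UPGRADE_ORDER]
    if unknown:
        raise ValueError("unknown components: %s" % unknown)
    return sorted(components, key=UPGRADE_ORDER.index)

print(upgrade_plan(["tidb", "pd", "tikv"]))  # ['pd', 'tikv', 'tidb']
```

Whatever deployment tool you use, the point is the same: upgrade the PD instances before the TiKV instances, and the TiKV instances before the TiDB instances.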
diff --git a/v1.0/releases/107.md b/v1.0/releases/107.md new file mode 100755 index 0000000000000..d0037bc804368 --- /dev/null +++ b/v1.0/releases/107.md @@ -0,0 +1,39 @@ +--- +title: TiDB 1.0.7 Release Notes +category: Releases +--- + +# TiDB 1.0.7 Release Notes + +On January 22, 2018, TiDB 1.0.7 is released with the following updates: + +## TiDB: + +- [Optimize the `FIELD_LIST` command](https://github.com/pingcap/tidb/pull/5679) +- [Fix data race of the information schema](https://github.com/pingcap/tidb/pull/5676) +- [Avoid adding read-only statements to history](https://github.com/pingcap/tidb/pull/5661) +- [Add the `session` variable to control the log query](https://github.com/pingcap/tidb/pull/5659) +- [Fix the resource leak issue in statistics](https://github.com/pingcap/tidb/pull/5657) +- [Fix the goroutine leak issue](https://github.com/pingcap/tidb/pull/5624) +- [Add schema info API for the http status server](https://github.com/pingcap/tidb/pull/5256) +- [Fix an issue about `IndexJoin`](https://github.com/pingcap/tidb/pull/5623) +- [Update the behavior when `RunWorker` is false in DDL](https://github.com/pingcap/tidb/pull/5604) +- [Improve the stability of test results in statistics](https://github.com/pingcap/tidb/pull/5609) +- [Support `PACK_KEYS` syntax for the `CREATE TABLE` statement](https://github.com/pingcap/tidb/pull/5602) +- [Add `row_id` column for the null pushdown schema to optimize performance](https://github.com/pingcap/tidb/pull/5447) + +## PD: + +- [Fix possible scheduling loss issue in abnormal conditions](https://github.com/pingcap/pd/pull/921) +- [Fix the compatibility issue with proto3](https://github.com/pingcap/pd/pull/919) +- [Add the log](https://github.com/pingcap/pd/pull/917) + +## TiKV: + +- [Support `Table Scan`](https://github.com/pingcap/tikv/pull/2657) +- [Support the remote mode in tikv-ctl](https://github.com/pingcap/tikv/pull/2377) +- [Fix the format compatibility issue of tikv-ctl 
proto](https://github.com/pingcap/tikv/pull/2668) +- [Fix the loss of scheduling commands from PD](https://github.com/pingcap/tikv/pull/2669) +- [Add a timeout in the Push metric](https://github.com/pingcap/tikv/pull/2686) + +To upgrade from 1.0.6 to 1.0.7, follow the rolling upgrade order of PD -> TiKV -> TiDB. \ No newline at end of file diff --git a/v1.0/releases/108.md b/v1.0/releases/108.md new file mode 100755 index 0000000000000..c972795f4fbaf --- /dev/null +++ b/v1.0/releases/108.md @@ -0,0 +1,32 @@ +--- +title: TiDB 1.0.8 Release Notes +category: Releases +--- + +# TiDB 1.0.8 Release Notes + +On February 11, 2018, TiDB 1.0.8 is released with the following updates: + +## TiDB: +- [Fix issues in the `Outer Join` result in some scenarios](https://github.com/pingcap/tidb/pull/5712) +- [Optimize the performance of the `InsertIntoIgnore` statement](https://github.com/pingcap/tidb/pull/5738) +- [Fix the issue in the `ShardRowID` option](https://github.com/pingcap/tidb/pull/5751) +- [Add a limit (configurable, 5000 by default) to the number of DML statements within a transaction](https://github.com/pingcap/tidb/pull/5754) +- [Fix an issue in the Table/Column aliases returned by the `Prepare` statement](https://github.com/pingcap/tidb/pull/5776) +- [Fix an issue in updating statistics delta](https://github.com/pingcap/tidb/pull/5787) +- [Fix a panic error in the `Drop Column` statement](https://github.com/pingcap/tidb/pull/5805) +- [Fix a DML issue when running the `Add Column After` statement](https://github.com/pingcap/tidb/pull/5818) +- [Improve the stability of the GC process by ignoring the regions with GC errors](https://github.com/pingcap/tidb/pull/5815) +- [Run GC concurrently to accelerate the GC process](https://github.com/pingcap/tidb/pull/5850) +- [Provide syntax support for the `CREATE INDEX` statement](https://github.com/pingcap/tidb/pull/5853) + +## PD: +- [Reduce the lock overhead of the region heartbeats](https://github.com/pingcap/pd/pull/932)
+- [Fix the issue that a hot region scheduler selects the wrong Leader](https://github.com/pingcap/pd/pull/939) + +## TiKV: +- [Use `DeleteFilesInRanges` to clear stale data and improve the TiKV starting speed](https://github.com/pingcap/tikv/pull/2740) +- [Use `Decimal` in the Coprocessor sum](https://github.com/pingcap/tikv/pull/2754) +- [Forcibly sync the metadata of received snapshots to ensure safety](https://github.com/pingcap/tikv/pull/2758) + +To upgrade from 1.0.7 to 1.0.8, follow the rolling upgrade order of PD -> TiKV -> TiDB. diff --git a/v1.0/releases/11alpha.md b/v1.0/releases/11alpha.md new file mode 100755 index 0000000000000..6ccfece50c194 --- /dev/null +++ b/v1.0/releases/11alpha.md @@ -0,0 +1,52 @@ +--- +title: TiDB 1.1 Alpha Release Notes +category: Releases +--- + +# TiDB 1.1 Alpha Release Notes + +On January 19, 2018, TiDB 1.1 Alpha is released. This release has great improvements in MySQL compatibility, SQL optimization, stability, and performance. + +## TiDB: + +- SQL parser + - Support more syntax +- SQL query optimizer + - Use a more compact structure to reduce statistics info memory usage + - Speed up loading statistics info when starting tidb-server + - Provide more accurate query cost evaluation + - Use Count-Min Sketch to evaluate the cost of queries using unique indexes more accurately + - Support more complex conditions to make full use of indexes +- SQL executor + - Refactor all executor operators using the Chunk architecture, improve the execution performance of analytical statements, and reduce memory usage + - Optimize the performance of the `INSERT IGNORE` statement + - Push down more types and functions to TiKV + - Support more `SQL_MODE` options + - Optimize the `Load Data` performance to increase the speed by 10 times + - Optimize the `Use Database` performance + - Support statistics on the memory usage of physical operators +- Server + - Support the PROXY protocol + +## PD: + +- Add more APIs +- Support TLS +- Add more cases for scheduling
Simulator +- Schedule to adapt to different Region sizes +- Fix some scheduling bugs + +## TiKV: + +- Support Raft learner +- Optimize Raft Snapshot and reduce the IO overhead +- Support TLS +- Optimize the RocksDB configuration to improve performance +- Optimize the `count(*)` and unique index query performance in Coprocessor +- Add more failpoints and stability test cases +- Solve the reconnection issue between PD and TiKV +- Enhance the features of the data recovery tool TiKV-CTL +- Support splitting Regions according to table +- Support the `Delete Range` feature +- Support limiting the IO caused by snapshots +- Improve the flow control mechanism \ No newline at end of file diff --git a/v1.0/releases/11beta.md b/v1.0/releases/11beta.md new file mode 100755 index 0000000000000..e2dc6695e56e5 --- /dev/null +++ b/v1.0/releases/11beta.md @@ -0,0 +1,49 @@ +--- +title: TiDB 1.1 Beta Release Notes +category: Releases +--- + +# TiDB 1.1 Beta Release Notes + +On February 24, 2018, TiDB 1.1 Beta is released. This release has great improvements in MySQL compatibility, SQL optimization, stability, and performance.
+ +## TiDB: + +- Add more monitoring metrics and refine the log +- Compatible with more MySQL syntax +- Support displaying the table creating time in `information_schema` +- Optimize queries containing the `MaxOneRow` operator +- Configure the size of intermediate result sets generated by Join, to further reduce the memory used by Join +- Add the `tidb_config` session variable to output the current TiDB configuration +- Fix the panic issue in the `Union` and `Index Join` operators +- Fix the wrong result issue of the `Sort Merge Join` operator in some scenarios +- Fix the issue that the `Show Index` statement shows indexes that are in the process of adding +- Fix the failure of the `Drop Stats` statement +- Optimize the query performance of the SQL engine to improve the test result of the Sysbench Select/OLTP by 10% +- Improve the computing speed of subqueries in the optimizer using the new execution engine; compared with TiDB 1.0, TiDB 1.1 Beta has great improvement in tests like TPC-H and TPC-DS + +## PD: + +- Add the Drop Region debug interface +- Support setting priority of the PD leader +- Support configuring stores with a specific label not to schedule Raft leaders +- Add the interfaces to enumerate the health status of each PD +- Add more metrics +- Keep the PD leader and the etcd leader together as much as possible in the same node +- Improve the priority and speed of restoring data when TiKV goes down +- Enhance the validity check of the `data-dir` configuration item +- Optimize the performance of Region heartbeat +- Fix the issue that hot spot scheduling violates label constraint +- Fix other stability issues + +## TiKV: + +- Traverse locks using offset + limit to avoid potential GC problems +- Support resolving locks in batches to improve GC speed +- Support GC concurrency to improve GC speed +- Update the Region size using the RocksDB compaction listener for more accurate PD scheduling +- Delete the outdated data in batches using `DeleteFilesInRanges`, 
to make TiKV start faster +- Configure the Raft snapshot max size to avoid the retained files taking up too much space +- Support more recovery operations in `tikv-ctl` +- Optimize the ordered flow aggregation operation +- Improve metrics and fix bugs \ No newline at end of file diff --git a/v1.0/releases/2rc1.md b/v1.0/releases/2rc1.md new file mode 100755 index 0000000000000..3c742e8353a20 --- /dev/null +++ b/v1.0/releases/2rc1.md @@ -0,0 +1,39 @@ +--- +title: TiDB 2.0 RC1 Release Notes +category: Releases +--- + +# TiDB 2.0 RC1 Release Notes + +On March 9, 2018, TiDB 2.0 RC1 is released. This release has great improvements in MySQL compatibility, SQL optimization and stability. + +## TiDB: + +- Support limiting the memory usage of a single SQL statement, to reduce the risk of OOM +- Support pushing the Stream Aggregate operator down to TiKV +- Support validating the configuration file +- Support obtaining the information of TiDB configuration through the HTTP API +- Compatible with more MySQL syntax in the Parser +- Improve the compatibility with Navicat +- Improve the optimizer to extract common expressions with multiple OR conditions, to choose a better query plan +- Improve the optimizer to convert subqueries to Join operators in more scenarios, to choose a better query plan +- Resolve locks in the Batch mode to increase the garbage collection speed +- Fix the length of the Boolean field to improve compatibility +- Optimize the `Add Index` operation and give it lower priority than other write and read operations, to reduce the impact on online business + +## PD: + +- Optimize the logic of the code used to check the Region status to improve performance +- Optimize the output of log information in abnormal conditions to facilitate debugging +- Fix the monitoring statistics for insufficient disk space on TiKV nodes +- Fix the wrong reporting issue of the health interface when TLS is enabled +- Fix the issue that concurrent addition of replicas might exceed the threshold value of
configuration, to improve stability + +## TiKV: + +- Fix the issue that the gRPC call is not cancelled when PD leaders switch +- Protect important configuration items which cannot be changed after the initial configuration +- Add gRPC APIs used to obtain metrics +- Check whether SSD is used when you start the cluster +- Optimize the read performance using ReadPool, and improve the performance by 30% in the `raw get` test +- Improve metrics and optimize the usage of metrics \ No newline at end of file diff --git a/v1.0/releases/ga.md b/v1.0/releases/ga.md new file mode 100755 index 0000000000000..e0859994f5328 --- /dev/null +++ b/v1.0/releases/ga.md @@ -0,0 +1,269 @@ +--- +title: TiDB 1.0 release notes +category: Releases +--- + +# TiDB 1.0 Release Notes + +On October 16, 2017, TiDB 1.0 is released! This release is focused on MySQL compatibility, SQL optimization, stability, and performance. + +## TiDB: + +- The SQL query optimizer: + - Adjust the cost model + - Analyze pushdown + - Function signature pushdown +- Optimize the internal data format to reduce the interim data size +- Enhance the MySQL compatibility +- Support the `NO_SQL_CACHE` syntax and limit the cache usage in the storage engine +- Refactor the Hash Aggregator operator to reduce the memory usage +- Support the Stream Aggregator operator + +## PD: + +- Support read-flow based balancing +- Support setting the Store weight and weight-based balancing + +## TiKV: + +- Coprocessor now supports more pushdown functions +- Support pushing down the sampling operation +- Support manually triggering data compaction to reclaim space quickly +- Improve the performance and stability +- Add a Debug API for debugging +- TiSpark Beta Release: +- Support configuration framework +- Support ThriftServer/JDBC and Spark SQL + +## Acknowledgement + +### Special thanks to the following enterprises and teams!
+ +- Archon +- Mobike +- Samsung Electronics +- SpeedyCloud +- Tencent Cloud +- UCloud + +### Thanks to the open source software and services from the following organizations and individuals: + +- Asta Xie +- CNCF +- CoreOS +- Databricks +- Docker +- Github +- Grafana +- gRPC +- Jepsen +- Kubernetes +- Namazu +- Prometheus +- RedHat +- RocksDB Team +- Rust Team + +### Thanks to the individual contributors: + +- 8cbx +- Akihiro Suda +- aliyx +- alston111111 +- andelf +- Andy Librian +- Arthur Yang +- astaxie +- Bai, Yang +- bailaohe +- Bin Liu +- Blame cosmos +- Breezewish +- Carlos Ferreira +- Ce Gao +- Changjian Zhang +- Cheng Lian +- Cholerae Hu +- Chu Chao +- coldwater +- Cole R Lawrence +- cuiqiu +- cuiyuan +- Cwen +- Dagang +- David Chen +- David Ding +- dawxy +- dcadevil +- Deshi Xiao +- Di Tang +- disksing +- dongxu +- dreamquster +- Drogon +- Du Chuan +- Dylan Wen +- eBoyy +- Eric Romano +- Ewan Chou +- Fiisio +- follitude +- Fred Wang +- fud +- fudali +- gaoyangxiaozhu +- Gogs +- goroutine +- Gregory Ian +- Guanqun Lu +- Guilherme Hübner Franco +- Haibin Xie +- Han Fei +- hawkingrei +- Hiroaki Nakamura +- hiwjd +- Hongyuan Wang +- Hu Ming +- Hu Ziming +- Huachao Huang +- HuaiyuXu +- Huxley Hu +- iamxy +- Ian +- insion +- iroi44 +- Ivan.Yang +- Jack Yu +- jacky liu +- Jan Mercl +- Jason W +- Jay +- Jay Lee +- Jianfei Wang +- Jiaxing Liang +- Jie Zhou +- jinhelin +- Jonathan Boulle +- Karl Ostendorf +- knarfeh +- Kuiba +- leixuechun +- li +- Li Shihai +- Liao Qiang +- Light +- lijian +- Lilian Lee +- Liqueur Librazy +- Liu Cong +- Liu Shaohui +- liubo0127 +- liyanan +- lkk2003rty +- Louis +- louishust +- luckcolors +- Lynn +- Mae Huang +- maiyang +- maxwell +- mengshangqi +- Michael Belenchenko +- mo2zie +- morefreeze +- MQ +- mxlxm +- Neil Shen +- netroby +- ngaut +- Nicole Nie +- nolouch +- onlymellb +- overvenus +- PaladinTyrion +- paulg +- Priya Seth +- qgxiaozhan +- qhsong +- Qiannan +- qiukeren +- qiuyesuifeng +- queenypingcap +- qupeng +- Rain Li +- 
ranxiaolong +- Ray +- Rick Yu +- shady +- ShawnLi +- Shen Li +- Sheng Tang +- Shirly +- Shuai Li +- ShuNing +- ShuYu Wang +- siddontang +- silenceper +- Simon J Mudd +- Simon Xia +- skimmilk6877 +- sllt +- soup +- Sphinx +- Steffen +- sumBug +- sunhao2017 +- Tao Meng +- Tao Zhou +- tennix +- tiancaiamao +- TianGuangyu +- Tristan Su +- ueizhou +- UncP +- Unknwon +- v01dstar +- Van +- WangXiangUSTC +- wangyanjun +- wangyisong1996 +- weekface +- wegel +- Wei Fu +- Wenbin Xiao +- Wenting Li +- Wenxuan Shi +- winkyao +- woodpenker +- wuxuelian +- Xiang Li +- xiaojian cai +- Xuanjia Yang +- Xuanwo +- XuHuaiyu +- Yang Zhexuan +- Yann Autissier +- Yanzhe Chen +- Yiding Cui +- Yim +- youyouhu +- Yu Jun +- Yuwen Shen +- Zejun Li +- Zhang Yuning +- zhangjinpeng1987 +- ZHAO Yijun +- Zhe-xuan Yang +- ZhengQian +- ZhengQianFang +- zhengwanbo +- ZhiFeng Hu +- Zhiyuan Zheng +- Zhou Tao +- Zhoubirdblue +- zhouningnan +- Ziyi Yan +- zs634134578 +- zxylvlp +- zyguan +- zz-jason diff --git a/v1.0/releases/prega.md b/v1.0/releases/prega.md new file mode 100755 index 0000000000000..d66c2a9a42712 --- /dev/null +++ b/v1.0/releases/prega.md @@ -0,0 +1,39 @@ +--- +title: Pre-GA release notes +category: releases +--- + +# Pre-GA Release Notes + +On August 30, 2017, TiDB Pre-GA is released! This release is focused on MySQL compatibility, SQL optimization, stability, and performance. 
+ +## TiDB: + ++ The SQL query optimizer: + - Adjust the cost model + - Use index scan to handle the `where` clause with the `compare` expression which has different types on each side + - Support the Greedy algorithm based Join Reorder ++ Many enhancements have been introduced to be more compatible with MySQL ++ Support `Natural Join` ++ Support the JSON type (Experimental), including the query, update and index of the JSON fields ++ Prune the useless data to reduce the consumption of the executor memory ++ Support configuring prioritization in the SQL statements and automatically set the prioritization for some of the statements according to the query type ++ Complete the expression refactoring; the expression evaluation speed is increased by about 30% + +## Placement Driver (PD): + ++ Support manually changing the leader of the PD cluster + +## TiKV: + ++ Use a dedicated RocksDB instance to store the Raft log ++ Use `DeleteRange` to speed up the deletion of replicas ++ Coprocessor now supports more pushdown operators ++ Improve the performance and stability + +## TiDB Connector for Spark Beta Release: + ++ Implement predicate pushdown ++ Implement aggregation pushdown ++ Implement range pruning ++ Capable of running the full set of TPC-H queries, except for one query that needs view support \ No newline at end of file diff --git a/v1.0/releases/rc1.md b/v1.0/releases/rc1.md new file mode 100755 index 0000000000000..089201df4542e --- /dev/null +++ b/v1.0/releases/rc1.md @@ -0,0 +1,43 @@ +--- +title: TiDB RC1 Release Notes +category: releases +--- + +# TiDB RC1 Release Notes + +On December 23, 2016, TiDB RC1 is released. See the following updates in this release: + +## TiKV: ++ The write speed has been improved. ++ The disk space usage is reduced. ++ Hundreds of TBs of data can be supported. ++ The stability is improved and TiKV can support a cluster with 200 nodes. ++ Supports the Raw KV API and the Golang client. 
+ +## Placement Driver (PD): ++ The scheduling strategy framework is optimized and now the strategy is more flexible and reasonable. ++ The support for `label` is added to support Cross Data Center scheduling. ++ PD Controller is provided to operate the PD cluster more easily. + +## TiDB: ++ The following features are added or improved in the SQL query optimizer: + - Eager aggregation + - More detailed `EXPLAIN` information + - Parallelization of the `UNION` operator + - Optimization of the subquery performance + - Optimization of the conditional push-down + - Optimization of the Cost Based Optimizer (CBO) framework ++ The implementation of the time-related data types is refactored to improve the compatibility with MySQL. ++ More MySQL built-in functions are supported. ++ The speed of the `add index` statement is enhanced. ++ The following statements are supported: + - Use the `CHANGE COLUMN` statement to change the name of a column. + - Use `MODIFY COLUMN` and `CHANGE COLUMN` of the `ALTER TABLE` statement for some of the column type transfers. + +## New tools: ++ `Loader` is added to be compatible with the `mydumper` data format in Percona and provides the following functions: + - Multi-thread import + - Retry if an error occurs + - Breakpoint resume + - Targeted optimization for TiDB ++ The tool for one-click deployment is added. diff --git a/v1.0/releases/rc2.md b/v1.0/releases/rc2.md new file mode 100755 index 0000000000000..8490b46fbf841 --- /dev/null +++ b/v1.0/releases/rc2.md @@ -0,0 +1,50 @@ +--- +title: TiDB RC2 Release Notes +category: releases +--- + +# TiDB RC2 Release Notes + +On March 1, 2017, TiDB RC2 is released! This release is focused on MySQL compatibility, the SQL query optimizer, system stability, and performance. What’s more, a new permission management mechanism is added and users can control data access in the same way as with the MySQL privilege management system. 
+ +## TiDB: + ++ Query optimizer + - Collect column/index statistics and use them in the query optimizer + - Optimize the correlated subquery + - Optimize the Cost Based Optimizer (CBO) framework + - Eliminate aggregation using unique key information + - Refactor the expression evaluation framework + - Convert Distinct to GroupBy + - Support the topn operation push-down ++ Support basic privilege management ++ Add lots of MySQL built-in functions ++ Improve the Alter Table statement and support the modification of table name, default value and comment ++ Support the Create Table Like statement ++ Support the Show Warnings statement ++ Support the Rename Table statement ++ Restrict the size of a single transaction to avoid large transactions blocking the cluster ++ Automatically split data in the process of Load Data ++ Optimize the performance of the AddIndex and Delete statements ++ Support the "ANSI_QUOTES" sql_mode ++ Improve the monitoring system ++ Fix bugs ++ Fix memory leak issues + +## PD: ++ Support location-aware replica scheduling ++ Conduct fast scheduling based on the number of Regions ++ pd-ctl supports more features: + - Add or delete PD + - Obtain Region information with a Key + - Add or delete schedulers and operators + - Obtain cluster label information + +## TiKV: ++ Support Async Apply to improve the entire write performance ++ Use prefix seek to improve the read performance of Write CF ++ Use memory hint prefix to improve the insert performance of Raft CF ++ Optimize the single read transaction performance ++ Support more push-down expressions ++ Improve the monitoring system ++ Fix bugs diff --git a/v1.0/releases/rc3.md b/v1.0/releases/rc3.md new file mode 100755 index 0000000000000..103569ceddb6e --- /dev/null +++ b/v1.0/releases/rc3.md @@ -0,0 +1,61 @@ +--- +title: TiDB RC3 Release Notes +category: releases +--- + +# TiDB RC3 Release Notes + +On June 20, 2017, TiDB RC3 is released! This release is focused on MySQL compatibility, SQL 
optimization, stability, and performance. + +## Highlight: + +- The privilege management is refined to enable users to manage the data access privileges in the same way as in MySQL. +- DDL is accelerated. +- The load balancing policy and process are optimized for performance. +- TiDB-Ansible is open sourced. By using TiDB-Ansible, you can deploy, upgrade, start, and shut down a TiDB cluster with one click. + +## Detailed updates: + +## TiDB: + ++ The following features are added or improved in the SQL query optimizer: + - Support incremental statistics + - Support the `Merge Sort Join` operator + - Support the `Index Lookup Join` operator + - Support the `Optimizer Hint` syntax + - Optimize the memory consumption of the `Scan`, `Join`, `Aggregation` operators + - Optimize the Cost Based Optimizer (CBO) framework + - Refactor `Expression` ++ Support more complete privilege management ++ DDL acceleration ++ Support using the HTTP API to get the data distribution information of tables ++ Support using system variables to control the query concurrency ++ Add more MySQL built-in functions ++ Support using system variables to automatically split a big transaction into smaller ones to commit + +## Placement Driver (PD): + ++ Support gRPC ++ Provide the Disaster Recovery Toolkit ++ Use Garbage Collection to clear stale data automatically ++ Support more efficient data balance ++ Support hot Region scheduling to enable load balancing and speed up the data importing ++ Performance + - Accelerate getting Client TSO + - Improve the efficiency of Region Heartbeat processing ++ Improve the `pd-ctl` function + - Update the Replica configuration dynamically + - Get the Timestamp Oracle (TSO) + - Use ID to get the Region information + +## TiKV: + ++ Support gRPC ++ Support the Sorted String Table (SST) format snapshot to improve the load balancing speed of a cluster ++ Support using the Heap Profile to uncover memory leaks ++ Support Streaming SIMD Extensions (SSE) and speed up 
the CRC32 calculation ++ Accelerate transferring leader for faster load balancing ++ Use Batch Apply to reduce CPU usage and improve the write performance ++ Support parallel Prewrite to improve the transaction write speed ++ Optimize the scheduling of the coprocessor thread pool to reduce the impact of big queries on point get ++ The new Loader supports data importing at the table level, as well as splitting a big table into smaller logical blocks to import concurrently, to improve the data importing speed. diff --git a/v1.0/releases/rc4.md b/v1.0/releases/rc4.md new file mode 100755 index 0000000000000..cde179064aa4e --- /dev/null +++ b/v1.0/releases/rc4.md @@ -0,0 +1,56 @@ +--- +title: TiDB RC4 Release Notes +category: releases +--- + +# TiDB RC4 Release Notes + +On August 4, 2017, TiDB RC4 is released! This release is focused on MySQL compatibility, SQL optimization, stability, and performance. + +## Highlight: + ++ For performance, the write performance is improved significantly, and the computing task scheduling supports prioritization to avoid the impact of OLAP on OLTP. ++ The optimizer is revised for more accurate query cost estimation and for an automatic choice of the `Join` physical operator based on the cost. ++ Many enhancements have been introduced to be more compatible with MySQL. ++ TiSpark is now released to better support the OLAP business scenarios. You can now use Spark to access the data in TiKV. 
+ +## Detailed updates: + +### TiDB: + ++ The SQL query optimizer refactoring: + - Better support for TopN queries + - Support the automatic choice of the `Join` physical operator based on the cost + - Improved Projection Elimination ++ The schema version check is based on tables to avoid the impact of DDL on ongoing transactions ++ Support `BatchIndexJoin` ++ Improve the `Explain` statement ++ Improve the `Index Scan` performance ++ Many enhancements have been introduced to be more compatible with MySQL ++ Support the JSON type and operations ++ Support the configuration of query prioritizing and isolation level + +### Placement Driver (PD): + ++ Support using PD to set the TiKV location labels ++ Optimize the scheduler + - PD can now initiate scheduling commands to TiKV. + - Accelerate the response speed of the region heartbeat. + - Optimize the `balance` algorithm ++ Optimize data loading to speed up failover + +### TiKV: + ++ Support the configuration of query prioritizing ++ Support the RC isolation level ++ Improve Jepsen test results and the stability ++ Support Document Store ++ Coprocessor now supports more pushdown functions ++ Improve the performance and stability + +### TiSpark Beta Release: + ++ Implement predicate pushdown ++ Implement aggregation pushdown ++ Implement range pruning ++ Capable of running the full set of TPC-H queries, except for one query that needs view support diff --git a/v1.0/releases/rn.md b/v1.0/releases/rn.md new file mode 100755 index 0000000000000..232f789502df9 --- /dev/null +++ b/v1.0/releases/rn.md @@ -0,0 +1,24 @@ +--- +title: Release Notes +category: release +--- + +# TiDB Release Notes + + - [2.0 RC1](2rc1.md) + - [1.1 Beta](11beta.md) + - [1.0.8](108.md) + - [1.0.7](107.md) + - [1.1 Alpha](11alpha.md) + - [1.0.6](106.md) + - [1.0.5](105.md) + - [1.0.4](104.md) + - [1.0.3](103.md) + - [1.0.2](102.md) + - [1.0.1](101.md) + - [1.0](ga.md) + - [Pre-GA](prega.md) + - [RC4](rc4.md) + - [RC3](rc3.md) + 
- [RC2](rc2.md) + - [RC1](rc1.md) diff --git a/v1.0/scripts/build.sh b/v1.0/scripts/build.sh new file mode 100755 index 0000000000000..4ede96547ad43 --- /dev/null +++ b/v1.0/scripts/build.sh @@ -0,0 +1,60 @@ +#!/bin/bash + +set -e + +# Use current path for building and installing TiDB. +TIDB_PATH=`pwd` +echo "building TiDB components in $TIDB_PATH" + +# All the binaries are installed in the `bin` directory. +mkdir -p $TIDB_PATH/bin + +# Assume we install go in /usr/local/go +export PATH=$PATH:/usr/local/go/bin + +echo "checking if go is installed" +# Go is required +go version +# The output might be like: go version go1.6 darwin/amd64 + +echo "checking if rust is installed" +# Rust nightly is required +rustc -V +# The output might be like: rustc 1.12.0-nightly (7ad125c4e 2016-07-11) + +# Set the GOPATH correctly. +export GOPATH=$TIDB_PATH/deps/go + +# Build TiDB +echo "building TiDB..." +rm -rf $GOPATH/src/github.com/pingcap/tidb +git clone --depth=1 https://github.com/pingcap/tidb.git $GOPATH/src/github.com/pingcap/tidb +cd $GOPATH/src/github.com/pingcap/tidb + +make +cp -f ./bin/tidb-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiDB is built" + +# Build PD +echo "building PD..." +rm -rf $GOPATH/src/github.com/pingcap/pd +git clone --depth=1 https://github.com/pingcap/pd.git $GOPATH/src/github.com/pingcap/pd +cd $GOPATH/src/github.com/pingcap/pd + +make +cp -f ./bin/pd-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "PD is built" + +# Build TiKV +echo "building TiKV..." +rm -rf $TIDB_PATH/deps/tikv +git clone --depth=1 https://github.com/pingcap/tikv.git $TIDB_PATH/deps/tikv +cd $TIDB_PATH/deps/tikv + +make release + +cp -f ./bin/tikv-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiKV is built" diff --git a/v1.0/scripts/check_requirement.sh b/v1.0/scripts/check_requirement.sh new file mode 100755 index 0000000000000..cae7df70de0ee --- /dev/null +++ b/v1.0/scripts/check_requirement.sh @@ -0,0 +1,118 @@ +#!/bin/bash + +set -e + +echo "Checking requirements..." 
+ +SUDO= +if which sudo &>/dev/null; then + SUDO=sudo +fi + +function get_linux_platform { + if [ -f /etc/redhat-release ]; then + # For CentOS or redhat, we treat all as CentOS. + echo "CentOS" + elif [ -f /etc/lsb-release ]; then + DIST=`cat /etc/lsb-release | grep '^DISTRIB_ID' | awk -F= '{ print $2 }'` + echo "$DIST" + else + echo "Unknown" + fi +} + +function install_go { + echo "Install go ..." + case "$OSTYPE" in + linux*) + curl -L https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz -o golang.tar.gz + ${SUDO} tar -C /usr/local -xzf golang.tar.gz + rm golang.tar.gz + ;; + + darwin*) + curl -L https://storage.googleapis.com/golang/go1.9.2.darwin-amd64.tar.gz -o golang.tar.gz + ${SUDO} tar -C /usr/local -xzf golang.tar.gz + rm golang.tar.gz + ;; + + *) + echo "unsupported $OSTYPE" + exit 1 + ;; + esac +} + +function install_gpp { + echo "Install g++ ..." + case "$OSTYPE" in + linux*) + dist=$(get_linux_platform) + case $dist in + Ubuntu) + ${SUDO} apt-get install -y g++ + ;; + CentOS) + ${SUDO} yum install -y gcc-c++ libstdc++-static + ;; + *) + echo "unsupported platform $dist, you may install g++ manually" + exit 1 + ;; + esac + ;; + + darwin*) + # refer to https://github.com/facebook/rocksdb/blob/master/INSTALL.md + xcode-select --install + brew update + brew tap homebrew/versions + brew install gcc48 --use-llvm + ;; + + *) + echo "unsupported $OSTYPE" + exit 1 + ;; + esac +} + +# Check rust +if which rustc &>/dev/null; then + if ! rustc --version | grep nightly &>/dev/null; then + printf "Please run following command to upgrade Rust to nightly: \n\ +\t curl -sSf https://static.rust-lang.org/rustup.sh | sh -s -- --channel=nightly\n" + exit 1 + fi +else + echo "Install Rust ..." 
+ ${SUDO} curl -sSf https://static.rust-lang.org/rustup.sh | sh -s -- --channel=nightly +fi + +# Check go +if which go &>/dev/null; then + # requires go >= 1.8 + GO_VER_1=`go version | awk 'match($0, /([0-9])+(\.[0-9])+/) { ver = substr($0, RSTART, RLENGTH); split(ver, n, "."); print n[1];}'` + GO_VER_2=`go version | awk 'match($0, /([0-9])+(\.[0-9])+/) { ver = substr($0, RSTART, RLENGTH); split(ver, n, "."); print n[2];}'` + if [[ (($GO_VER_1 -eq 1 && $GO_VER_2 -lt 8)) || (($GO_VER_1 -lt 1)) ]]; then + echo "Please upgrade Go to 1.8 or later." + exit 1 + fi +else + install_go +fi + +# Check g++ +if which g++ &>/dev/null; then + # Check g++ version, RocksDB requires g++ 4.8 or later. + G_VER_1=`g++ -dumpversion | awk '{split($0, n, "."); print n[1];}'` + G_VER_2=`g++ -dumpversion | awk '{split($0, n, "."); print n[2];}'` + if [[ (($G_VER_1 -eq 4 && $G_VER_2 -lt 8)) || (($G_VER_1 -lt 4)) ]]; then + echo "Please upgrade g++ to 4.8 or later." + exit 1 + fi +else + install_gpp +fi + +echo OK \ No newline at end of file diff --git a/v1.0/scripts/generate_pdf.sh b/v1.0/scripts/generate_pdf.sh new file mode 100755 index 0000000000000..d511e1c1d5a80 --- /dev/null +++ b/v1.0/scripts/generate_pdf.sh @@ -0,0 +1,27 @@ +#!/bin/bash + +set -e +# test passed in pandoc 1.19.1 + +MAINFONT="WenQuanYi Micro Hei" +MONOFONT="WenQuanYi Micro Hei Mono" + +# MAINFONT="Tsentsiu Sans HG" +# MONOFONT="Tsentsiu Sans Console HG" + +#_version_tag="$(date '+%Y%m%d').$(git rev-parse --short HEAD)" +_version_tag="$(date '+%Y%m%d')" + +pandoc -N --toc --smart --latex-engine=xelatex \ + --template=templates/template.tex \ + --listings \ + -V title="TiDB Documentation" \ + -V author="PingCAP Inc." 
\ + -V date="v1.0.0\$\sim\$${_version_tag}" \ + -V CJKmainfont="${MAINFONT}" \ + -V mainfont="${MAINFONT}" \ + -V sansfont="${MAINFONT}" \ + -V monofont="${MONOFONT}" \ + -V geometry:margin=1in \ + -V include-after="\\input{templates/copyright.tex}" \ + doc.md -o output.pdf diff --git a/v1.0/scripts/merge_by_toc.py b/v1.0/scripts/merge_by_toc.py new file mode 100755 index 0000000000000..9c02d483a0918 --- /dev/null +++ b/v1.0/scripts/merge_by_toc.py @@ -0,0 +1,156 @@ +#!/usr/bin/env python3 +# coding: utf8 +# +# Generate all-in-one Markdown file for ``doc-cn`` +# + +from __future__ import print_function, unicode_literals + +import re +import os + + +entry_file = "README.md" +followups = [] +in_toc = False +contents = [] + +hyper_link_pattern = re.compile(r'([\-\+]+)\s\[(.*?)\]\((.*?)(#.*?)?\)') +image_link_pattern = re.compile(r'!\[(.*?)\]\((.*?)\)') +level_pattern = re.compile(r'(\s*[\-\+]+)\s') +# match all headings +heading_patthern = re.compile(r'(^#+|\n#+)\s') + +# stage 1, parse toc +with open(entry_file) as fp: + level = 0 + current_level = "" + for line in fp: + if line.startswith("## Documentation List"): + in_toc = True + print("in toc") + elif in_toc and line.startswith('## '): + # yes, toc processing done + # contents.append(line[1:]) # skip 1 level TOC + break + elif in_toc and not line.startswith('#') and line.strip(): + level_str = level_pattern.findall(line)[0] + print("level", level_str) + if len(level_str) > len(current_level): + level += 1 + elif len(level_str) < len(current_level): + level -= 1 + current_level = level_str + + matches = hyper_link_pattern.findall(line) + if matches: + for match in matches: + fpath = match[2] + if fpath.endswith('.md'): + key = ('FILE', level, fpath) + if key not in followups: + print(key) + followups.append(key) + # else: + # followups.append(('RAW', level, line.strip())) + else: + name = line.strip().split(None, 1)[-1] + key = ('TOC', level, name) + if key not in followups: + print(key) + followups.append(key) + 
+ + else: + pass + + # overview part in README.md + followups.insert(1, ("RAW", 0, fp.read())) + +for k in followups: + print(k) + +# stage 2, get file heading +file_link_name = {} +for tp, lv, f in followups: + if tp != 'FILE': + continue + try: + tag = open(f).read().strip().split('\n')[0] + except Exception as e: + tag = "ERROR" + if tag.startswith('# '): + tag = tag[2:] + elif tag.startswith('## '): + tag = tag[3:] + file_link_name[f] = tag.lower().replace(' ', '-') + +print(file_link_name) + + +def replace_link(match): + full = match.group(0) + link_name = match.group(1) + link = match.group(2) + frag = match.group(3) + if link.endswith('.md'): + if not frag: + for fpath in file_link_name: + if os.path.basename(fpath) == os.path.basename(link): + frag = '#' + file_link_name[fpath] + + return '[%s](%s)' % (link_name, frag) + elif link.endswith('.png'): + # special handling for pictures + fname = os.path.basename(link) + return '[%s](./media/%s)' % (link_name, fname) + else: + return full + +def replace_heading_func(diff_level=0): + + def replace_heading(match): + if diff_level == 0: + return match.group(0) + else: + return '\n' + '#' * (match.group(0).count('#') + diff_level) + ' ' + + + return replace_heading + +def replace_img_link(match): + full = match.group(0) + link_name = match.group(1) + link = match.group(2) + + if link.endswith('.png'): + fname = os.path.basename(link) + return '![%s](./media/%s)' % (link_name, fname) + +# stage 3, concat files +for type_, level, name in followups: + if type_ == 'TOC': + contents.append("\n{} {}\n".format('#' * level, name)) + elif type_ == 'RAW': + contents.append(name) + elif type_ == 'FILE': + try: + with open(name) as fp: + chapter = fp.read() + chapter = hyper_link_pattern.sub(replace_link, chapter) + chapter = image_link_pattern.sub(replace_img_link, chapter) + + # fix heading level + diff_level = level - heading_patthern.findall(chapter)[0].count('#') + + print(name, type_, level, diff_level) + chapter = 
heading_patthern.sub(replace_heading_func(diff_level), chapter) + contents.append(chapter) + contents.append('') # add an empty line + except Exception as e: + print("generate file error: ignore!") + +# stage 4, generate final doc.md +with open("doc.md", 'w') as fp: + fp.write('\n'.join(contents)) \ No newline at end of file diff --git a/v1.0/scripts/update.sh b/v1.0/scripts/update.sh new file mode 100755 index 0000000000000..f44f8f530fedb --- /dev/null +++ b/v1.0/scripts/update.sh @@ -0,0 +1,57 @@ +#!/bin/bash + +set -e + +# Use current path for building and installing TiDB. +TIDB_PATH=`pwd` +echo "updating and building TiDB components in $TIDB_PATH" + +# All the binaries are installed in the `bin` directory. +mkdir -p $TIDB_PATH/bin + +# Assume we install go in /usr/local/go +export PATH=$PATH:/usr/local/go/bin + +echo "checking if go is installed" +# Go is required +go version +# The output might be like: go version go1.6 darwin/amd64 + +echo "checking if rust is installed" +# Rust nightly is required +rustc -V +# The output might be like: rustc 1.12.0-nightly (7ad125c4e 2016-07-11) + +# Set the GOPATH correctly. +export GOPATH=$TIDB_PATH/deps/go + +# Build TiDB +echo "updating and building TiDB..." +cd $GOPATH/src/github.com/pingcap/tidb +git pull + +make +cp -f ./bin/tidb-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiDB is built" + +# Build PD +echo "updating and building PD..." +cd $GOPATH/src/github.com/pingcap/pd +git pull + +make +cp -f ./bin/pd-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "PD is built" + +# Build TiKV +echo "updating and building TiKV..." 
+cd $TIDB_PATH/deps/tikv +git pull + +make release + +cp -f ./bin/tikv-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiKV is built" diff --git a/v1.0/scripts/upload.py b/v1.0/scripts/upload.py new file mode 100755 index 0000000000000..d43a1042c5fb3 --- /dev/null +++ b/v1.0/scripts/upload.py @@ -0,0 +1,39 @@ +#!/usr/bin/env python3 +#-*- coding:utf-8 -*- + +import sys +import os +from qiniu import Auth, put_file, etag, urlsafe_base64_encode +import qiniu.config + + +ACCESS_KEY = os.getenv('QINIU_ACCESS_KEY') +SECRET_KEY = os.getenv('QINIU_SECRET_KEY') +BUCKET_NAME = os.getenv('QINIU_BUCKET_NAME') + +assert(ACCESS_KEY and SECRET_KEY and BUCKET_NAME) + +def progress_handler(progress, total): + print("{}/{} {:.2f}".format(progress, total, progress/total*100)) + +# local_file: local file path +# remote_name: the file name to save as after uploading to Qiniu +def upload(local_file, remote_name, ttl=3600): + print(local_file, remote_name, ttl) + # Build the authentication object + q = Auth(ACCESS_KEY, SECRET_KEY) + + # Generate the upload token; options such as the expiration time can be specified + token = q.upload_token(BUCKET_NAME, remote_name, ttl) + + ret, info = put_file(token, remote_name, local_file, progress_handler=progress_handler) + print(info) + assert ret['key'] == remote_name + assert ret['hash'] == etag(local_file) + +if __name__ == "__main__": + local_file = sys.argv[1] + remote_name = sys.argv[2] + upload(local_file, remote_name) + + print("http://download.pingcap.org/{}".format(remote_name)) diff --git a/v1.0/sql/admin.md b/v1.0/sql/admin.md new file mode 100755 index 0000000000000..7c3baaaf3bffd --- /dev/null +++ b/v1.0/sql/admin.md @@ -0,0 +1,132 @@ +--- +title: Database Administration Statements +category: user guide +--- + +# Database Administration Statements + +TiDB manages the database using a number of statements, including granting privileges, modifying system variables, and querying database status. + +## Privilege management + +See [Privilege Management](privilege.md). + +## `SET` statement + +The `SET` statement has multiple functions and forms. 
+ +### Assign values to variables + +```sql +SET variable_assignment [, variable_assignment] ... + +variable_assignment: + user_var_name = expr + | param_name = expr + | local_var_name = expr + | [GLOBAL | SESSION] + system_var_name = expr + | [@@global. | @@session. | @@] + system_var_name = expr +``` + +You can use the above syntax to assign values to variables in TiDB, which include system variables and user-defined variables. All user-defined variables are session variables. The system variables set using `@@global.` or `GLOBAL` are global variables; otherwise, they are session variables. For more information, see [The System Variables](variable.md). + +### `SET CHARACTER` statement and `SET NAMES` + +```sql +SET {CHARACTER SET | CHARSET} + {'charset_name' | DEFAULT} + +SET NAMES {'charset_name' + [COLLATE 'collation_name'] | DEFAULT} +``` + +This statement sets three session system variables (`character_set_client`, `character_set_results` and `character_set_connection`) to the given character set. Currently, the handling of `character_set_connection` differs from MySQL: TiDB sets it to the value of the `character_set_database` variable. + +### Set the password + +```sql +SET PASSWORD [FOR user] = password_option + +password_option: { + 'auth_string' + | PASSWORD('auth_string') +} +``` + +This statement is used to set user passwords. For more information, see [Privilege Management](privilege.md). + +### Set the isolation level + +```sql +SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED; +``` + +This statement is used to set the transaction isolation level. For more information, see [Transaction Isolation Level](transaction.md#transaction-isolation-level). + +## `SHOW` statement + +TiDB supports part of the `SHOW` statements, used to view the Database/Table/Column information and the internal status of the database. 
Currently supported statements: + +```sql +# Supported and similar to MySQL +SHOW CHARACTER SET [like_or_where] +SHOW COLLATION [like_or_where] +SHOW [FULL] COLUMNS FROM tbl_name [FROM db_name] [like_or_where] +SHOW CREATE {DATABASE|SCHEMA} db_name +SHOW CREATE TABLE tbl_name +SHOW DATABASES [like_or_where] +SHOW GRANTS FOR user +SHOW INDEX FROM tbl_name [FROM db_name] +SHOW PRIVILEGES +SHOW [FULL] PROCESSLIST +SHOW [GLOBAL | SESSION] STATUS [like_or_where] +SHOW TABLE STATUS [FROM db_name] [like_or_where] +SHOW [FULL] TABLES [FROM db_name] [like_or_where] +SHOW [GLOBAL | SESSION] VARIABLES [like_or_where] +SHOW WARNINGS + +# Supported to improve compatibility but return null results +SHOW ENGINE engine_name {STATUS | MUTEX} +SHOW [STORAGE] ENGINES +SHOW PLUGINS +SHOW PROCEDURE STATUS [like_or_where] +SHOW TRIGGERS [FROM db_name] [like_or_where] +SHOW EVENTS +SHOW FUNCTION STATUS [like_or_where] + +# TiDB-specific statements for viewing statistics +SHOW STATS_META [like_or_where] +SHOW STATS_HISTOGRAMS [like_or_where] +SHOW STATS_BUCKETS [like_or_where] + + +like_or_where: + LIKE 'pattern' + | WHERE expr +``` + +> **Note**: +> +> - To view statistics using the `SHOW` statement, see [View Statistics](statistics.md#view-statistics). +> - For more information about the `SHOW` statement, see [SHOW Syntax in MySQL](https://dev.mysql.com/doc/refman/5.7/en/show.html). + +## `ADMIN` statement + +This statement is a TiDB extension syntax, used to view the status of TiDB. + +```sql +ADMIN SHOW DDL +ADMIN SHOW DDL JOBS +ADMIN CANCEL DDL JOBS 'job_id' [, 'job_id'] ... +``` + +- `ADMIN SHOW DDL`: To view the currently running DDL jobs. +- `ADMIN SHOW DDL JOBS`: To view all the results in the current DDL job queue (including tasks that are running and waiting to be run) and the last ten results in the completed DDL job queue. 
+- `ADMIN CANCEL DDL JOBS 'job_id' [, 'job_id'] ...`: To cancel the currently running DDL jobs and return whether the corresponding jobs are successfully cancelled. If the operation fails to cancel the jobs, specific reasons are displayed. + + > **Note**: + > + > - This operation can cancel multiple DDL jobs at the same time. You can get the ID of DDL jobs using the `ADMIN SHOW DDL JOBS` statement. + > - If the jobs you want to cancel are finished, the cancellation operation fails. diff --git a/v1.0/sql/aggregate-group-by-functions.md b/v1.0/sql/aggregate-group-by-functions.md new file mode 100755 index 0000000000000..326f36efcddc1 --- /dev/null +++ b/v1.0/sql/aggregate-group-by-functions.md @@ -0,0 +1,92 @@ +--- +title: Aggregate (GROUP BY) Functions +category: user guide +--- + +# Aggregate (GROUP BY) Functions + +## Aggregate (GROUP BY) function descriptions + +This section describes the supported MySQL group (aggregate) functions in TiDB. + +| Name | Description | +|:--------------------------------------------------------------------------------------------------------------|:--------------------------------------------------| +| [`COUNT()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_count) | Return a count of the number of rows returned | +| [`COUNT(DISTINCT)`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_count-distinct) | Return the count of a number of different values | +| [`SUM()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_sum) | Return the sum | +| [`AVG()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_avg) | Return the average value of the argument | +| [`MAX()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_max) | Return the maximum value | +| [`MIN()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_min) | Return the minimum value | +| 
[`GROUP_CONCAT()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat) | Return a concatenated string | + +- Unless otherwise stated, group functions ignore `NULL` values. +- If you use a group function in a statement containing no `GROUP BY` clause, it is equivalent to grouping on all rows. For more information, see [TiDB handling of GROUP BY](#tidb-handling-of-group-by). + +## GROUP BY modifiers + +TiDB does not currently support any `GROUP BY` modifiers. Support is planned for a future release. For more information, see [#4250](https://github.com/pingcap/tidb/issues/4250). + +## TiDB handling of GROUP BY + +TiDB behaves as MySQL does with the SQL mode [`ONLY_FULL_GROUP_BY`](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_only_full_group_by) disabled: it permits the `SELECT` list, `HAVING` condition, or `ORDER BY` list to refer to non-aggregated columns even if the columns are not functionally dependent on `GROUP BY` columns. + +For example, this query is illegal in MySQL 5.7.5 with `ONLY_FULL_GROUP_BY` enabled because the non-aggregated column "b" in the `SELECT` list does not appear in the `GROUP BY`: + +```sql +drop table if exists t; +create table t(a bigint, b bigint, c bigint); +insert into t values(1, 2, 3), (2, 2, 3), (3, 2, 3); +select a, b, sum(c) from t group by a; +``` + +The preceding query is legal in TiDB because TiDB does not currently support the SQL mode `ONLY_FULL_GROUP_BY`. Support is planned for a future release. For more information, see [#4248](https://github.com/pingcap/tidb/issues/4248). + +Suppose that we execute the following query, expecting the results to be ordered by "c": +```sql +drop table if exists t; +create table t(a bigint, b bigint, c bigint); +insert into t values(1, 2, 1), (1, 2, 2), (1, 3, 1), (1, 3, 2); +select distinct a, b from t order by c; +``` + +To order the result, duplicates must be eliminated first. But to do so, which row should we keep?
This choice influences the retained value of "c", which in turn influences ordering and makes it arbitrary as well. + +In MySQL, a query that has `DISTINCT` and `ORDER BY` is rejected as invalid if any `ORDER BY` expression does not satisfy at least one of these conditions: +- The expression is equal to one in the `SELECT` list +- All columns referenced by the expression and belonging to the query's selected tables are elements of the `SELECT` list + +But in TiDB, the above query is legal. For more information, see [#4254](https://github.com/pingcap/tidb/issues/4254). + +Another TiDB extension to standard SQL permits references in the `HAVING` clause to aliased expressions in the `SELECT` list. For example, the following query returns "name" values that occur only once in table "orders": +```sql +select name, count(name) from orders +group by name +having count(name) = 1; +``` + +The TiDB extension permits the use of an alias in the `HAVING` clause for the aggregated column: +```sql +select name, count(name) as c from orders +group by name +having c = 1; +``` + +Standard SQL permits only column expressions in `GROUP BY` clauses, so a statement such as this is invalid because "FLOOR(value/100)" is a noncolumn expression: +```sql +select id, floor(value/100) +from tbl_name +group by id, floor(value/100); +``` + +TiDB extends standard SQL to permit noncolumn expressions in `GROUP BY` clauses and considers the preceding statement valid. + +Standard SQL also does not permit aliases in `GROUP BY` clauses. TiDB extends standard SQL to permit aliases, so another way to write the query is as follows: +```sql +select id, floor(value/100) as val +from tbl_name +group by id, val; +``` + +## Detection of functional dependence + +TiDB does not currently support the SQL mode `ONLY_FULL_GROUP_BY` or detection of functional dependence. Support is planned for a future release. For more information, see [#4248](https://github.com/pingcap/tidb/issues/4248).
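As a quick illustration of the rule above that group functions ignore `NULL` values, consider the following sketch (the table `t1` and its data are hypothetical):

```sql
drop table if exists t1;
create table t1(a int);
insert into t1 values (1), (null), (2), (null);
-- COUNT(*) counts every row, while COUNT(a), SUM(a), and AVG(a) skip NULLs:
-- expected result: 4, 2, 3, 1.5
select count(*), count(a), sum(a), avg(a) from t1;
```

Note that `COUNT(*)` counts rows regardless of NULLs, whereas column-based aggregates operate only on the non-NULL values.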
diff --git a/v1.0/sql/bit-functions-and-operators.md b/v1.0/sql/bit-functions-and-operators.md new file mode 100755 index 0000000000000..6f6fe044c03df --- /dev/null +++ b/v1.0/sql/bit-functions-and-operators.md @@ -0,0 +1,20 @@ +--- +title: Bit Functions and Operators +category: user guide +--- + +# Bit Functions and Operators + +In TiDB, the usage of bit functions and operators is similar to MySQL. See [Bit Functions and Operators](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html). + +**Bit functions and operators** + +| Name | Description | +| :------| :------------- | +| [`BIT_COUNT()`](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#function_bit-count) | Return the number of bits that are set as 1 | +| [&](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-and) | Bitwise AND | +| [~](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-invert) | Bitwise inversion | +| [\|](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-or) | Bitwise OR | +| [^](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-xor) | Bitwise XOR | +| [<<](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_left-shift) | Left shift | +| [>>](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_right-shift) | Right shift | diff --git a/v1.0/sql/cast-functions-and-operators.md b/v1.0/sql/cast-functions-and-operators.md new file mode 100755 index 0000000000000..67d387314df85 --- /dev/null +++ b/v1.0/sql/cast-functions-and-operators.md @@ -0,0 +1,17 @@ +--- +title: Cast Functions and Operators +category: user guide +--- + +# Cast Functions and Operators + + +| Name | Description | +| ---------------------------------------- | -------------------------------- | +| [`BINARY`](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#operator_binary) | Cast a string to a binary string | +| 
[`CAST()`](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#function_cast) | Cast a value as a certain type | +| [`CONVERT()`](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#function_convert) | Cast a value as a certain type | + +Cast functions and operators enable conversion of values from one data type to another. + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html). \ No newline at end of file diff --git a/v1.0/sql/character-set-configuration.md b/v1.0/sql/character-set-configuration.md new file mode 100755 index 0000000000000..8313150e7c330 --- /dev/null +++ b/v1.0/sql/character-set-configuration.md @@ -0,0 +1,10 @@ +--- +title: Character Set Configuration +category: user guide +--- + +# Character Set Configuration + +Currently, TiDB does not support configuring the character set. The default character set is utf8. + +For more information, see [Character Set Configuration in MySQL](https://dev.mysql.com/doc/refman/5.7/en/charset-configuration.html). \ No newline at end of file diff --git a/v1.0/sql/character-set-support.md b/v1.0/sql/character-set-support.md new file mode 100755 index 0000000000000..5b663d95327d7 --- /dev/null +++ b/v1.0/sql/character-set-support.md @@ -0,0 +1,200 @@ +--- +title: Character Set Support +category: user guide +--- + +# Character Set Support + +A character set is a set of symbols and encodings. A collation is a set of rules for comparing characters in a character set. 
+ +Currently, TiDB supports the following character sets: + +```sql +mysql> SHOW CHARACTER SET; ++---------|---------------|-------------------|--------+ +| Charset | Description | Default collation | Maxlen | ++---------|---------------|-------------------|--------+ +| utf8 | UTF-8 Unicode | utf8_bin | 3 | +| utf8mb4 | UTF-8 Unicode | utf8mb4_bin | 4 | +| ascii | US ASCII | ascii_bin | 1 | +| latin1 | Latin1 | latin1_bin | 1 | +| binary | binary | binary | 1 | ++---------|---------------|-------------------|--------+ +5 rows in set (0.00 sec) +``` + +> **Note**: In TiDB, utf8 is treated as utf8mb4. + +Each character set has at least one collation, and most have several. You can use the following statement to display the collations for a given character set: + +```sql +mysql> SHOW COLLATION WHERE Charset = 'latin1'; ++-------------------|---------|------|---------|----------|---------+ +| Collation | Charset | Id | Default | Compiled | Sortlen | ++-------------------|---------|------|---------|----------|---------+ +| latin1_german1_ci | latin1 | 5 | | Yes | 1 | +| latin1_swedish_ci | latin1 | 8 | Yes | Yes | 1 | +| latin1_danish_ci | latin1 | 15 | | Yes | 1 | +| latin1_german2_ci | latin1 | 31 | | Yes | 1 | +| latin1_bin | latin1 | 47 | | Yes | 1 | +| latin1_general_ci | latin1 | 48 | | Yes | 1 | +| latin1_general_cs | latin1 | 49 | | Yes | 1 | +| latin1_spanish_ci | latin1 | 94 | | Yes | 1 | ++-------------------|---------|------|---------|----------|---------+ +8 rows in set (0.00 sec) +``` + +The `latin1` collations have the following meanings: + +| Collation | Meaning | +|:--------------------|:----------------------------------------------------| +| `latin1_bin` | Binary according to `latin1` encoding | +| `latin1_danish_ci` | Danish/Norwegian | +| `latin1_general_ci` | Multilingual (Western European) | +| `latin1_general_cs` | Multilingual (ISO Western European), case sensitive | +| `latin1_german1_ci` | German DIN-1 (dictionary order) | +| 
`latin1_german2_ci` | German DIN-2 (phone book order) | +| `latin1_spanish_ci` | Modern Spanish | +| `latin1_swedish_ci` | Swedish/Finnish | + +Each character set has a default collation. For example, the default collation for utf8 is `utf8_bin`. + +> **Note**: The collations in TiDB are case sensitive. + +## Collation naming conventions + +The collation names in TiDB follow these conventions: + +- The prefix of a collation is its corresponding character set, generally followed by one or more suffixes indicating other collation characteristics. For example, `utf8_general_ci` and `latin1_swedish_ci` are collations for the utf8 and latin1 character sets, respectively. The `binary` character set has a single collation, also named `binary`, with no suffixes. +- A language-specific collation includes a language name. For example, `utf8_turkish_ci` and `utf8_hungarian_ci` sort characters for the utf8 character set using the rules of Turkish and Hungarian, respectively. +- Collation suffixes indicate whether a collation is case and accent sensitive, or binary. The following table shows the suffixes used to indicate these characteristics. + + | Suffix | Meaning | + |:-------|:-------------------| + | \_ai | Accent insensitive | + | \_as | Accent sensitive | + | \_ci | Case insensitive | + | \_cs | Case sensitive | + | \_bin | Binary | + +> **Note**: For now, TiDB supports only some of the collations in the above table. + +## Database character set and collation + +Each database has a character set and a collation. You can use the `CREATE DATABASE` statement to specify the database character set and collation: + +```sql +CREATE DATABASE db_name + [[DEFAULT] CHARACTER SET charset_name] + [[DEFAULT] COLLATE collation_name] +``` +Here, `DATABASE` can be replaced with `SCHEMA`. + +Different databases can use different character sets and collations. 
Use the `character_set_database` and `collation_database` system variables to see the character set and collation of the current database: + +```sql +mysql> create schema test1 character set utf8 COLLATE utf8_general_ci; +Query OK, 0 rows affected (0.09 sec) + +mysql> use test1; +Database changed +mysql> SELECT @@character_set_database, @@collation_database; ++--------------------------|----------------------+ +| @@character_set_database | @@collation_database | ++--------------------------|----------------------+ +| utf8 | utf8_general_ci | ++--------------------------|----------------------+ +1 row in set (0.00 sec) + +mysql> create schema test2 character set latin1 COLLATE latin1_general_ci; +Query OK, 0 rows affected (0.09 sec) + +mysql> use test2; +Database changed +mysql> SELECT @@character_set_database, @@collation_database; ++--------------------------|----------------------+ +| @@character_set_database | @@collation_database | ++--------------------------|----------------------+ +| latin1 | latin1_general_ci | ++--------------------------|----------------------+ +1 row in set (0.00 sec) +``` + +You can also see the two values in `INFORMATION_SCHEMA`: + +```sql +SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME +FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'db_name'; +``` + +## Table character set and collation + +You can use the following statements to specify the character set and collation for tables: + +```sql +CREATE TABLE tbl_name (column_list) + [[DEFAULT] CHARACTER SET charset_name] + [COLLATE collation_name] + +ALTER TABLE tbl_name + [[DEFAULT] CHARACTER SET charset_name] + [COLLATE collation_name] +``` + +For example: + +```sql +mysql> CREATE TABLE t1(a int) CHARACTER SET utf8 COLLATE utf8_general_ci; +Query OK, 0 rows affected (0.08 sec) +``` +The table character set and collation are used as the default values for column definitions if the column character set and collation are not specified in individual column definitions. 
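As a sketch of this defaulting rule (the table and column names here are illustrative), a column without its own character set declaration inherits the table's settings, while an explicit declaration overrides them:

```sql
CREATE TABLE t2 (
    a VARCHAR(10),                      -- inherits the table defaults: utf8 / utf8_general_ci
    b VARCHAR(10) CHARACTER SET latin1  -- overrides the table character set for this column
) CHARACTER SET utf8 COLLATE utf8_general_ci;

-- The Collation column of the output shows the effective collation of each column
SHOW FULL COLUMNS FROM t2;
```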
+ +## Column character set and collation + +See the following syntax for specifying the character set and collation for columns: + +```sql +col_name {CHAR | VARCHAR | TEXT} (col_length) + [CHARACTER SET charset_name] + [COLLATE collation_name] + +col_name {ENUM | SET} (val_list) + [CHARACTER SET charset_name] + [COLLATE collation_name] +``` + +## Connection character sets and collations + +- The server character set and collation are the values of the `character_set_server` and `collation_server` system variables. +- The character set and collation of the default database are the values of the `character_set_database` and `collation_database` system variables. +- You can use the `character_set_connection` and `collation_connection` system variables to specify the character set and collation for each connection. +- The `character_set_client` variable sets the character set of the statements sent by the client. The `character_set_results` system variable indicates the character set in which the server returns query results to the client, including the metadata of the result. + +You can use the following statements to specify the character set and collation related to the client: + +- `SET NAMES 'charset_name' [COLLATE 'collation_name']` + + `SET NAMES` indicates what character set the client uses to send SQL statements to the server. `SET NAMES utf8` indicates that all the requests from the client use utf8, and so do the results from the server. + + The `SET NAMES 'charset_name'` statement is equivalent to the following statement combination: + + ```sql + SET character_set_client = charset_name; + SET character_set_results = charset_name; + SET character_set_connection = charset_name; + ``` + + `COLLATE` is optional; if it is absent, the default collation of `charset_name` is used. 
+ +- `SET CHARACTER SET 'charset_name'` + + Similar to `SET NAMES`, the `SET CHARACTER SET 'charset_name'` statement is equivalent to the following statement combination: + + ```sql + SET character_set_client = charset_name; + SET character_set_results = charset_name; + SET collation_connection = @@collation_database; + ``` + +For more information, see [Connection Character Sets and Collations in MySQL](https://dev.mysql.com/doc/refman/5.7/en/charset-connection.html). diff --git a/v1.0/sql/comment-syntax.md b/v1.0/sql/comment-syntax.md new file mode 100755 index 0000000000000..08801b783700d --- /dev/null +++ b/v1.0/sql/comment-syntax.md @@ -0,0 +1,117 @@ +--- +title: Comment Syntax +category: user guide +--- + +# Comment Syntax + +TiDB supports three comment styles: + +- Use `#` to comment a line. +- Use `--` to comment a line; this style requires at least one whitespace character after `--`. +- Use `/* */` to comment a block or multiple lines. + +Example: + +``` +mysql> SELECT 1+1; # This comment continues to the end of line ++------+ +| 1+1 | ++------+ +| 2 | ++------+ +1 row in set (0.00 sec) + +mysql> SELECT 1+1; -- This comment continues to the end of line ++------+ +| 1+1 | ++------+ +| 2 | ++------+ +1 row in set (0.00 sec) + +mysql> SELECT 1 /* this is an in-line comment */ + 1; ++--------+ +| 1 + 1 | ++--------+ +| 2 | ++--------+ +1 row in set (0.01 sec) + +mysql> SELECT 1+ + -> /* + /*> this is a + /*> multiple-line comment + /*> */ + -> 1; ++-------+ +| 1+ + +1 | ++-------+ +| 2 | ++-------+ +1 row in set (0.00 sec) + +mysql> SELECT 1+1--1; ++--------+ +| 1+1--1 | ++--------+ +| 3 | ++--------+ +1 row in set (0.01 sec) +``` + +Similar to MySQL, TiDB supports a variant of the C comment style: + +``` +/*! Specific code */ +``` + +In this comment style, TiDB runs the statements in the comment. This syntax makes the SQL statements ignored by other databases while still run in TiDB. + +For example: + +``` +SELECT /*! STRAIGHT_JOIN */ col1 FROM table1,table2 WHERE ... 
+``` + +In TiDB, you can also use another version: + +``` +SELECT STRAIGHT_JOIN col1 FROM table1,table2 WHERE ... +``` + +If the server version number is specified in the comment, for example, `/*!50110 KEY_BLOCK_SIZE=1024 */`, in MySQL it means that the contents of this comment are processed only when the MySQL version is 5.1.10 or higher. But in TiDB, the version number does not work and all contents of the comment are processed. + +Another type of comment is specially treated as an optimizer hint: + +``` +SELECT /*+ hint */ FROM ...; +``` + +Because hints are embedded in comments like `/*+ xxx */`, the MySQL client strips comments by default in versions earlier than 5.7.7. To use hints in those earlier versions, add the `--comments` option when you start the client. For example: + +``` +mysql -h 127.0.0.1 -P 4000 -uroot --comments +``` + +Currently, TiDB supports the following specific hints: + +- TIDB_SMJ(t1, t2) + + ``` + SELECT /*+ TIDB_SMJ(t1, t2) */ * from t1,t2 where t1.id = t2.id + ``` + + This hint tells the optimizer to use the Sort Merge Join algorithm, which usually consumes less memory but takes longer to run. It is recommended when the data volume is very large or the system memory is insufficient. + +- TIDB_INLJ(t1, t2) + + ``` + SELECT /*+ TIDB_INLJ(t1, t2) */ * from t1,t2 where t1.id = t2.id + ``` + + This hint tells the optimizer to use the Index Nested Loop Join algorithm. This algorithm is faster and consumes fewer system resources in some scenarios, while it may be slower and consume more system resources in others. You can try it in scenarios where the result set is small (fewer than 10,000 rows) after filtering by the `WHERE` condition. The parameter in `TIDB_INLJ()` is the candidate driving table (outer table) when the query plan is created. In other words, `TIDB_INLJ(t1)` only uses `t1` as the driving table to create the query plan. 
For more information, see [Comment Syntax](https://dev.mysql.com/doc/refman/5.7/en/comments.html). diff --git a/v1.0/sql/connection-and-APIs.md b/v1.0/sql/connection-and-APIs.md new file mode 100755 index 0000000000000..d7bbaaea4e367 --- /dev/null +++ b/v1.0/sql/connection-and-APIs.md @@ -0,0 +1,95 @@ +--- +title: Connectors and APIs +category: user guide +--- + +# Connectors and APIs + +Database Connectors provide connectivity to the TiDB server for client programs. APIs provide low-level access to the MySQL protocol and MySQL resources. Both Connectors and APIs enable you to connect to TiDB and execute MySQL statements from another language or environment, including ODBC, Java (JDBC), Perl, Python, PHP, Ruby, and C. + +TiDB is compatible with all Connectors and APIs of MySQL (5.6, 5.7), including: + +- [MySQL Connector/C](https://dev.mysql.com/doc/refman/5.7/en/connector-c-info.html) +- [MySQL Connector/C++](https://dev.mysql.com/doc/refman/5.7/en/connector-cpp-info.html) +- [MySQL Connector/J](https://dev.mysql.com/doc/refman/5.7/en/connector-j-info.html) +- [MySQL Connector/Net](https://dev.mysql.com/doc/refman/5.7/en/connector-net-info.html) +- [MySQL Connector/ODBC](https://dev.mysql.com/doc/refman/5.7/en/connector-odbc-info.html) +- [MySQL Connector/Python](https://dev.mysql.com/doc/refman/5.7/en/connector-python-info.html) +- [MySQL C API](https://dev.mysql.com/doc/refman/5.7/en/c-api.html) +- [MySQL PHP API](https://dev.mysql.com/doc/refman/5.7/en/apis-php-info.html) +- [MySQL Perl API](https://dev.mysql.com/doc/refman/5.7/en/apis-perl.html) +- [MySQL Python API](https://dev.mysql.com/doc/refman/5.7/en/apis-python.html) +- [MySQL Ruby APIs](https://dev.mysql.com/doc/refman/5.7/en/apis-ruby.html) +- [MySQL Tcl API](https://dev.mysql.com/doc/refman/5.7/en/apis-tcl.html) +- [MySQL Eiffel Wrapper](https://dev.mysql.com/doc/refman/5.7/en/apis-eiffel.html) +- [MySQL Go API](https://github.com/go-sql-driver/mysql) + +## Connect to TiDB using MySQL Connectors + 
+Oracle develops the following APIs, and TiDB is compatible with all of them: + +- [MySQL Connector/C](https://dev.mysql.com/doc/refman/5.7/en/connector-c-info.html): a standalone replacement for `libmysqlclient`, to be used for C applications +- [MySQL Connector/C++](https://dev.mysql.com/doc/refman/5.7/en/connector-cpp-info.html): to enable C++ applications to connect to MySQL +- [MySQL Connector/J](https://dev.mysql.com/doc/refman/5.7/en/connector-j-info.html): to enable Java applications to connect to MySQL using the standard JDBC API +- [MySQL Connector/Net](https://dev.mysql.com/doc/refman/5.7/en/connector-net-info.html): to enable .Net applications to connect to MySQL; [MySQL for Visual Studio](https://dev.mysql.com/doc/visual-studio/en/) uses this; supports Microsoft Visual Studio 2012, 2013, 2015, and 2017 versions +- [MySQL Connector/ODBC](https://dev.mysql.com/doc/refman/5.7/en/connector-odbc-info.html): the standard ODBC API; supports Windows, Unix, and OS X platforms +- [MySQL Connector/Python](https://dev.mysql.com/doc/refman/5.7/en/connector-python-info.html): to enable Python applications to connect to MySQL, compliant with the [Python DB API version 2.0](http://www.python.org/dev/peps/pep-0249/) + +## Connect to TiDB using MySQL C API + +If you use C language programs to connect to TiDB, you can link against `libmysqlclient` directly and use the MySQL [C API](https://dev.mysql.com/doc/refman/5.7/en/c-api.html). This is one of the major connection methods using the C language, widely used by various clients and APIs, including Connector/C. + +## Connect to TiDB using third-party MySQL APIs + +The third-party APIs are not developed by Oracle. 
The following table lists the commonly used third-party APIs: + +| Environment | API | Type | Notes | +| -------------- | ---------------------------------------- | -------------------------------- | ---------------------------------------- | +| Ada | GNU Ada MySQL Bindings | `libmysqlclient` | See [MySQL Bindings for GNU Ada](http://gnade.sourceforge.net/) | +| C | C API | `libmysqlclient` | See [Section 27.8, “MySQL C API”](https://dev.mysql.com/doc/refman/5.7/en/c-api.html) | +| C | Connector/C | Replacement for `libmysqlclient` | See [MySQL Connector/C Developer Guide](https://dev.mysql.com/doc/connector-c/en/) | +| C++ | Connector/C++ | `libmysqlclient` | See [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) | +| | MySQL++ | `libmysqlclient` | See [MySQL++ Web site](http://tangentsoft.net/mysql++/doc/) | +| | MySQL wrapped | `libmysqlclient` | See [MySQL wrapped](http://www.alhem.net/project/mysql/) | +| Go | go-sql-driver | Native Driver | See [Mysql Go API](https://github.com/go-sql-driver/mysql) | +| Cocoa | MySQL-Cocoa | `libmysqlclient` | Compatible with the Objective-C Cocoa environment. 
| +| D | MySQL for D | `libmysqlclient` | See [MySQL for D](http://www.steinmole.de/d/) | +| Eiffel | Eiffel MySQL | `libmysqlclient` | See [Section 27.14, “MySQL Eiffel Wrapper”](https://dev.mysql.com/doc/refman/5.7/en/apis-eiffel.html) | +| Erlang | `erlang-mysql-driver` | `libmysqlclient` | See [`erlang-mysql-driver`](http://code.google.com/p/erlang-mysql-driver/) | +| Haskell | Haskell MySQL Bindings | Native Driver | See [Brian O'Sullivan's pure Haskell MySQL bindings](http://www.serpentine.com/blog/software/mysql/) | +| | `hsql-mysql` | `libmysqlclient` | See [MySQL driver for Haskell](http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hsql-mysql-1.7) | +| Java/JDBC | Connector/J | Native Driver | See [MySQL Connector/J 5.1 Developer Guide](https://dev.mysql.com/doc/connector-j/5.1/en/) | +| Kaya | MyDB | `libmysqlclient` | See [MyDB](http://kayalang.org/library/latest/MyDB) | +| Lua | LuaSQL | `libmysqlclient` | See [LuaSQL](http://keplerproject.github.io/luasql/doc/us/) | +| .NET/Mono | Connector/Net | Native Driver | See [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | +| Objective Caml | Objective Caml MySQL Bindings | `libmysqlclient` | See [MySQL Bindings for Objective Caml](http://raevnos.pennmush.org/code/ocaml-mysql/) | +| Octave | Database bindings for GNU Octave | `libmysqlclient` | See [Database bindings for GNU Octave](http://octave.sourceforge.net/database/index.html) | +| ODBC | Connector/ODBC | `libmysqlclient` | See [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | +| Perl | `DBI`/`DBD::mysql` | `libmysqlclient` | See [Section 27.10, “MySQL Perl API”](https://dev.mysql.com/doc/refman/5.7/en/apis-perl.html) | +| | `Net::MySQL` | Native Driver | See [`Net::MySQL`](http://search.cpan.org/dist/Net-MySQL/MySQL.pm) at CPAN | +| PHP | `mysql`, `ext/mysql` interface (deprecated) | `libmysqlclient` | See [Original MySQL 
API](https://dev.mysql.com/doc/apis-php/en/apis-php-mysql.html) | +| | `mysqli`, `ext/mysqli` interface | `libmysqlclient` | See [MySQL Improved Extension](https://dev.mysql.com/doc/apis-php/en/apis-php-mysqli.html) | +| | `PDO_MYSQL` | `libmysqlclient` | See [MySQL Functions (PDO_MYSQL)](https://dev.mysql.com/doc/apis-php/en/apis-php-pdo-mysql.html) | +| | PDO mysqlnd | Native Driver | | +| Python | Connector/Python | Native Driver | See [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | +| Python | Connector/Python C Extension | `libmysqlclient` | See [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | +| | MySQLdb | `libmysqlclient` | See [Section 27.11, “MySQL Python API”](https://dev.mysql.com/doc/refman/5.7/en/apis-python.html) | +| Ruby | MySQL/Ruby | `libmysqlclient` | Uses `libmysqlclient`. See [Section 27.12.1, “The MySQL/Ruby API”](https://dev.mysql.com/doc/refman/5.7/en/apis-ruby-mysqlruby.html) | +| | Ruby/MySQL | Native Driver | See [Section 27.12.2, “The Ruby/MySQL API”](https://dev.mysql.com/doc/refman/5.7/en/apis-ruby-rubymysql.html) | +| Scheme | `Myscsh` | `libmysqlclient` | See [`Myscsh`](https://github.com/aehrisch/myscsh) | +| SPL | `sql_mysql` | `libmysqlclient` | See [`sql_mysql` for SPL](http://www.clifford.at/spl/spldoc/sql_mysql.html) | +| Tcl | MySQLtcl | `libmysqlclient` | See [Section 27.13, “MySQL Tcl API”](https://dev.mysql.com/doc/refman/5.7/en/apis-tcl.html) | + +## Connector versions supported by TiDB + +| Connector | Connector Version | +| ---------------- | ---------------------------- | +| Connector/C | 6.1.0 GA | +| Connector/C++ | 1.0.5 GA | +| Connector/J | 5.1.8 | +| Connector/Net | 6.9.9 GA | +| Connector/Net | 6.8.8 GA | +| Connector/ODBC | 5.1 | +| Connector/ODBC | 3.51 (Unicode not supported) | +| Connector/Python | 2.0 | +| Connector/Python | 1.2 | diff --git a/v1.0/sql/control-flow-functions.md b/v1.0/sql/control-flow-functions.md new file 
mode 100755 index 0000000000000..d913dd3bbb95b --- /dev/null +++ b/v1.0/sql/control-flow-functions.md @@ -0,0 +1,14 @@ +--- +title: Control Flow Functions +category: user guide +--- + +# Control Flow Functions + +| Name | Description | +|:--------------------------------------------------------------------------------------------------|:----------------------------------| +| [`CASE`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case) | Case operator | +| [`IF()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_if) | If/else construct | +| [`IFNULL()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_ifnull) | Null if/else construct | +| [`NULLIF()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_nullif) | Return NULL if expr1 = expr2 | + diff --git a/v1.0/sql/datatype.md b/v1.0/sql/datatype.md new file mode 100755 index 0000000000000..5cd05aed5b03b --- /dev/null +++ b/v1.0/sql/datatype.md @@ -0,0 +1,334 @@ +--- +title: TiDB Data Type +category: user guide +--- + +# TiDB Data Type + +## Overview + +TiDB supports all the data types in MySQL except the Spatial type, including numeric type, string type, date & time type, and JSON type. + +The definition of the data type is: `T(M[, D])`. In this format: + +- `T` indicates the specific data type. +- `M` indicates the maximum display width for integer types. For floating-point and fixed-point types, `M` is the total number of digits that can be stored (the precision). For string types, `M` is the maximum length. The maximum permissible value of M depends on the data type. +- `D` applies to floating-point and fixed-point types and indicates the number of digits following the decimal point (the scale). +- `fsp` applies to the TIME, DATETIME, and TIMESTAMP types and represents the fractional seconds precision. The `fsp` value, if given, must be in the range 0 to 6. 
A value of 0 signifies that there is no fractional part. If omitted, the default precision is 0. + +## Numeric types + +### Overview + +TiDB supports all the MySQL numeric types, including: + ++ Integer Types (Exact Value) ++ Floating-Point Types (Approximate Value) ++ Fixed-Point Types (Exact Value) + +### Integer types (exact value) + +TiDB supports all the MySQL integer types, including INTEGER/INT, TINYINT, SMALLINT, MEDIUMINT, and BIGINT. For more information, see [Numeric Type Overview in MySQL](https://dev.mysql.com/doc/refman/5.7/en/numeric-type-overview.html). + +#### Type definition + +Syntax: + +```sql +BIT[(M)] +> The BIT data type. A type of BIT(M) enables storage of M-bit values. M can range from 1 to 64. + +TINYINT[(M)] [UNSIGNED] [ZEROFILL] +> The TINYINT data type. The signed range is [-128, 127], and the unsigned range is [0, 255]. + +BOOL, BOOLEAN +> These types are synonyms for TINYINT(1). A value of "0" is considered False; otherwise, it is considered True. In TiDB, True is "1" and False is "0". + +SMALLINT[(M)] [UNSIGNED] [ZEROFILL] +> SMALLINT. The signed range is [-32768, 32767], and the unsigned range is [0, 65535]. + +MEDIUMINT[(M)] [UNSIGNED] [ZEROFILL] +> MEDIUMINT. The signed range is [-8388608, 8388607], and the unsigned range is [0, 16777215]. + +INT[(M)] [UNSIGNED] [ZEROFILL] +> INT. The signed range is [-2147483648, 2147483647], and the unsigned range is [0, 4294967295]. + +INTEGER[(M)] [UNSIGNED] [ZEROFILL] +> Same as INT. + +BIGINT[(M)] [UNSIGNED] [ZEROFILL] +> BIGINT. The signed range is [-9223372036854775808, 9223372036854775807], and the unsigned range is [0, 18446744073709551615]. + +``` +The meaning of the fields: + +| Syntax Element | Description | +| -------- | ------------------------------- | +| M | the display width of the type. Optional. | +| UNSIGNED | UNSIGNED. If omitted, it is SIGNED. 
| +| ZEROFILL | If you specify ZEROFILL for a numeric column, TiDB automatically adds the UNSIGNED attribute to the column. | + +#### Storage and range + +See the following table for the storage requirements and the minimum/maximum values of each data type: + +| Type | Storage Required (bytes) | Minimum Value (Signed/Unsigned) | Maximum Value (Signed/Unsigned) | +| ----------- |----------|-----------------------| --------------------- | +| `TINYINT` | 1 | -128 / 0 | 127 / 255 | +| `SMALLINT` | 2 | -32768 / 0 | 32767 / 65535 | +| `MEDIUMINT` | 3 | -8388608 / 0 | 8388607 / 16777215 | +| `INT` | 4 | -2147483648 / 0 | 2147483647 / 4294967295 | +| `BIGINT` | 8 | -9223372036854775808 / 0 | 9223372036854775807 / 18446744073709551615 | + +### Floating-point types (approximate value) + +TiDB supports all the MySQL floating-point types, including FLOAT and DOUBLE. For more information, see [Floating-Point Types (Approximate Value) - FLOAT, DOUBLE in MySQL](https://dev.mysql.com/doc/refman/5.7/en/floating-point-types.html). + +#### Type definition + +Syntax: + +```sql +FLOAT[(M,D)] [UNSIGNED] [ZEROFILL] +> A small (single-precision) floating-point number. Permissible values are -3.402823466E+38 to -1.175494351E-38, 0, and 1.175494351E-38 to 3.402823466E+38. These are the theoretical limits, based on the IEEE standard. The actual range might be slightly smaller depending on your hardware or operating system. + +DOUBLE[(M,D)] [UNSIGNED] [ZEROFILL] +> A normal-size (double-precision) floating-point number. Permissible values are -1.7976931348623157E+308 to -2.2250738585072014E-308, 0, and 2.2250738585072014E-308 to 1.7976931348623157E+308. These are the theoretical limits, based on the IEEE standard. The actual range might be slightly smaller depending on your hardware or operating system. + +DOUBLE PRECISION [(M,D)] [UNSIGNED] [ZEROFILL], REAL[(M,D)] [UNSIGNED] [ZEROFILL] +> Synonym for DOUBLE. + +FLOAT(p) [UNSIGNED] [ZEROFILL] +> A floating-point number. 
p represents the precision in bits, but TiDB uses this value only to determine whether to use FLOAT or DOUBLE for the resulting data type. If p is from 0 to 24, the data type becomes FLOAT with no M or D values. If p is from 25 to 53, the data type becomes DOUBLE with no M or D values. The range of the resulting column is the same as for the single-precision FLOAT or double-precision DOUBLE data types described earlier in this section. + +``` + +The meaning of the fields: + +| Syntax Element | Description | +| -------- | ------------------------------- | +| M | the total number of digits | +| D | the number of digits following the decimal point | +| UNSIGNED | UNSIGNED. If omitted, it is SIGNED. | +| ZEROFILL | If you specify ZEROFILL for a numeric column, TiDB automatically adds the UNSIGNED attribute to the column. | + +#### Storage + +See the following table for the storage requirements: + +| Data Type | Storage Required (bytes)| +| ----------- |----------| +| `FLOAT` | 4 | +| `FLOAT(p)` | If 0 <= p <= 24, it is 4; if 25 <= p <= 53, it is 8| +| `DOUBLE` | 8 | + +### Fixed-point types (exact value) + +TiDB supports all the MySQL fixed-point types, including DECIMAL and NUMERIC. For more information, see [Fixed-Point Types (Exact Value) - DECIMAL, NUMERIC in MySQL](https://dev.mysql.com/doc/refman/5.7/en/fixed-point-types.html). + +#### Type definition + +Syntax: + +```sql +DECIMAL[(M[,D])] [UNSIGNED] [ZEROFILL] +> A packed “exact” fixed-point number. M is the total number of digits (the precision), and D is the number of digits after the decimal point (the scale). The decimal point and (for negative numbers) the - sign are not counted in M. If D is 0, values have no decimal point or fractional part. The maximum number of digits (M) for DECIMAL is 65. The maximum number of supported decimals (D) is 30. If D is omitted, the default is 0. If M is omitted, the default is 10. + +NUMERIC[(M[,D])] [UNSIGNED] [ZEROFILL] +> Synonym for DECIMAL. 
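+ +-- Illustrative examples (added; the values follow from the M/D rules above): +-- DECIMAL(5,2) stores five digits in total with two after the decimal point, +-- so it covers the range -999.99 to 999.99. +-- DECIMAL(4) is equivalent to DECIMAL(4,0) and covers -9999 to 9999.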
+``` + +The meaning of the fields: + +| Syntax Element | Description | +| -------- | ------------------------------- | +| M | the total number of digits | +| D | the number of digits after the decimal point | +| UNSIGNED | UNSIGNED. If omitted, it is SIGNED. | +| ZEROFILL | If you specify ZEROFILL for a numeric column, TiDB automatically adds the UNSIGNED attribute to the column. | + +## Date and time types + +### Overview + +TiDB supports all the MySQL date and time types, including DATE, DATETIME, TIMESTAMP, TIME, and YEAR. For more information, see [Date and Time Types in MySQL](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-types.html). + +#### Type definition + +Syntax: + +```sql +DATE +> A date. The supported range is '1000-01-01' to '9999-12-31'. TiDB displays DATE values in 'YYYY-MM-DD' format. + +DATETIME[(fsp)] +> A date and time combination. The supported range is '1000-01-01 00:00:00.000000' to '9999-12-31 23:59:59.999999'. TiDB displays DATETIME values in 'YYYY-MM-DD HH:MM:SS[.fraction]' format, but permits assignment of values to DATETIME columns using either strings or numbers. +An optional fsp value in the range from 0 to 6 may be given to specify fractional seconds precision. If omitted, the default precision is 0. + +TIMESTAMP[(fsp)] +> A timestamp. The range is '1970-01-01 00:00:01.000000' to '2038-01-19 03:14:07.999999'. +An optional fsp value in the range from 0 to 6 may be given to specify fractional seconds precision. If omitted, the default precision is 0. + +TIME[(fsp)] +> A time. The range is '-838:59:59.000000' to '838:59:59.000000'. TiDB displays TIME values in 'HH:MM:SS[.fraction]' format. +An optional fsp value in the range from 0 to 6 may be given to specify fractional seconds precision. If omitted, the default precision is 0. + +YEAR[(2|4)] +> A year in two-digit or four-digit format. 
The default is the four-digit format. In four-digit format, values display as 1901 to 2155, and 0000. In two-digit format, values display as 70 to 69, representing years from 1970 to 2069. + +``` + +## String types + +### Overview + +TiDB supports all the MySQL string types, including CHAR, VARCHAR, BINARY, VARBINARY, BLOB, TEXT, ENUM, and SET. For more information, [String Types in MySQL](https://dev.mysql.com/doc/refman/5.7/en/string-types.html). + +#### Type definition + +Syntax: + +```sql +[NATIONAL] CHAR[(M)] [CHARACTER SET charset_name] [COLLATE collation_name] +> A fixed-length string. If stored as CHAR, it is right-padded with spaces to the specified length. M represents the column length in characters. The range of M is 0 to 255. + +[NATIONAL] VARCHAR(M) [CHARACTER SET charset_name] [COLLATE collation_name] +> A variable-length string. M represents the maximum column length in characters. The range of M is 0 to 65,535. The effective maximum length of a VARCHAR is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used. + +BINARY(M) +> The BINARY type is similar to the CHAR type, but stores binary byte strings rather than nonbinary character strings. + +VARBINARY(M) +> The VARBINARY type is similar to the VARCHAR type, but stores binary byte strings rather than nonbinary character strings. + +BLOB[(M)] +> A BLOB column with a maximum length of 65,535 bytes. M represents the maximum column length. + +TINYBLOB +> A BLOB column with a maximum length of 255 bytes. + +MEDIUMBLOB +> A BLOB column with a maximum length of 16,777,215 bytes. + +LONGBLOB +> A BLOB column with a maximum length of 4,294,967,295 bytes. + +TEXT[(M)] [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column. M represents the maximum column length ranging from 0 to 65,535. The maximum length of TEXT is based on the size of the longest row and the character set. 
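+ +-- Note (added): the 65,535 limit for TEXT is in bytes, so with a multi-byte +-- character set such as utf8mb4 the maximum number of characters is smaller.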
+ +TINYTEXT[(M)] [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column with a maximum length of 255 characters. + +MEDIUMTEXT [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column with a maximum length of 16,777,215 characters. + +LONGTEXT [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column with a maximum length of 4,294,967,295 characters. + +ENUM('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] +> An enumeration. A string object that can have only one value, chosen from the list of values 'value1', 'value2', ..., NULL or the special '' error value. + +SET('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] +> A set. A string object that can have zero or more values, each of which must be chosen from the list of values 'value1', 'value2', ... +``` + +## JSON types + +TiDB supports the JSON (JavaScript Object Notation) data type. +The JSON type can store semi-structured data like JSON documents. The JSON data type provides the following advantages over storing JSON-format strings in a string column: + +- Serialization in a binary format. The internal format permits quick read access to JSON document elements. +- Automatic validation of the JSON documents stored in JSON columns. Only valid documents can be stored. + +JSON columns, like columns of other binary types, are not indexed directly, but you can index the fields in the JSON document in the form of generated columns: + +```sql +CREATE TABLE city ( +id INT PRIMARY KEY, +detail JSON, +population INT AS (JSON_EXTRACT(detail, '$.population')) +); +INSERT INTO city VALUES (1, '{"name": "Beijing", "population": 100}'); +SELECT id FROM city WHERE population >= 100; +``` + +For more information, see [JSON Functions and Generated Column](json-functions-generated-column.md). 
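+ +Building on the example above, a sketch of adding a secondary index on the generated column so that queries filtering on the JSON field can use it (the index name `idx_population` is illustrative): + +```sql +-- Index the generated column; filters on population can then use the index +ALTER TABLE city ADD INDEX idx_population (population); +SELECT id FROM city WHERE population >= 100; +``` 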
+ +## The ENUM data type + +An ENUM is a string object with a value chosen from a list of permitted values that are enumerated explicitly in the column specification when the table is created. The syntax is: + +```sql +ENUM('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] + +# For example: +ENUM('apple', 'orange', 'pear') +``` + +The value of the ENUM data type is stored as a number. Each value is converted to a number according to its order in the definition. In the previous example, each string is mapped to a number: + +| Value | Number | +| ---- | ---- | +| NULL | NULL | +| '' | 0 | +| 'apple' | 1 | +| 'orange' | 2 | +| 'pear' | 3 | + +For more information, see [the ENUM type in MySQL](https://dev.mysql.com/doc/refman/5.7/en/enum.html). + +## The SET type + +A SET is a string object that can have zero or more values, each of which must be chosen from a list of permitted values specified when the table is created. The syntax is: + +```sql +SET('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] + +# For example: +SET('1', '2') NOT NULL +``` + +In the example, any of the following values can be valid: + +``` +'' +'1' +'2' +'1,2' +``` + +In TiDB, the values of the SET type are internally converted to Int64. The existence of each element is represented by one binary bit: 0 or 1. For a column specified as `SET('a','b','c','d')`, the members have the following decimal and binary values. + +| Member | Decimal Value | Binary Value | +| ---- | ---- | ------ | +| 'a' | 1 | 0001 | +| 'b' | 2 | 0010 | +| 'c' | 4 | 0100 | +| 'd' | 8 | 1000 | + +In this case, the element `('a', 'c')` is 0101 in binary. + +For more information, see [the SET type in MySQL](https://dev.mysql.com/doc/refman/5.7/en/set.html). + +## Data type default values + +The DEFAULT value clause in a data type specification indicates a default value for a column. The default value must be a constant and cannot be a function or an expression. 
But for the time type, you can specify the `NOW`, `CURRENT_TIMESTAMP`, `LOCALTIME`, and `LOCALTIMESTAMP` functions as the default for TIMESTAMP and DATETIME columns + +The BLOB, TEXT, and JSON columns cannot be assigned a default value. + +If a column definition includes no explicit DEFAULT value, TiDB determines the default value as follows: + +- If the column can take NULL as a value, the column is defined with an explicit DEFAULT NULL clause. +- If the column cannot take NULL as the value, TiDB defines the column with no explicit DEFAULT clause. + +For data entry into a NOT NULL column that has no explicit DEFAULT clause, if an INSERT or REPLACE statement includes no value for the column, TiDB handles the column according to the SQL mode in effect at the time: + +- If strict SQL mode is enabled, an error occurs for transactional tables, and the statement is rolled back. For nontransactional tables, an error occurs. +- If strict mode is not enabled, TiDB sets the column to the implicit default value for the column data type. + +Implicit defaults are defined as follows: + +- For numeric types, the default is 0. If declared with the AUTO_INCREMENT attribute, the default is the next value in the sequence. +- For date and time types other than TIMESTAMP, the default is the appropriate “zero” value for the type. For TIMESTAMP, the default value is the current date and time. +- For string types other than ENUM, the default value is the empty string. For ENUM, the default is the first enumeration value. \ No newline at end of file diff --git a/v1.0/sql/date-and-time-functions.md b/v1.0/sql/date-and-time-functions.md new file mode 100755 index 0000000000000..7bb87cd5a5ef7 --- /dev/null +++ b/v1.0/sql/date-and-time-functions.md @@ -0,0 +1,75 @@ +--- +title: Date and Time Functions +category: user guide +--- + +# Date and Time Functions + +The usage of date and time functions is similar to MySQL. 
For more information, see [here](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-types.html). + +**Date/Time functions** + +| Name | Description | +| ---------------------------------------- | ---------------------------------------- | +| [`ADDDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_adddate) | Add time values (intervals) to a date value | +| [`ADDTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_addtime) | Add time | +| [`CONVERT_TZ()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz) | Convert from one time zone to another | +| [`CURDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_curdate) | Return the current date | +| [`CURRENT_DATE()`, `CURRENT_DATE`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_current-date) | Synonyms for CURDATE() | +| [`CURRENT_TIME()`, `CURRENT_TIME`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_current-time) | Synonyms for CURTIME() | +| [`CURRENT_TIMESTAMP()`, `CURRENT_TIMESTAMP`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_current-timestamp) | Synonyms for NOW() | +| [`CURTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_curtime) | Return the current time | +| [`DATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date) | Extract the date part of a date or datetime expression | +| [`DATE_ADD()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-add) | Add time values (intervals) to a date value | +| [`DATE_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-format) | Format date as specified | +| [`DATE_SUB()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-sub) | Subtract a time value 
(interval) from a date | +| [`DATEDIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_datediff) | Subtract two dates | +| [`DAY()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_day) | Synonym for DAYOFMONTH() | +| [`DAYNAME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayname) | Return the name of the weekday | +| [`DAYOFMONTH()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayofmonth) | Return the day of the month (0-31) | +| [`DAYOFWEEK()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayofweek) | Return the weekday index of the argument | +| [`DAYOFYEAR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayofyear) | Return the day of the year (1-366) | +| [`EXTRACT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_extract) | Extract part of a date | +| [`FROM_DAYS()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_from-days) | Convert a day number to a date | +| [`FROM_UNIXTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_from-unixtime) | Format Unix timestamp as a date | +| [`GET_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_get-format) | Return a date format string | +| [`HOUR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_hour) | Extract the hour | +| [`LAST_DAY`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_last-day) | Return the last day of the month for the argument | +| [`LOCALTIME()`, `LOCALTIME`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_localtime) | Synonym for NOW() | +| [`LOCALTIMESTAMP`, `LOCALTIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_localtimestamp) | 
Synonym for NOW() | +| [`MAKEDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_makedate) | Create a date from the year and day of year | +| [`MAKETIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_maketime) | Create time from hour, minute, second | +| [`MICROSECOND()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_microsecond) | Return the microseconds from argument | +| [`MINUTE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_minute) | Return the minute from the argument | +| [`MONTH()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_month) | Return the month from the date passed | +| [`MONTHNAME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_monthname) | Return the name of the month | +| [`NOW()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_now) | Return the current date and time | +| [`PERIOD_ADD()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_period-add) | Add a period to a year-month | +| [`PERIOD_DIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_period-diff) | Return the number of months between periods | +| [`QUARTER()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_quarter) | Return the quarter from a date argument | +| [`SEC_TO_TIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_sec-to-time) | Converts seconds to 'HH:MM:SS' format | +| [`SECOND()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_second) | Return the second (0-59) | +| [`STR_TO_DATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date) | Convert a string to a date | +| 
[`SUBDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_subdate) | Synonym for DATE_SUB() when invoked with three arguments | +| [`SUBTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_subtime) | Subtract times | +| [`SYSDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_sysdate) | Return the time at which the function executes | +| [`TIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_time) | Extract the time portion of the expression passed | +| [`TIME_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_time-format) | Format as time | +| [`TIME_TO_SEC()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_time-to-sec) | Return the argument converted to seconds | +| [`TIMEDIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timediff) | Subtract time | +| [`TIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timestamp) | With a single argument, this function returns the date or datetime expression; with two arguments, the sum of the arguments | +| [`TIMESTAMPADD()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timestampadd) | Add an interval to a datetime expression | +| [`TIMESTAMPDIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timestampdiff) | Subtract an interval from a datetime expression | +| [`TO_DAYS()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_to-days) | Return the date argument converted to days | +| [`TO_SECONDS()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_to-seconds) | Return the date or datetime argument converted to seconds since Year 0 | +| 
[`UNIX_TIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_unix-timestamp) | Return a Unix timestamp | +| [`UTC_DATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_utc-date) | Return the current UTC date | +| [`UTC_TIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_utc-time) | Return the current UTC time | +| [`UTC_TIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_utc-timestamp) | Return the current UTC date and time | +| [`WEEK()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_week) | Return the week number | +| [`WEEKDAY()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_weekday) | Return the weekday index | +| [`WEEKOFYEAR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_weekofyear) | Return the calendar week of the date (1-53) | +| [`YEAR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_year) | Return the year | +| [`YEARWEEK()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_yearweek) | Return the year and week | + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html). diff --git a/v1.0/sql/ddl.md b/v1.0/sql/ddl.md new file mode 100755 index 0000000000000..1b42501107599 --- /dev/null +++ b/v1.0/sql/ddl.md @@ -0,0 +1,343 @@ +--- +title: Data Definition Statements +category: user guide +--- + +# Data Definition Statements + +DDL (Data Definition Language) is used to define the database structure or schema, and to manage databases and the objects within them. + +## CREATE DATABASE syntax + +```sql +CREATE {DATABASE | SCHEMA} [IF NOT EXISTS] db_name + [create_specification] ... 
+ +create_specification: + [DEFAULT] CHARACTER SET [=] charset_name + | [DEFAULT] COLLATE [=] collation_name +``` + +The `CREATE DATABASE` statement is used to create a database and to specify its default properties, such as the default character set and collation. `CREATE SCHEMA` is a synonym for `CREATE DATABASE`. + +If you create a database that already exists and do not specify `IF NOT EXISTS`, an error is displayed. + +The `create_specification` option is used to specify the `CHARACTER SET` and `COLLATE` of the database. Currently, the option is only supported in syntax. + +## DROP DATABASE syntax + +```sql +DROP {DATABASE | SCHEMA} [IF EXISTS] db_name +``` + +The `DROP DATABASE` statement is used to delete the specified database and its tables. + +The `IF EXISTS` clause is used to prevent an error if the database does not exist. + +## CREATE TABLE syntax + +```sql +CREATE TABLE [IF NOT EXISTS] tbl_name + (create_definition,...) + [table_options] + +CREATE TABLE [IF NOT EXISTS] tbl_name + { LIKE old_tbl_name | (LIKE old_tbl_name) } + +create_definition: + col_name column_definition + | [CONSTRAINT [symbol]] PRIMARY KEY [index_type] (index_col_name,...) + [index_option] ... + | {INDEX|KEY} [index_name] [index_type] (index_col_name,...) + [index_option] ... + | [CONSTRAINT [symbol]] UNIQUE [INDEX|KEY] + [index_name] [index_type] (index_col_name,...) + [index_option] ... + | {FULLTEXT} [INDEX|KEY] [index_name] (index_col_name,...) + [index_option] ... + | [CONSTRAINT [symbol]] FOREIGN KEY + [index_name] (index_col_name,...) 
reference_definition + +column_definition: + data_type [NOT NULL | NULL] [DEFAULT default_value] + [AUTO_INCREMENT] [UNIQUE [KEY] | [PRIMARY] KEY] + [COMMENT 'string'] + [reference_definition] + | data_type [GENERATED ALWAYS] AS (expression) + [VIRTUAL | STORED] [UNIQUE [KEY]] [COMMENT comment] + [NOT NULL | NULL] [[PRIMARY] KEY] + +data_type: + BIT[(length)] + | TINYINT[(length)] [UNSIGNED] [ZEROFILL] + | SMALLINT[(length)] [UNSIGNED] [ZEROFILL] + | MEDIUMINT[(length)] [UNSIGNED] [ZEROFILL] + | INT[(length)] [UNSIGNED] [ZEROFILL] + | INTEGER[(length)] [UNSIGNED] [ZEROFILL] + | BIGINT[(length)] [UNSIGNED] [ZEROFILL] + | REAL[(length,decimals)] [UNSIGNED] [ZEROFILL] + | DOUBLE[(length,decimals)] [UNSIGNED] [ZEROFILL] + | FLOAT[(length,decimals)] [UNSIGNED] [ZEROFILL] + | DECIMAL[(length[,decimals])] [UNSIGNED] [ZEROFILL] + | NUMERIC[(length[,decimals])] [UNSIGNED] [ZEROFILL] + | DATE + | TIME[(fsp)] + | TIMESTAMP[(fsp)] + | DATETIME[(fsp)] + | YEAR + | CHAR[(length)] [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | VARCHAR(length) [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | BINARY[(length)] + | VARBINARY(length) + | TINYBLOB + | BLOB + | MEDIUMBLOB + | LONGBLOB + | TINYTEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | TEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | MEDIUMTEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | LONGTEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | ENUM(value1,value2,value3,...) + [CHARACTER SET charset_name] [COLLATE collation_name] + | SET(value1,value2,value3,...) + [CHARACTER SET charset_name] [COLLATE collation_name] + | JSON + +index_col_name: + col_name [(length)] [ASC | DESC] + +index_type: + USING {BTREE | HASH} + +index_option: + KEY_BLOCK_SIZE [=] value + | index_type + | COMMENT 'string' + +reference_definition: + REFERENCES tbl_name (index_col_name,...) 
+ [MATCH FULL | MATCH PARTIAL | MATCH SIMPLE] + [ON DELETE reference_option] + [ON UPDATE reference_option] + +reference_option: + RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT + +table_options: + table_option [[,] table_option] ... + +table_option: + AUTO_INCREMENT [=] value + | AVG_ROW_LENGTH [=] value + | [DEFAULT] CHARACTER SET [=] charset_name + | CHECKSUM [=] {0 | 1} + | [DEFAULT] COLLATE [=] collation_name + | COMMENT [=] 'string' + | COMPRESSION [=] {'ZLIB'|'LZ4'|'NONE'} + | CONNECTION [=] 'connect_string' + | DELAY_KEY_WRITE [=] {0 | 1} + | ENGINE [=] engine_name + | KEY_BLOCK_SIZE [=] value + | MAX_ROWS [=] value + | MIN_ROWS [=] value + | ROW_FORMAT [=] {DEFAULT|DYNAMIC|FIXED|COMPRESSED|REDUNDANT|COMPACT} + | STATS_PERSISTENT [=] {DEFAULT|0|1} +``` + +The `CREATE TABLE` statement is used to create a table. Currently, it does not support temporary tables, `CHECK` constraints, or importing data from other tables while creating tables. It supports some of the `Partition_options` in syntax. + +- If you create a table that already exists and specify `IF NOT EXISTS`, no error is reported. Otherwise, an error is reported. +- Use `LIKE` to create an empty table based on the definition of another table, including its column and index properties. +- The `FULLTEXT` and `FOREIGN KEY` in `create_definition` are currently only supported in syntax. +- For the `data_type`, see [Data Types](datatype.md). +- The `[ASC | DESC]` in `index_col_name` is currently only supported in syntax. +- The `index_type` is currently only supported in syntax. +- The `KEY_BLOCK_SIZE` in `index_option` is currently only supported in syntax. +- The `table_option` currently only supports `AUTO_INCREMENT`, `CHARACTER SET` and `COMMENT`, while the others are only supported in syntax. The options are separated by a comma `,`. 
See the following table for details: + + | Parameters | Description | Example | + | ---------- | ---------- | ------- | + | `AUTO_INCREMENT` | The initial value of the increment field | `AUTO_INCREMENT` = 5 | + | `CHARACTER SET` | The character set of the table; currently only `utf8mb4` is supported | `CHARACTER SET` = 'utf8mb4' | + | `COMMENT` | The comment information | `COMMENT` = 'comment info' | + +### AUTO_INCREMENT description + +The TiDB `AUTO_INCREMENT` ID only guarantees increment and uniqueness; it does not guarantee continuous allocation. Currently, TiDB allocates IDs in batches, so if you insert data into multiple TiDB servers at the same time, the allocated IDs are not continuous. + +You can specify `AUTO_INCREMENT` for integer fields. A table supports only one field with the `AUTO_INCREMENT` property. + +## DROP TABLE syntax + +```sql +DROP TABLE [IF EXISTS] + tbl_name [, tbl_name] ... + [RESTRICT | CASCADE] +``` + +You can delete multiple tables at the same time. The tables are separated by a comma `,`. + +If you delete a table that does not exist and do not specify `IF EXISTS`, an error is displayed. + +The `RESTRICT` and `CASCADE` keywords do nothing. They are permitted to make porting from other database systems easier. + +## TRUNCATE TABLE syntax + +```sql +TRUNCATE [TABLE] tbl_name +``` + +The `TRUNCATE TABLE` statement is used to remove all data from the specified table while keeping the table structure. + +This operation is similar to deleting all the data of a specified table, but it is much faster and is not affected by the number of rows in the table. + +> **Note**: If you use the `TRUNCATE TABLE` statement, the value of `AUTO_INCREMENT` in the original table is reset to its starting value. + +## RENAME TABLE syntax + +```sql +RENAME TABLE + tbl_name TO new_tbl_name +``` + +The `RENAME TABLE` statement is used to rename a table. 
+ +This statement is equivalent to the following `ALTER TABLE` statement: + +```sql +ALTER TABLE old_table RENAME new_table; +``` + +## ALTER TABLE syntax + +```sql +ALTER TABLE tbl_name + [alter_specification] + +alter_specification: + table_options + | ADD [COLUMN] col_name column_definition + [FIRST | AFTER col_name] + | ADD [COLUMN] (col_name column_definition,...) + | ADD {INDEX|KEY} [index_name] + [index_type] (index_col_name,...) [index_option] ... + | ADD [CONSTRAINT [symbol]] PRIMARY KEY + [index_type] (index_col_name,...) [index_option] ... + | ADD [CONSTRAINT [symbol]] + UNIQUE [INDEX|KEY] [index_name] + [index_type] (index_col_name,...) [index_option] ... + | ADD FULLTEXT [INDEX|KEY] [index_name] + (index_col_name,...) [index_option] ... + | ADD [CONSTRAINT [symbol]] + FOREIGN KEY [index_name] (index_col_name,...) + reference_definition + | ALTER [COLUMN] col_name {SET DEFAULT literal | DROP DEFAULT} + | CHANGE [COLUMN] old_col_name new_col_name column_definition + [FIRST|AFTER col_name] + | {DISABLE|ENABLE} KEYS + | DROP [COLUMN] col_name + | DROP {INDEX|KEY} index_name + | DROP PRIMARY KEY + | DROP FOREIGN KEY fk_symbol + | LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE} + | MODIFY [COLUMN] col_name column_definition + [FIRST | AFTER col_name] + | RENAME [TO|AS] new_tbl_name + | {WITHOUT|WITH} VALIDATION + +index_col_name: + col_name [(length)] [ASC | DESC] + +index_type: + USING {BTREE | HASH} + +index_option: + KEY_BLOCK_SIZE [=] value + | index_type + | COMMENT 'string' + +table_options: + table_option [[,] table_option] ... 
+
+table_option:
+    AVG_ROW_LENGTH [=] value
+  | [DEFAULT] CHARACTER SET [=] charset_name
+  | CHECKSUM [=] {0 | 1}
+  | [DEFAULT] COLLATE [=] collation_name
+  | COMMENT [=] 'string'
+  | COMPRESSION [=] {'ZLIB'|'LZ4'|'NONE'}
+  | CONNECTION [=] 'connect_string'
+  | DELAY_KEY_WRITE [=] {0 | 1}
+  | ENGINE [=] engine_name
+  | KEY_BLOCK_SIZE [=] value
+  | MAX_ROWS [=] value
+  | MIN_ROWS [=] value
+  | ROW_FORMAT [=] {DEFAULT|DYNAMIC|FIXED|COMPRESSED|REDUNDANT|COMPACT}
+  | STATS_PERSISTENT [=] {DEFAULT|0|1}
+```
+
+The `ALTER TABLE` statement is used to update the structure of an existing table, such as updating table properties, adding or deleting columns, creating or deleting indexes, and modifying columns or column properties. The descriptions of several syntax elements are as follows:
+
+- For `index_col_name`, `index_type`, and `index_option`, see [CREATE INDEX Syntax](#create-index-syntax).
+- Currently, `table_option` is supported in syntax only: it is parsed but ignored.
+
+The support for specific operation types is as follows:
+
+- `ADD/DROP INDEX/COLUMN`: currently does not support creating or deleting multiple indexes or columns at the same time
+- `ADD/DROP PRIMARY KEY`: currently not supported
+- `DROP COLUMN`: currently does not support deleting columns that are primary key columns or index columns
+- `ADD COLUMN`: currently does not support setting the newly added column as the primary key or as a unique index in the same statement, and does not support setting the `AUTO_INCREMENT` property on the new column
+- `CHANGE/MODIFY COLUMN`: currently supports only part of the syntax; the details are as follows:
+    - When updating data types, `CHANGE/MODIFY COLUMN` only supports conversions between integer types, between string types, and between Blob types, and you can only extend the length of the original type. In addition, the `unsigned`/`charset`/`collate` column properties cannot be changed. The specific supported types are classified as follows:
+        - Integer types: `TinyInt`, `SmallInt`, `MediumInt`, `Int`, `BigInt`
+        - String types: `Char`, `Varchar`, `Text`, `TinyText`, `MediumText`, `LongText`
+        - Blob types: `Blob`, `TinyBlob`, `MediumBlob`, `LongBlob`
+    - When updating the type definition, `CHANGE/MODIFY COLUMN` supports changing `default value`, `comment`, `null`, `not null` and `on update`, but does not support changing a column from `null` to `not null`.
+    - `CHANGE/MODIFY COLUMN` does not support changing columns of the `enum` type.
+- `LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}`: currently supported in syntax only: it is parsed but ignored
+
+## CREATE INDEX syntax
+
+```sql
+CREATE [UNIQUE] INDEX index_name
+    [index_type]
+    ON tbl_name (index_col_name,...)
+    [index_option] ...
+
+index_col_name:
+    col_name [(length)] [ASC | DESC]
+
+index_option:
+    KEY_BLOCK_SIZE [=] value
+  | index_type
+  | COMMENT 'string'
+
+index_type:
+    USING {BTREE | HASH}
+```
+
+The `CREATE INDEX` statement is used to create an index for an existing table. Functionally, `CREATE INDEX` corresponds to the index creation operation of `ALTER TABLE`. As in MySQL, `CREATE INDEX` cannot create a primary key index.
+
+### Difference from MySQL
+
+- The `CREATE INDEX` statement supports the `UNIQUE` index and does not support `FULLTEXT` and `SPATIAL` indexes.
+- The `index_col_name` supports the length option, with a maximum length limit of 3072 bytes. The length limit does not change depending on the storage engine or the character set used when creating the table. This is because TiDB does not use storage engines like InnoDB or MyISAM, and only provides syntax compatibility with MySQL for the storage engine options when creating tables. Similarly, TiDB uses the utf8mb4 character set, and only provides syntax compatibility with MySQL for the character set options when creating tables. For more information, see [Compatibility with MySQL](mysql-compatibility.md).
+- The `index_col_name` supports the index sorting options of `ASC` and `DESC`. The behavior of the sorting options is similar to MySQL: only syntax parsing is supported, and all indexes are stored internally in ascending order. For more information, see [CREATE INDEX Syntax](https://dev.mysql.com/doc/refman/5.7/en/create-index.html).
+- The `index_option` supports `KEY_BLOCK_SIZE`, `index_type` and `COMMENT`. The `COMMENT` supports a maximum of 1024 characters and does not support the `WITH PARSER` option.
+- The `index_type` supports `BTREE` and `HASH` only in MySQL syntax, which means the index type is independent of the storage engine option in the table creation statement. For example, in MySQL, when you use `CREATE INDEX` on a table using InnoDB, it only supports the `BTREE` index, while TiDB supports both `BTREE` and `HASH` indexes.
+- The `CREATE INDEX` statement does not support the `algorithm_option` and `lock_option` in MySQL.
+- TiDB supports at most 512 columns in a single table. The corresponding number limit in InnoDB is 1017, and the hard limit in MySQL is 4096. For more details, see [Limits on Table Column Count and Row Size](https://dev.mysql.com/doc/refman/5.7/en/column-count-limit.html).
+
+## DROP INDEX syntax
+
+```sql
+DROP INDEX index_name ON tbl_name
+```
+
+The `DROP INDEX` statement is used to delete a table index. Currently, it does not support deleting the primary key index.
diff --git a/v1.0/sql/dml.md b/v1.0/sql/dml.md
new file mode 100755
index 0000000000000..69ac6de0bce48
--- /dev/null
+++ b/v1.0/sql/dml.md
@@ -0,0 +1,268 @@
+---
+title: TiDB Data Manipulation Language
+category: user guide
+---
+
+# TiDB Data Manipulation Language
+
+Data manipulation language (DML) is a family of syntax elements used for selecting, inserting, deleting and updating data in a database.
+
+## SELECT
+
+`SELECT` is used to retrieve rows selected from one or more tables.
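+
+As a minimal illustration (the table `t` and its `id` and `name` columns are hypothetical):
+
+```sql
+-- Retrieve one column from the rows that match a condition
+SELECT name FROM t WHERE id = 1;
+```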
+
+### Syntax
+
+```sql
+SELECT
+    [ALL | DISTINCT | DISTINCTROW ]
+      [HIGH_PRIORITY]
+      [SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
+    select_expr [, select_expr ...]
+    [FROM table_references
+    [WHERE where_condition]
+    [GROUP BY {col_name | expr | position}
+      [ASC | DESC], ...]
+    [HAVING where_condition]
+    [ORDER BY {col_name | expr | position}
+      [ASC | DESC], ...]
+    [LIMIT {[offset,] row_count | row_count OFFSET offset}]
+    [FOR UPDATE | LOCK IN SHARE MODE]]
+```
+
+### Description of the syntax elements
+
+|Syntax Element|Description|
+| --------------------- | -------------------------------------------------- |
+|`ALL`, `DISTINCT`, `DISTINCTROW` | The `ALL`, `DISTINCT`, and `DISTINCTROW` modifiers specify whether duplicate rows should be returned. `ALL` (the default) specifies that all matching rows should be returned.|
+|`HIGH_PRIORITY` | `HIGH_PRIORITY` gives the current statement higher priority than other statements. |
+|`SQL_CACHE`, `SQL_NO_CACHE`, `SQL_CALC_FOUND_ROWS` | To guarantee compatibility with MySQL, TiDB parses these three modifiers, but will ignore them.|
+|`select_expr` | Each `select_expr` indicates a column to retrieve, which can be a column name or an expression. `*` represents all the columns.|
+|`FROM table_references` | The `FROM table_references` clause indicates the table (such as `select * from t;`), tables (such as `select * from t1 join t2;`), or even zero tables (such as `select 1+1 from dual;`, which is equivalent to `select 1+1;`) from which to retrieve rows.|
+|`WHERE where_condition` | The `WHERE` clause, if given, indicates the condition or conditions that rows must satisfy to be selected. The result contains only the data that meets the condition(s).|
+|`GROUP BY` | The `GROUP BY` clause is used to group the result set.|
+|`HAVING where_condition` | The `HAVING` clause and the `WHERE` clause are both used to filter the results. The `HAVING` clause filters the results of `GROUP BY`, while the `WHERE` clause filters the results before aggregation.|
+|`ORDER BY` | The `ORDER BY` clause is used to sort the data in ascending or descending order, based on columns, expressions or items in the `select_expr` list.|
+|`LIMIT` | The `LIMIT` clause can be used to constrain the number of rows. `LIMIT` takes one or two numeric arguments. With one argument, the argument specifies the maximum number of rows to return, and the first row to return is the first row of the table by default; with two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return.|
+|`FOR UPDATE` | All the data in the result set is read-locked, in order to detect concurrent updates. TiDB uses the [Optimistic Transaction Model](mysql-compatibility.md#transaction), so transaction conflicts are detected in the commit phase instead of the statement execution phase. While the `SELECT FOR UPDATE` statement is executing, if other transactions try to update the relevant data, the `SELECT FOR UPDATE` transaction will fail.|
+|`LOCK IN SHARE MODE` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.|
+
+## INSERT
+
+`INSERT` inserts new rows into an existing table. TiDB is compatible with all the `INSERT` syntaxes of MySQL.
+
+### Syntax
+
+```sql
+    Insert Statement:
+    INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
+        [INTO] tbl_name
+        insert_values
+        [ON DUPLICATE KEY UPDATE assignment_list]
+
+    insert_values:
+        [(col_name [, col_name] ...)]
+        {VALUES | VALUE} (expr_list) [, (expr_list)] ...
+    |   SET assignment_list
+    |   [(col_name [, col_name] ...)]
+        SELECT ...
+
+    expr_list:
+        expr [, expr] ...
+
+    assignment:
+        col_name = expr
+
+    assignment_list:
+        assignment [, assignment] ...
+```
+
+### Description of the syntax elements
+
+| Syntax Elements | Description |
+| -------------- | --------------------------------------------------------- |
+| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement. |
+| `DELAYED` | To guarantee compatibility, TiDB parses this modifier, but will ignore it. |
+| `HIGH_PRIORITY` | `HIGH_PRIORITY` gives the current statement higher priority than other statements. TiDB raises the priority of the current statement.|
+| `IGNORE` | If the `IGNORE` modifier is specified and a duplicate key error occurs, the rows that cause the error are not inserted and no error is reported. |
+| `tbl_name` | `tbl_name` is the table into which the rows should be inserted. |
+| `insert_values` | The `insert_values` clause specifies the values to be inserted. For more information, see [insert_values](#insert_values). |
+| `ON DUPLICATE KEY UPDATE assignment_list` | If `ON DUPLICATE KEY UPDATE` is specified and there is a conflict in a `UNIQUE` index or `PRIMARY KEY`, the new row is not inserted; instead, the existing row is updated using `assignment_list`. |
+
+### insert_values
+
+You can use the following ways to specify the data set:
+
+- Value List
+
+    Place the values to be inserted in a Value List.
+
+    ```sql
+    CREATE TABLE tbl_name (
+        a int,
+        b int,
+        c int
+    );
+    INSERT INTO tbl_name VALUES(1,2,3),(4,5,6),(7,8,9);
+    ```
+
+    In the example above, `(1,2,3),(4,5,6),(7,8,9)` are the Value Lists, enclosed within parentheses and separated by commas. Each Value List represents a row of data; in this example, 3 rows are inserted. You can also specify a column name list to insert values into only some of the columns; in this case, each Value List must contain exactly as many values as there are columns listed.
+
+    ```sql
+    INSERT INTO tbl_name (a,c) VALUES(1,2),(4,5),(7,8);
+    ```
+
+    In the example above, only the `a` and `c` columns are listed, and the `b` column of each row is set to `NULL`.
+
+- Assignment List
+
+    Insert the values by using assignment statements, for example:
+
+    ```sql
+    INSERT INTO tbl_name SET a=1, b=2, c=3;
+    ```
+
+    In this way, only one row of data can be inserted at a time, and each column value is given by an assignment statement.
+
+- Select Statement
+
+    The data set to be inserted is obtained using a `SELECT` statement. The columns to be inserted into are determined by the schema of the `SELECT` statement.
+
+    ```sql
+    CREATE TABLE tbl_name1 (
+        a int,
+        b int,
+        c int
+    );
+    INSERT INTO tbl_name SELECT * from tbl_name1;
+    ```
+
+    In the example above, the data is selected from `tbl_name1`, and then inserted into `tbl_name`.
+
+## DELETE
+
+`DELETE` is a DML statement that removes rows from a table. TiDB is compatible with all the `DELETE` syntaxes of MySQL except for `PARTITION`. There are two kinds of `DELETE`: [Single-Table DELETE](#single-table-delete-syntax) and [Multiple-Table DELETE](#multiple-table-delete-syntax).
+
+### Single-Table DELETE syntax
+
+The Single-Table `DELETE` syntax deletes rows from a single table.
+
+### DELETE syntax
+
+```sql
+DELETE [LOW_PRIORITY] [QUICK] [IGNORE] FROM tbl_name
+    [WHERE where_condition]
+    [ORDER BY ...]
+    [LIMIT row_count]
+```
+
+### Multiple-Table DELETE syntax
+
+The Multiple-Table `DELETE` syntax deletes rows from multiple tables, and has the following two formats:
+
+```sql
+DELETE [LOW_PRIORITY] [QUICK] [IGNORE]
+    tbl_name[.*] [, tbl_name[.*]] ...
+    FROM table_references
+    [WHERE where_condition]
+
+DELETE [LOW_PRIORITY] [QUICK] [IGNORE]
+    FROM tbl_name[.*] [, tbl_name[.*]] ...
+    USING table_references
+    [WHERE where_condition]
+```
+
+Both formats can be used to delete data from multiple tables based on the selected results. They differ in which tables the rows are deleted from: the first format deletes data from every table listed before `FROM`; the second format deletes data from every table listed between `FROM` and `USING`.
+
+### Description of the syntax elements
+
+| Syntax Elements | Description|
+| -------------- | --------------------------------------------------------- |
+| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement. |
+| `QUICK` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it. |
+| `IGNORE` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.|
+| `tbl_name` | The names of the tables from which rows are deleted|
+| `WHERE where_condition` | The `WHERE` condition; only the rows that meet the condition are deleted |
+| `ORDER BY` | Sorts the rows to be deleted|
+| `LIMIT row_count` | Limits the number of rows to be deleted to `row_count` |
+
+## UPDATE
+
+`UPDATE` is used to modify data in tables.
+
+### Syntax
+
+There are two kinds of `UPDATE` syntax: [Single-table UPDATE](#single-table-update) and [Multi-table UPDATE](#multi-table-update).
+
+### Single-table UPDATE
+
+```sql
+UPDATE [LOW_PRIORITY] [IGNORE] table_reference
+    SET assignment_list
+    [WHERE where_condition]
+    [ORDER BY ...]
+    [LIMIT row_count]
+
+assignment:
+    col_name = value
+
+assignment_list:
+    assignment [, assignment] ...
+```
+
+For the single-table syntax, the `UPDATE` statement updates columns of existing rows in the named table with new values. The `SET assignment_list` clause indicates which columns to modify and the values they should be given. The `WHERE`, `ORDER BY`, and `LIMIT` clauses, if given, specify which rows to update.
+
+### Multi-table UPDATE
+
+```sql
+UPDATE [LOW_PRIORITY] [IGNORE] table_references
+    SET assignment_list
+    [WHERE where_condition]
+```
+
+For the multiple-table syntax, `UPDATE` updates rows in each table named in `table_references` that satisfy the conditions.
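+
+As a brief sketch of the multiple-table form (the tables `t1` and `t2` and their columns are hypothetical):
+
+```sql
+-- Rows of t1 whose id also appears in t2 get their status column updated
+UPDATE t1, t2
+SET t1.status = 'matched'
+WHERE t1.id = t2.id;
+```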
+
+### Description of the syntax elements
+
+| Syntax Elements | Description |
+| -------------- | --------------------------------------------------------- |
+| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement. |
+| `IGNORE` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.|
+| `table_reference` | The name of the table to be updated |
+| `table_references` | The names of the tables to be updated |
+| `SET assignment_list` | The columns and values to be updated |
+| `WHERE where_condition` | The `WHERE` clause, if given, specifies the conditions that identify which rows to update. |
+| `ORDER BY` | The rows are updated in the specified order |
+| `LIMIT row_count` | The `LIMIT` clause places a limit on the number of rows that can be updated |
+
+## REPLACE
+
+`REPLACE` is a MySQL extension to the SQL standard. `REPLACE` works exactly like `INSERT`, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted.
+
+### Syntax
+
+```sql
+REPLACE [LOW_PRIORITY | DELAYED]
+    [INTO] tbl_name
+    [(col_name [, col_name] ...)]
+    {VALUES | VALUE} (value_list) [, (value_list)] ...
+
+REPLACE [LOW_PRIORITY | DELAYED]
+    [INTO] tbl_name
+    SET assignment_list
+
+REPLACE [LOW_PRIORITY | DELAYED]
+    [INTO] tbl_name
+    [(col_name [, col_name] ...)]
+    SELECT ...
+```
+
+### Description of the syntax elements
+
+|Syntax Element|Description|
+| -------------- | --------------------------------------------------------- |
+| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement. |
+| `DELAYED` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.|
+| `tbl_name` | `tbl_name` is the table into which the rows should be inserted. |
+| `value_list` | The values to be inserted |
+| `SET assignment_list` | The columns and values to be inserted |
+| `SELECT ...` | The results selected by the `SELECT` statement, which are to be inserted |
diff --git a/v1.0/sql/encrypted-connections.md b/v1.0/sql/encrypted-connections.md
new file mode 100755
index 0000000000000..7ee38af1e5b96
--- /dev/null
+++ b/v1.0/sql/encrypted-connections.md
@@ -0,0 +1,155 @@
+---
+title: Use Encrypted Connections
+category: user guide
+---
+
+# Use Encrypted Connections
+
+It is recommended to use encrypted connections to ensure data security, because unencrypted connections might lead to information leakage.
+
+The TiDB server supports encrypted connections based on TLS (Transport Layer Security). The protocol is consistent with MySQL encrypted connections and is directly supported by existing MySQL clients such as MySQL operation tools and MySQL drivers. TLS is sometimes referred to as SSL (Secure Sockets Layer). Because the SSL protocol has [known security vulnerabilities](https://en.wikipedia.org/wiki/Transport_Layer_Security), TiDB does not support it. TiDB supports the following versions: TLS 1.0, TLS 1.1, and TLS 1.2.
+
+An encrypted connection has the following security properties:
+
+- Confidentiality: the plaintext traffic cannot be eavesdropped
+- Integrity: the plaintext traffic cannot be tampered with
+- Authentication: (optional) the client and the server can verify the identity of both parties to avoid man-in-the-middle attacks
+
+Encrypted connections in TiDB are disabled by default. To use encrypted connections in the client, you must first configure the TiDB server to enable encrypted connections. In addition, as with MySQL, encrypted connections in TiDB are optional and configured per connection. For a TiDB server with encrypted connections enabled, you can choose to connect to the TiDB server securely through an encrypted connection, or to use an ordinary unencrypted connection. Most MySQL clients do not use encrypted connections by default, so generally you need to explicitly configure the client to use an encrypted connection.
+
+In short, to use encrypted connections, both of the following conditions must be met:
+
+1. Enable encrypted connections in the TiDB server.
+2. Specify the use of an encrypted connection in the client.
+
+## Configure TiDB to use encrypted connections
+
+See the following descriptions of the parameters related to enabling encrypted connections:
+
+- [`ssl-cert`](server-command-option.md#ssl-cert): specifies the file path of the SSL certificate
+- [`ssl-key`](server-command-option.md#ssl-key): specifies the private key that matches the certificate
+- [`ssl-ca`](server-command-option.md#ssl-ca): (optional) specifies the file path of the trusted CA certificate
+
+To enable encrypted connections in the TiDB server, you must specify both the `ssl-cert` and `ssl-key` parameters in the configuration file when you start the TiDB server. You can also specify the `ssl-ca` parameter for client authentication (see [Enable authentication](#enable-authentication)).
+
+All the files specified by the parameters are in PEM (Privacy Enhanced Mail) format. Currently, TiDB does not support the import of a password-protected private key, so you must provide a private key file without a password. If the certificate or private key is invalid, the TiDB server starts as usual, but the client cannot connect to the TiDB server through an encrypted connection.
+
+The certificate and key can be signed and generated using OpenSSL, or generated quickly using the `mysql_ssl_rsa_setup` tool that ships with MySQL:
+
+```bash
+mysql_ssl_rsa_setup --datadir=./certs
+```
+
+This command generates the following files in the `certs` directory:
+
+```
+certs
+├── ca-key.pem
+├── ca.pem
+├── client-cert.pem
+├── client-key.pem
+├── private_key.pem
+├── public_key.pem
+├── server-cert.pem
+└── server-key.pem
+```
+
+The corresponding TiDB configuration file parameters are:
+
+```toml
+[security]
+ssl-cert = "certs/server-cert.pem"
+ssl-key = "certs/server-key.pem"
+```
+
+If the certificate parameters are correct, TiDB outputs `Secure connection is enabled` when started; otherwise, it outputs `Secure connection is NOT ENABLED`.
+
+## Configure the MySQL client to use encrypted connections
+
+The client of MySQL 5.7 or later versions attempts to establish an encrypted connection by default. If the server does not support encrypted connections, it automatically falls back to an unencrypted connection. The client of MySQL earlier than version 5.7 uses an unencrypted connection by default.
+
+You can change the connection behavior of the client using the following `--ssl-mode` parameters:
+
+- `--ssl-mode=REQUIRED`: The client requires an encrypted connection. The connection cannot be established if the server side does not support encrypted connections.
+- In the absence of the `--ssl-mode` parameter: The client attempts to use an encrypted connection, but falls back to an unencrypted connection if the server side does not support encrypted connections.
+- `--ssl-mode=DISABLED`: The client uses an unencrypted connection.
+
+For more information, see [Client-Side Configuration for Encrypted Connections](https://dev.mysql.com/doc/refman/5.7/en/using-encrypted-connections.html#using-encrypted-connections-client-side-configuration) in MySQL.
+
+## Enable authentication
+
+If the `ssl-ca` parameter is not specified in the TiDB server or MySQL client, the client or the server does not perform authentication by default and cannot prevent man-in-the-middle attacks. For example, the client might "securely" connect to a disguised server. You can configure the `ssl-ca` parameter for authentication in the server and client. Generally, you only need to authenticate the server, but you can also authenticate the client to further enhance the security.
+
++ To authenticate the TiDB server from the MySQL client:
+    1. Specify the `ssl-cert` and `ssl-key` parameters in the TiDB server.
+    2. Specify the `--ssl-ca` parameter in the MySQL client.
+    3. Set the `--ssl-mode` parameter to `VERIFY_IDENTITY` in the MySQL client.
+    4. Make sure that the certificate (`ssl-cert`) configured by the TiDB server is signed by the CA specified by the client `--ssl-ca` parameter; otherwise, the authentication fails.
+
++ To authenticate the MySQL client from the TiDB server:
+    1. Specify the `ssl-cert`, `ssl-key`, and `ssl-ca` parameters in the TiDB server.
+    2. Specify the `--ssl-cert` and `--ssl-key` parameters in the client.
+    3. Make sure the server-configured certificate and the client-configured certificate are both signed by the `ssl-ca` specified by the server.
+
++ To perform mutual authentication, meet both of the above requirements.
+
+> **Note**: Currently, it is optional for the TiDB server to authenticate the client. If the client does not present its identity certificate in the TLS handshake, the TLS connection can still be established successfully.
+
+## Check whether the current connection uses encryption
+
+Use the `SHOW STATUS LIKE "%Ssl%";` statement to get the details of the current connection, including whether encryption is used, the encryption protocol used by encrypted connections, the TLS version number and so on.
+
+See the following example of the result in an encrypted connection.
The results change according to different TLS versions or encryption protocols supported by the client. + +``` +mysql> SHOW STATUS LIKE "%Ssl%"; +...... +| Ssl_verify_mode | 5 | +| Ssl_version | TLSv1.2 | +| Ssl_cipher | ECDHE-RSA-AES128-GCM-SHA256 | +...... +``` + +Besides, for the official MySQL client, you can also use the `STATUS` or `\s` statement to view the connection status: + +``` +mysql> \s +... +SSL: Cipher in use is ECDHE-RSA-AES128-GCM-SHA256 +... +``` + +## Supported TLS versions, key exchange protocols, and encryption algorithms + +The TLS versions, key exchange protocols and encryption algorithms supported by TiDB are determined by the official Golang libraries. + +### Supported TLS versions + +- TLS 1.0 +- TLS 1.1 +- TLS 1.2 + +### Supported key exchange protocols and encryption algorithms + +- TLS\_RSA\_WITH\_RC4\_128\_SHA +- TLS\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA +- TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA +- TLS\_RSA\_WITH\_AES\_256\_CBC\_SHA +- TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA256 +- TLS\_RSA\_WITH\_AES\_128\_GCM\_SHA256 +- TLS\_RSA\_WITH\_AES\_256\_GCM\_SHA384 +- TLS\_ECDHE\_ECDSA\_WITH\_RC4\_128\_SHA +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_128\_CBC\_SHA +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_256\_CBC\_SHA +- TLS\_ECDHE\_RSA\_WITH\_RC4\_128\_SHA +- TLS\_ECDHE\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA +- TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA +- TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_128\_CBC\_SHA256 +- TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA256 +- TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_128\_GCM\_SHA256 +- TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_256\_GCM\_SHA384 +- TLS\_ECDHE\_RSA\_WITH\_CHACHA20\_POLY1305 +- TLS\_ECDHE\_ECDSA\_WITH\_CHACHA20\_POLY1305 diff --git a/v1.0/sql/encryption-and-compression-functions.md b/v1.0/sql/encryption-and-compression-functions.md new file mode 100755 index 0000000000000..fc76113c31c3f --- /dev/null +++ 
b/v1.0/sql/encryption-and-compression-functions.md @@ -0,0 +1,28 @@ +--- +title: Encryption and Compression Functions +category: user guide +--- + +# Encryption and Compression Functions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------| +| [`MD5()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_md5) | Calculate MD5 checksum | +| [`PASSWORD()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_password) (deprecated 5.7.6) | Calculate and return a password string | +| [`RANDOM_BYTES()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_random-bytes) | Return a random byte vector | +| [`SHA1(), SHA()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_sha1) | Calculate an SHA-1 160-bit checksum | +| [`SHA2()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_sha2) | Calculate an SHA-2 checksum | +| [`AES_DECRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_aes-decrypt) | Decrypt using AES | +| [`AES_ENCRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_aes-encrypt) | Encrypt using AES | +| [`COMPRESS()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_compress) | Return result as a binary string | +| [`UNCOMPRESS()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_uncompress) | Uncompress a string compressed | +| [`UNCOMPRESSED_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_uncompressed-length) | Return the length of a string before compression | +| [`CREATE_ASYMMETRIC_PRIV_KEY()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-asymmetric-priv-key) | Create 
private key | +| [`CREATE_ASYMMETRIC_PUB_KEY()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-asymmetric-pub-key) | Create public key | +| [`CREATE_DH_PARAMETERS()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-dh-parameters) | Generate shared DH secret | +| [`CREATE_DIGEST()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-digest) | Generate digest from string | +| [`ASYMMETRIC_DECRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-decrypt) | Decrypt ciphertext using private or public key | +| [`ASYMMETRIC_DERIVE()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-derive) | Derive symmetric key from asymmetric keys | +| [`ASYMMETRIC_ENCRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-encrypt) | Encrypt cleartext using private or public key | +| [`ASYMMETRIC_SIGN()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-sign) | Generate signature from digest | +| [`ASYMMETRIC_VERIFY()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-verify) | Verify that signature matches digest | diff --git a/v1.0/sql/error.md b/v1.0/sql/error.md new file mode 100755 index 0000000000000..dd756016042cb --- /dev/null +++ b/v1.0/sql/error.md @@ -0,0 +1,26 @@ +--- +title: Error Codes and Troubleshooting +category: user guide +--- + +# Error Codes and Troubleshooting + +This document describes the problems encountered during the use of TiDB and provides the solutions. + +## Error codes + +TiDB is compatible with the error codes in MySQL, and in most cases returns the same error code as MySQL. 
In addition, TiDB has the following unique error codes:
+
+| Error code | Description | Solution |
+| ---- | ------- | --------- |
+| 9001 | The PD request timed out. | Check the state/monitor/log of the PD server and the network between the TiDB server and the PD server. |
+| 9002 | The TiKV request timed out. | Check the state/monitor/log of the TiKV server and the network between the TiDB server and the TiKV server. |
+| 9003 | The TiKV server is busy. This usually occurs when the workload is too high. | Check the state/monitor/log of the TiKV server. |
+| 9004 | This error occurs when a large number of transactional conflicts exist in the database. | Check the application code. |
+| 9005 | A certain Raft Group is not available, for example, because there are not enough replicas. This error usually occurs when the TiKV server is busy or the TiKV node is down. | Check the state/monitor/log of the TiKV server. |
+| 9006 | The interval of GC Life Time is too short, so the data to be read by long transactions might have been cleared. | Extend the interval of GC Life Time. |
+| 9500 | A single transaction is too large. | See [here](../FAQ.md#the-error-message-transaction-too-large-is-displayed) for the solution. |
+
+## Troubleshooting
+
+See the [troubleshooting](../trouble-shooting.md) and [FAQ](../FAQ.md) documents.
\ No newline at end of file
diff --git a/v1.0/sql/expression-syntax.md b/v1.0/sql/expression-syntax.md
new file mode 100755
index 0000000000000..b8764683309e9
--- /dev/null
+++ b/v1.0/sql/expression-syntax.md
@@ -0,0 +1,67 @@
+---
+title: Expression Syntax
+category: user guide
+---
+
+# Expression Syntax
+
+The following rules define the expression syntax in TiDB. You can find the definition in `parser/parser.y`. The syntax parsing in TiDB is based on Yacc.
+ +``` +Expression: + singleAtIdentifier assignmentEq Expression + | Expression logOr Expression + | Expression "XOR" Expression + | Expression logAnd Expression + | "NOT" Expression + | Factor IsOrNotOp trueKwd + | Factor IsOrNotOp falseKwd + | Factor IsOrNotOp "UNKNOWN" + | Factor + +Factor: + Factor IsOrNotOp "NULL" + | Factor CompareOp PredicateExpr + | Factor CompareOp singleAtIdentifier assignmentEq PredicateExpr + | Factor CompareOp AnyOrAll SubSelect + | PredicateExpr + +PredicateExpr: + PrimaryFactor InOrNotOp '(' ExpressionList ')' + | PrimaryFactor InOrNotOp SubSelect + | PrimaryFactor BetweenOrNotOp PrimaryFactor "AND" PredicateExpr + | PrimaryFactor LikeOrNotOp PrimaryExpression LikeEscapeOpt + | PrimaryFactor RegexpOrNotOp PrimaryExpression + | PrimaryFactor + +PrimaryFactor: + PrimaryFactor '|' PrimaryFactor + | PrimaryFactor '&' PrimaryFactor + | PrimaryFactor "<<" PrimaryFactor + | PrimaryFactor ">>" PrimaryFactor + | PrimaryFactor '+' PrimaryFactor + | PrimaryFactor '-' PrimaryFactor + | PrimaryFactor '*' PrimaryFactor + | PrimaryFactor '/' PrimaryFactor + | PrimaryFactor '%' PrimaryFactor + | PrimaryFactor "DIV" PrimaryFactor + | PrimaryFactor "MOD" PrimaryFactor + | PrimaryFactor '^' PrimaryFactor + | PrimaryExpression + +PrimaryExpression: + Operand + | FunctionCallKeyword + | FunctionCallNonKeyword + | FunctionCallAgg + | FunctionCallGeneric + | Identifier jss stringLit + | Identifier juss stringLit + | SubSelect + | '!' 
PrimaryExpression + | '~' PrimaryExpression + | '-' PrimaryExpression + | '+' PrimaryExpression + | "BINARY" PrimaryExpression + | PrimaryExpression "COLLATE" StringName +``` diff --git a/v1.0/sql/functions-and-operators-reference.md b/v1.0/sql/functions-and-operators-reference.md new file mode 100755 index 0000000000000..1851b6b2b9c5c --- /dev/null +++ b/v1.0/sql/functions-and-operators-reference.md @@ -0,0 +1,12 @@ +--- +title: Function and Operator Reference +category: user guide +--- + +# Function and Operator Reference + +The usage of the functions and operators in TiDB is similar to MySQL. See [Functions and Operators in MySQL](https://dev.mysql.com/doc/refman/5.7/en/functions.html). + +In SQL statements, expressions can be used on the `ORDER BY` and `HAVING` clauses of the `SELECT` statement, the `WHERE` clause of `SELECT`/`DELETE`/`UPDATE` statements, and `SET` statements. + +You can write expressions using literals, column names, NULL, built-in functions, operators and so on. diff --git a/v1.0/sql/information-functions.md b/v1.0/sql/information-functions.md new file mode 100755 index 0000000000000..e40825244a53b --- /dev/null +++ b/v1.0/sql/information-functions.md @@ -0,0 +1,24 @@ +--- +title: Information Functions +category: user guide +--- + +# Information Functions + +In TiDB, the usage of information functions is similar to MySQL. For more information, see [Information Functions](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html). 
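+
+For illustration, several information functions can be called in a single query (the returned values depend on the current connection and deployment):
+
+```sql
+-- The values vary per connection and per server
+SELECT CONNECTION_ID(), CURRENT_USER(), DATABASE(), VERSION();
+```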
+
+## Information function descriptions
+
+| Name | Description |
+|:-----|:------------|
+| [`CONNECTION_ID()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_connection-id) | Return the connection ID (thread ID) for the connection |
+| [`CURRENT_USER()`, `CURRENT_USER`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_current-user) | Return the authenticated user name and host name |
+| [`DATABASE()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_database) | Return the default (current) database name |
+| [`FOUND_ROWS()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_found-rows) | For a `SELECT` with a `LIMIT` clause, return the number of rows that would be returned if there were no `LIMIT` clause |
+| [`LAST_INSERT_ID()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_last-insert-id) | Return the value of the `AUTO_INCREMENT` column for the last `INSERT` |
+| [`SCHEMA()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_schema) | Synonym for `DATABASE()` |
+| [`SESSION_USER()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_session-user) | Synonym for `USER()` |
+| [`SYSTEM_USER()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_system-user) | Synonym for `USER()` |
+| [`USER()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_user) | Return the user name and host name provided by the client |
+| [`VERSION()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_version) | Return a string that indicates the MySQL server version |
+| `TIDB_VERSION()` | Return a string that indicates the TiDB server version |
diff --git a/v1.0/sql/json-functions-generated-column.md new file mode 100755 index 0000000000000..3cbb40a9f5c1c --- /dev/null +++ 
b/v1.0/sql/json-functions-generated-column.md @@ -0,0 +1,117 @@
+---
+title: JSON Functions and Generated Column
+category: user guide
+---
+
+# JSON Functions and Generated Column
+
+## About
+
+To be compatible with MySQL 5.7 or later and to better support the document store, TiDB supports JSON in the latest version. In TiDB, a document is a set of Key-Value pairs, encoded as a JSON object. You can use the JSON data type in a TiDB table and create indexes for the JSON document fields using generated columns. In this way, you can flexibly handle business scenarios with an uncertain schema, and you are no longer limited by the read performance and the lack of transaction support in traditional document databases.
+
+## JSON functions
+
+The JSON support in TiDB mainly follows the user interface of MySQL 5.7. For example, you can create a table that includes a JSON field to store complex information:
+
+```sql
+CREATE TABLE person (
+    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
+    name VARCHAR(255) NOT NULL,
+    address_info JSON
+);
+```
+
+When you insert data into this table, you can handle data with an uncertain schema like this:
+
+```sql
+INSERT INTO person (name, address_info) VALUES ("John", '{"city": "Beijing"}');
+```
+
+You can insert JSON data into the table by inserting a legal JSON string into the column corresponding to the JSON field. TiDB then parses the text and saves it in a more compact, easy-to-access binary form.
+
+You can also convert other data types into JSON using `CAST`:
+
+```sql
+INSERT INTO person (name, address_info) VALUES ("John", CAST('{"city": "Beijing"}' AS JSON));
+INSERT INTO person (name, address_info) VALUES ("John", CAST('123' AS JSON));
+INSERT INTO person (name, address_info) VALUES ("John", CAST(123 AS JSON));
+```
+
+Now, if you want to query all the users living in Beijing from the table, you can simply use the following SQL statement:
+
+```sql
+SELECT id, name FROM person WHERE JSON_EXTRACT(address_info, '$.city') = 'Beijing';
+```
+
+TiDB supports the `JSON_EXTRACT` function, which behaves exactly the same as in MySQL. Here, the function extracts the `city` field from the `address_info` document. The second argument is a "path expression" that specifies which field to extract. The following examples help you understand path expressions:
+
+```sql
+SET @person = '{"name":"John","friends":[{"name":"Forest","age":16},{"name":"Zhang San","gender":"male"}]}';
+
+SELECT JSON_EXTRACT(@person, '$.name');             -- gets "John"
+SELECT JSON_EXTRACT(@person, '$.friends[0].age');   -- gets 16
+SELECT JSON_EXTRACT(@person, '$.friends[1].gender'); -- gets "male"
+SELECT JSON_EXTRACT(@person, '$.friends[2].name');  -- gets NULL
+```
+
+In addition to inserting and querying data, TiDB also supports editing JSON.
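For example, a sketch of editing a stored document with `JSON_SET` and `JSON_REMOVE`, reusing the `person` table above:

```sql
-- Add or update fields in John's address document:
UPDATE person
SET address_info = JSON_SET(address_info, '$.city', 'Shanghai', '$.district', 'Pudong')
WHERE name = 'John';

-- Remove a field again:
UPDATE person
SET address_info = JSON_REMOVE(address_info, '$.district')
WHERE name = 'John';
```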
In general, TiDB currently supports the following JSON functions in MySQL 5.7:
+
+- [JSON_EXTRACT](https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-extract)
+- [JSON_ARRAY](https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-array)
+- [JSON_OBJECT](https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-object)
+- [JSON_SET](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-set)
+- [JSON_REPLACE](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-replace)
+- [JSON_INSERT](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-insert)
+- [JSON_REMOVE](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-remove)
+- [JSON_TYPE](https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-type)
+- [JSON_UNQUOTE](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-unquote)
+
+You can infer the general purpose of these functions directly from their names, and they behave the same in TiDB as in MySQL 5.7. For more information, see the [JSON Functions document of MySQL 5.7](https://dev.mysql.com/doc/refman/5.7/en/json-functions.html). If you are a user of MySQL 5.7, you can migrate to TiDB seamlessly.
+
+Currently, TiDB does not support all the JSON functions in MySQL 5.7, because the preliminary goal is to provide complete support for the **MySQL X Plugin**, which covers the majority of JSON functions used to insert, select, update, and delete data. More functions will be supported as necessary.
+
+## Index JSON using generated column
+
+Querying a JSON field directly executes a full table scan, as the result of the `EXPLAIN` statement in TiDB shows. So, can you index a JSON field?
+
+First, note that this type of index cannot be created:
+
+```sql
+CREATE TABLE person (
+    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
+    name VARCHAR(255) NOT NULL,
+    address_info JSON,
+    KEY (address_info)
+);
+```
+
+This is not because it is technically impossible, but because directly comparing JSON values is meaningless. Although we could agree on some comparison rules, such as `ARRAY` being larger than any `OBJECT`, such rules would be useless. Therefore, as in MySQL 5.7, TiDB prohibits creating an index directly on a JSON field, but you can index the fields inside a JSON document in the form of a generated column:
+
+```sql
+CREATE TABLE person (
+    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
+    name VARCHAR(255) NOT NULL,
+    address_info JSON,
+    city VARCHAR(64) AS (JSON_EXTRACT(address_info, '$.city')) VIRTUAL,
+    KEY (city)
+);
+```
+
+In this table, the `city` column is a **generated column**. As the name implies, the column is generated from other columns in the table, and cannot be assigned a value on insert or update. You can declare a generated column as `VIRTUAL` so that it is not explicitly stored in the record but is computed from other columns when needed. This is particularly useful when the column is wide and you need to save storage space. With this generated column, you can create an index on it, and it looks the same as any other regular column. In queries, you can run the following statement:
+
+```sql
+SELECT name, id FROM person WHERE city = 'Beijing';
+```
+
+In this way, the query can use the index.
+
+> **Note**: In the JSON document, if the field at the specified path does not exist, the result of `JSON_EXTRACT` is `NULL`, and the value of the indexed generated column is also `NULL`. If this is not what you want, you can add a `NOT NULL` constraint on the generated column. Then, if the value of the `city` field is `NULL` when you insert data, the error can be detected.
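As a sketch of the `NOT NULL` variant mentioned in the note (the table name `person_strict` is purely illustrative):

```sql
CREATE TABLE person_strict (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    address_info JSON,
    city VARCHAR(64) AS (JSON_EXTRACT(address_info, '$.city')) VIRTUAL NOT NULL,
    KEY (city)
);

-- This insert is rejected: address_info has no city field,
-- so the generated column evaluates to NULL.
INSERT INTO person_strict (name, address_info) VALUES ('Mike', '{"province": "Zhejiang"}');
```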
+
+## Limitations
+
+The current limitations of JSON and generated column are as follows:
+
+- You cannot add a generated column of the `STORED` type through `ALTER TABLE`.
+- You cannot create an index on a generated column through `ALTER TABLE`.
+
+Support for the above operations and some other JSON functions is under development.
diff --git a/v1.0/sql/json-functions.md b/v1.0/sql/json-functions.md new file mode 100755 index 0000000000000..c0ef4fbd73a33 --- /dev/null +++ b/v1.0/sql/json-functions.md @@ -0,0 +1,32 @@
+---
+title: JSON Functions
+category: user guide
+---
+
+# JSON Functions
+
+| Function Name and Syntactic Sugar | Description |
+| ---------- | ------------------ |
+| [JSON_EXTRACT(json_doc, path[, path] ...)][json_extract] | Return data from a JSON document, selected from the parts of the document matched by the `path` arguments |
+| [JSON_UNQUOTE(json_val)][json_unquote] | Unquote JSON value and return the result as a `utf8mb4` string |
+| [JSON_TYPE(json_val)][json_type] | Return a `utf8mb4` string indicating the type of a JSON value |
+| [JSON_SET(json_doc, path, val[, path, val] ...)][json_set] | Insert or update data in a JSON document and return the result |
+| [JSON_INSERT(json_doc, path, val[, path, val] ...)][json_insert] | Insert data into a JSON document and return the result |
+| [JSON_REPLACE(json_doc, path, val[, path, val] ...)][json_replace] | Replace existing values in a JSON document and return the result |
+| [JSON_REMOVE(json_doc, path[, path] ...)][json_remove] | Remove data from a JSON document and return the result |
+| [JSON_MERGE(json_doc, json_doc[, json_doc] ...)][json_merge] | Merge two or more JSON documents and return the merged result |
+| [JSON_OBJECT(key, val[, key, val] ...)][json_object] | Evaluate a (possibly empty) list of key-value pairs and return a JSON object containing those pairs |
+| [JSON_ARRAY([val[, val] ...])][json_array] | Evaluate a (possibly empty) list of values and return a JSON array 
containing those values |
+| -> | Return value from JSON column after evaluating path; syntactic sugar for `JSON_EXTRACT(doc, path_literal)` |
+| ->> | Return value from JSON column after evaluating path and unquoting the result; syntactic sugar for `JSON_UNQUOTE(JSON_EXTRACT(doc, path_literal))` |
+
+[json_extract]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-extract
+[json_unquote]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-unquote
+[json_type]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-type
+[json_set]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-set
+[json_insert]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-insert
+[json_replace]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-replace
+[json_remove]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-remove
+[json_merge]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-merge
+[json_object]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-object
+[json_array]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-array
diff --git a/v1.0/sql/keywords-and-reserved-words.md b/v1.0/sql/keywords-and-reserved-words.md new file mode 100755 index 0000000000000..e7f522b80a0d5 --- /dev/null +++ b/v1.0/sql/keywords-and-reserved-words.md @@ -0,0 +1,145 @@
+---
+title: Keywords and Reserved Words
+category: user guide
+---
+
+# Keywords and Reserved Words
+
+Keywords are words that have significance in SQL. Certain keywords, such as `SELECT`, `UPDATE`, or `DELETE`, are reserved and require special treatment for use as identifiers such as table and column names.
For example, as table names, the reserved words must be quoted with backquotes: + +``` +mysql> CREATE TABLE select (a INT); +ERROR 1105 (HY000): line 0 column 19 near " (a INT)" (total length 27) +mysql> CREATE TABLE `select` (a INT); +Query OK, 0 rows affected (0.09 sec) +``` + +The `BEGIN` and `END` are keywords but not reserved words, so you do not need to quote them with backquotes: + +``` +mysql> CREATE TABLE `select` (BEGIN int, END int); +Query OK, 0 rows affected (0.09 sec) +``` + +Exception: A word that follows a period `.` qualifier does not need to be quoted with backquotes either: + +``` +mysql> CREATE TABLE test.select (BEGIN int, END int); +Query OK, 0 rows affected (0.08 sec) +``` + +The following table lists the keywords and reserved words in TiDB. The reserved words are labelled with (R). + +| ACTION | ADD (R) | ADDDATE | +|:------------------------|:-------------------|:-----------------------| +| ADMIN | AFTER | ALL (R) | +| ALTER (R) | ALWAYS | ANALYZE(R) | +| AND (R) | ANY | AS (R) | +| ASC (R) | ASCII | AUTO_INCREMENT | +| AVG | AVG_ROW_LENGTH | BEGIN | +| BETWEEN (R) | BIGINT (R) | BINARY (R) | +| BINLOG | BIT | BIT_XOR | +| BLOB (R) | BOOL | BOOLEAN | +| BOTH (R) | BTREE | BY (R) | +| BYTE | CASCADE (R) | CASE (R) | +| CAST | CHANGE (R) | CHAR (R) | +| CHARACTER (R) | CHARSET | CHECK (R) | +| CHECKSUM | COALESCE | COLLATE (R) | +| COLLATION | COLUMN (R) | COLUMNS | +| COMMENT | COMMIT | COMMITTED | +| COMPACT | COMPRESSED | COMPRESSION | +| CONNECTION | CONSISTENT | CONSTRAINT (R) | +| CONVERT (R) | COUNT | CREATE (R) | +| CROSS (R) | CURRENT_DATE (R) | CURRENT_TIME (R) | +| CURRENT_TIMESTAMP (R) | CURRENT_USER (R) | CURTIME | +| DATA | DATABASE (R) | DATABASES (R) | +| DATE | DATE_ADD | DATE_SUB | +| DATETIME | DAY | DAY_HOUR (R) | +| DAY_MICROSECOND (R) | DAY_MINUTE (R) | DAY_SECOND (R) | +| DDL | DEALLOCATE | DEC | +| DECIMAL (R) | DEFAULT (R) | DELAY_KEY_WRITE | +| DELAYED (R) | DELETE (R) | DESC (R) | +| DESCRIBE (R) | DISABLE | 
DISTINCT (R) |
+| DISTINCTROW (R) | DIV (R) | DO |
+| DOUBLE (R) | DROP (R) | DUAL (R) |
+| DUPLICATE | DYNAMIC | ELSE (R) |
+| ENABLE | ENCLOSED | END |
+| ENGINE | ENGINES | ENUM |
+| ESCAPE | ESCAPED | EVENTS |
+| EXCLUSIVE | EXECUTE | EXISTS |
+| EXPLAIN (R) | EXTRACT | FALSE (R) |
+| FIELDS | FIRST | FIXED |
+| FLOAT (R) | FLUSH | FOR (R) |
+| FORCE (R) | FOREIGN (R) | FORMAT |
+| FROM (R) | FULL | FULLTEXT (R) |
+| FUNCTION | GENERATED (R) | GET_FORMAT |
+| GLOBAL | GRANT (R) | GRANTS |
+| GROUP (R) | GROUP_CONCAT | HASH |
+| HAVING (R) | HIGH_PRIORITY (R) | HOUR |
+| HOUR_MICROSECOND (R) | HOUR_MINUTE (R) | HOUR_SECOND (R) |
+| IDENTIFIED | IF (R) | IGNORE (R) |
+| IN (R) | INDEX (R) | INDEXES |
+| INFILE (R) | INNER (R) | INSERT (R) |
+| INT (R) | INTEGER (R) | INTERVAL (R) |
+| INTO (R) | IS (R) | ISOLATION |
+| JOBS | JOIN (R) | JSON |
+| KEY (R) | KEY_BLOCK_SIZE | KEYS (R) |
+| KILL (R) | LEADING (R) | LEFT (R) |
+| LESS | LEVEL | LIKE (R) |
+| LIMIT (R) | LINES (R) | LOAD (R) |
+| LOCAL | LOCALTIME (R) | LOCALTIMESTAMP (R) |
+| LOCK (R) | LONGBLOB (R) | LONGTEXT (R) |
+| LOW_PRIORITY (R) | MAX | MAX_ROWS |
+| MAXVALUE (R) | MEDIUMBLOB (R) | MEDIUMINT (R) |
+| MEDIUMTEXT (R) | MICROSECOND | MIN |
+| MIN_ROWS | MINUTE | MINUTE_MICROSECOND (R) |
+| MINUTE_SECOND (R) | MOD (R) | MODE |
+| MODIFY | | |
+| MONTH | NAMES | NATIONAL |
+| NATURAL (R) | NO | NO_WRITE_TO_BINLOG (R) |
+| NONE | NOT (R) | NOW |
+| NULL (R) | NUMERIC (R) | NVARCHAR (R) |
+| OFFSET | ON (R) | ONLY |
+| OPTION (R) | OR (R) | ORDER (R) |
+| OUTER (R) | PARTITION (R) | PARTITIONS |
+| PASSWORD | PLUGINS | POSITION |
+| PRECISION (R) | PREPARE | PRIMARY (R) |
+| PRIVILEGES | PROCEDURE (R) | PROCESS |
+| PROCESSLIST | QUARTER | QUERY |
+| QUICK | RANGE (R) | READ (R) |
+| REAL (R) | REDUNDANT | REFERENCES (R) |
+| REGEXP (R) | RENAME (R) | REPEAT (R) |
+| REPEATABLE | REPLACE (R) | RESTRICT (R) |
+| REVERSE | REVOKE (R) | RIGHT 
(R) | +| RLIKE (R) | ROLLBACK | ROW | +| ROW_COUNT | ROW_FORMAT | SCHEMA | +| SCHEMAS | SECOND | SECOND_MICROSECOND (R) | +| SELECT (R) | SERIALIZABLE | SESSION | +| SET (R) | SHARE | SHARED | +| SHOW (R) | SIGNED | SMALLINT (R) | +| SNAPSHOT | SOME | SQL_CACHE | +| SQL_CALC_FOUND_ROWS (R) | SQL_NO_CACHE | START | +| STARTING (R) | STATS | STATS_BUCKETS | +| STATS_HISTOGRAMS | STATS_META | STATS_PERSISTENT | +| STATUS | STORED (R) | SUBDATE | +| SUBSTR | SUBSTRING | SUM | +| SUPER | TABLE (R) | TABLES | +| TERMINATED (R) | TEXT | THAN | +| THEN (R) | TIDB | TIDB_INLJ | +| TIDB_SMJ | TIME | TIMESTAMP | +| TIMESTAMPADD | TIMESTAMPDIFF | TINYBLOB (R) | +| TINYINT (R) | TINYTEXT (R) | TO (R) | +| TRAILING (R) | TRANSACTION | TRIGGER (R) | +| TRIGGERS | TRIM | TRUE (R) | +| TRUNCATE | UNCOMMITTED | UNION (R) | +| UNIQUE (R) | UNKNOWN | UNLOCK (R) | +| UNSIGNED (R) | UPDATE (R) | USE (R) | +| USER | USING (R) | UTC_DATE (R) | +| UTC_TIME (R) | UTC_TIMESTAMP (R) | VALUE | +| VALUES (R) | VARBINARY (R) | VARCHAR (R) | +| VARIABLES | VIEW | VIRTUAL (R) | +| WARNINGS | WEEK | WHEN (R) | +| WHERE (R) | WITH (R) | WRITE (R) | +| XOR (R) | YEAR | YEAR_MONTH (R) | | +| ZEROFILL (R) | | | diff --git a/v1.0/sql/literal-values.md b/v1.0/sql/literal-values.md new file mode 100755 index 0000000000000..1914dbfff3a3f --- /dev/null +++ b/v1.0/sql/literal-values.md @@ -0,0 +1,241 @@ +--- +title: Literal Values +category: user guide +--- + +# Literal Values + +## String literals + +A string is a sequence of bytes or characters, enclosed within either single quote `'` or double quote `"` characters. For example: + +``` +'example string' +"example string" +``` + +Quoted strings placed next to each other are concatenated to a single string. 
The following lines are equivalent:
+
+```
+'a string'
+'a' ' ' 'string'
+"a" ' ' "string"
+```
+
+If the `ANSI_QUOTES` SQL MODE is enabled, string literals can be quoted only within single quotation marks, because a string quoted within double quotation marks is interpreted as an identifier.
+
+A binary string is a string of bytes. Each binary string has a character set and collation named `binary`. A non-binary string is a string of characters. It has a character set other than `binary` and a collation that is compatible with the character set.
+
+For both types of strings, comparisons are based on the numeric values of the string unit. For binary strings, the unit is the byte. For non-binary strings, the unit is the character, and some character sets support multibyte characters.
+
+A string literal may have an optional `character set introducer` and `COLLATE clause`, to designate it as a string that uses a specific character set and collation. TiDB only supports this in syntax, but does not process it.
+
+```
+[_charset_name]'string' [COLLATE collation_name]
+```
+
+For example:
+
+```
+SELECT _latin1'string';
+SELECT _binary'string';
+SELECT _utf8'string' COLLATE utf8_bin;
+```
+
+You can use N'literal' (or n'literal') to create a string in the national character set. The following statements are equivalent:
+
+```
+SELECT N'some text';
+SELECT n'some text';
+SELECT _utf8'some text';
+```
+
+Escape characters:
+
+- `\0`: An ASCII NUL (X'00') character
+- `\'`: A single quote (') character
+- `\"`: A double quote (") character
+- `\b`: A backspace character
+- `\n`: A newline (linefeed) character
+- `\r`: A carriage return character
+- `\t`: A tab character
+- `\z`: ASCII 26 (Ctrl + Z)
+- `\\`: A backslash `\` character
+- `\%`: A `%` character
+- `\_`: A `_` character
+
+You can use the following ways to include quote characters within a string:
+
+- A `'` inside a string quoted with `'` may be written as `''`. 
+- A `"` inside a string quoted with `"` may be written as `""`. +- Precede the quote character by an escape character `\`. +- A `'` inside a string quoted with `"` needs no special treatment, and a `"` inside a string quoted with `'` needs no special treatment either. + +For more information, see [String Literals in MySQL](https://dev.mysql.com/doc/refman/5.7/en/string-literals.html). + +## Numeric literals + +Numeric literals include integer and DECIMAL literals and floating-point literals. + +Integer may include `.` as a decimal separator. Numbers may be preceded by `-` or `+` to indicate a negative or positive value respectively. + +Exact-value numeric literals can be represented as `1, .2, 3.4, -5, -6.78, +9.10`. + +Numeric literals can also be represented in scientific notation, such as `1.2E3, 1.2E-3, -1.2E3, -1.2E-3`. + +For more information, see [Numeric Literals in MySQL](https://dev.mysql.com/doc/refman/5.7/en/number-literals.html). + +## NULL values + +The `NULL` value means “no data”. NULL can be written in any letter case. A synonym is `\N` (case sensitive). + +Be aware that the `NULL` value is different from values such as `0` for numeric types or the empty string `''` for string types. + +## Hexadecimal literals + +Hexadecimal literal values are written using `X'val'` or `0xval` notation, where `val` contains hexadecimal digits. A leading `0x` is case sensitive and cannot be written as `0X`. + +Legal hexadecimal literals: + +``` +X'ac12' +X'12AC' +x'ac12' +x'12AC' +0xac12 +0x12AC +``` + +Illegal hexadecimal literals: + +``` +X'1z' (z is not a hexadecimal legal digit) +0X12AC (0X must be written as 0x) +``` + +Hexadecimal literals written using `X'val'` notation must contain an even number of digits. 
To avoid the syntax error, pad the value with a leading zero: + +``` +mysql> select X'aff'; +ERROR 1105 (HY000): line 0 column 13 near ""hex literal: invalid hexadecimal format, must even numbers, but 3 (total length 13) +mysql> select X'0aff'; ++---------+ +| X'0aff' | ++---------+ +| + | ++---------+ +1 row in set (0.00 sec) +``` + +By default, a hexadecimal literal is a binary string. + +To convert a string or a number to a string in hexadecimal format, use the `HEX()` function: + +``` +mysql> SELECT HEX('TiDB'); ++-------------+ +| HEX('TiDB') | ++-------------+ +| 54694442 | ++-------------+ +1 row in set (0.01 sec) + +mysql> SELECT X'54694442'; ++-------------+ +| X'54694442' | ++-------------+ +| TiDB | ++-------------+ +1 row in set (0.00 sec) +``` + +## Date and time literals + +Date and time values can be represented in several formats, such as quoted strings or as numbers. When TiDB expects a date, it interprets any of `'2015-07-21'`, `'20150721'` and `20150721` as a date. + +TiDB supports the following formats for date values: + +- As a string in either `'YYYY-MM-DD'` or `'YY-MM-DD'` format. The `-` delimiter is "relaxed" in syntax. Any punctuation character may be used as the delimiter between date parts. For example, `'2017-08-24'`, `'2017&08&24'` and `'2012@12^31'` are equivalent. The only delimiter recognized is the `.` character, which is treated as a decimal point to separate the integer and fractional parts. The date and time parts can be separated by `T` other than a space. For example, `2017-8-24 10:42:00` and `2017-8-24T10:42:00` are equivalent. +- As a string with no delimiters in either `'YYYYMMDDHHMMSS'` or `'YYMMDDHHMMSS'` format. For example, `'20170824104520'` and `'170824104520'` are interpreted as `'2017-08-24 10:45:20'`. But `'170824304520'` is illegal because the hour part exceeds the legal range. +- As a number in either `YYYYMMDDHHMMSS` or `YYMMDDHHMMSS` format, without single quotation marks or double quotation marks. 
For example, `20170824104520` is interpreted as `'2017-08-24 10:45:20'`.
+
+A DATETIME or TIMESTAMP value can include a trailing fractional seconds part with up to microseconds (6 digits) precision. The fractional part should always be separated from the rest of the time by a decimal point.
+
+Dates containing two-digit year values are ambiguous. It is recommended to use the four-digit format. TiDB interprets two-digit year values using the following rules:
+
+- Year values in the range of `70-99` are converted to `1970-1999`.
+- Year values in the range of `00-69` are converted to `2000-2069`.
+
+For values specified as strings that include date part delimiters, it is unnecessary to specify two digits for month or day values that are less than 10. `'2017-8-4'` is the same as `'2017-08-04'`. Similarly, for values specified as strings that include time part delimiters, it is unnecessary to specify two digits for hour, minute, or second values that are less than 10. `'2017-08-24 1:2:3'` is the same as `'2017-08-24 01:02:03'`.
+
+In TiDB, the date or time values specified as numbers are interpreted according to their length:
+
+- 6 digits: `YYMMDD`
+- 12 digits: `YYMMDDHHMMSS`
+- 8 digits: `YYYYMMDD`
+- 14 digits: `YYYYMMDDHHMMSS`
+
+TiDB supports the following formats for time values:
+
+- As a string in `'D HH:MM:SS'` format. You can also use one of the following “relaxed” syntaxes: `'HH:MM:SS'`, `'HH:MM'`, `'D HH:MM'`, `'D HH'`, or `'SS'`. Here D represents days and the legal value range is `0-34`.
+- As a number in `'HHMMSS'` format. For example, `231010` is interpreted as `'23:10:10'`.
+- A number in any of the `SS`, `MMSS`, or `HHMMSS` formats can be treated as time.
+
+The time value can also include a trailing fractional part with up to 6 digits of precision. The `.` character represents the decimal point.
+
+For more information, see [Date and Time Literals in MySQL](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-literals.html). 
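A short sketch of the interpretation rules above:

```sql
SELECT CAST('70-01-01' AS DATE);       -- two-digit year 70 maps to 1970-01-01
SELECT CAST('69-01-01' AS DATE);       -- two-digit year 69 maps to 2069-01-01
SELECT CAST(20170824 AS DATE);         -- 8 digits are read as YYYYMMDD: 2017-08-24
SELECT CAST(170824104520 AS DATETIME); -- 12 digits are read as YYMMDDHHMMSS: 2017-08-24 10:45:20
```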
+ +## Boolean literals + +The constants `TRUE` and `FALSE` evaluate to 1 and 0 respectively, which are not case sensitive. + +``` +mysql> SELECT TRUE, true, tRuE, FALSE, FaLsE, false; ++------+------+------+-------+-------+-------+ +| TRUE | true | tRuE | FALSE | FaLsE | false | ++------+------+------+-------+-------+-------+ +| 1 | 1 | 1 | 0 | 0 | 0 | ++------+------+------+-------+-------+-------+ +1 row in set (0.00 sec) +``` + +## Bit-value literals + +Bit-value literals are written using `b'val'` or `0bval` notation. The `val` is a binary value written using zeros and ones. A leading `0b` is case sensitive and cannot be written as `0B`. + +Legal bit-value literals: + +``` +b'01' +B'01' +0b01 +``` + +Illegal bit-value literals: + +``` +b'2' (2 is not a binary digit; it must be 0 or 1) +0B01 (0B must be written as 0b) +``` + +By default, a bit-value literal is a binary string. + +Bit values are returned as binary values, which may not display well in the MySQL client. To convert a bit value to printable form, you can use a conversion function such as `BIN()` or `HEX()`. 
+ +```sql +CREATE TABLE t (b BIT(8)); +INSERT INTO t SET b = b'00010011'; +INSERT INTO t SET b = b'1110'; +INSERT INTO t SET b = b'100101'; + +mysql> SELECT b+0, BIN(b), HEX(b) FROM t; ++------+--------+--------+ +| b+0 | BIN(b) | HEX(b) | ++------+--------+--------+ +| 19 | 10011 | 13 | +| 14 | 1110 | E | +| 37 | 100101 | 25 | ++------+--------+--------+ +3 rows in set (0.00 sec) +``` diff --git a/v1.0/sql/miscellaneous-functions.md b/v1.0/sql/miscellaneous-functions.md new file mode 100755 index 0000000000000..99b82ba2d21e3 --- /dev/null +++ b/v1.0/sql/miscellaneous-functions.md @@ -0,0 +1,23 @@ +--- +title: Miscellaneous Functions +category: user guide +--- + +# Miscellaneous Functions + +| Name | Description | +|:------------|:-----------------------------------------------------------------------------------------------| +| [`ANY_VALUE()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_any-value) | Suppress ONLY_FULL_GROUP_BY value rejection | +| [`SLEEP()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_sleep) | Sleep for a number of seconds | +| [`UUID()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid) | Return a Universal Unique Identifier (UUID) | +| [`VALUES()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_values) | Defines the values to be used during an INSERT | +| [`INET_ATON()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet-aton) | Return the numeric value of an IP address | +| [`INET_NTOA()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet-ntoa) | Return the IP address from a numeric value | +| [`INET6_ATON()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet6-aton) | Return the numeric value of an IPv6 address | +| 
[`INET6_NTOA()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet6-ntoa) | Return the IPv6 address from a numeric value | +| [`IS_IPV4()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv4) | Whether argument is an IPv4 address | +| [`IS_IPV4_COMPAT()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv4-compat) | Whether argument is an IPv4-compatible address | +| [`IS_IPV4_MAPPED()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv4-mapped) | Whether argument is an IPv4-mapped address | +| [`IS_IPV6()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv6) | Whether argument is an IPv6 address | +| [`GET_LOCK()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_get-lock) | Get a named lock | +| [`RELEASE_LOCK()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_release-lock) | Releases the named lock | diff --git a/v1.0/sql/mysql-compatibility.md b/v1.0/sql/mysql-compatibility.md new file mode 100755 index 0000000000000..6f94a0f52097d --- /dev/null +++ b/v1.0/sql/mysql-compatibility.md @@ -0,0 +1,103 @@ +--- +title: Compatibility with MySQL +category: user guide +--- + +# Compatibility with MySQL + +TiDB supports the majority of the MySQL grammar, including cross-row transactions, JOIN, subquery, and so on. You can connect to TiDB directly using your own MySQL client. If your existing business is developed based on MySQL, you can replace MySQL with TiDB to power your application without changing a single line of code in most cases. + +TiDB is compatible with most of the MySQL database management & administration tools such as `PHPMyAdmin`, `Navicat`, `MySQL Workbench`, and so on. It also supports the database backup tools, such as `mysqldump` and `mydumper/myloader`. 
+
+However, in TiDB, the following MySQL features are not supported for the time being or behave differently:
+
+## Unsupported features
+
++ Stored procedures
++ Views
++ Triggers
++ User-defined functions
++ `FOREIGN KEY` constraints
++ `FULLTEXT` indexes
++ `Spatial` indexes
++ Non-UTF-8 characters
++ The JSON data type
++ Add primary key
++ Drop primary key
+
+## Features that are different from MySQL
+
+### Auto-increment ID
+
+The auto-increment ID feature in TiDB is only guaranteed to be incremental and unique; it is not guaranteed to be allocated sequentially. Currently, TiDB allocates IDs in batches, so if data is inserted into multiple TiDB servers simultaneously, the allocated IDs are not sequential.
+
+> **Warning**:
+>
+> If you use the auto-increment ID in a cluster with multiple TiDB servers, do not mix the default value and the custom value, because it reports an error in the following situation:
+>
+> In a cluster of two TiDB servers, namely TiDB A and TiDB B, TiDB A caches the auto-increment IDs [1,5000], while TiDB B caches the auto-increment IDs [5001,10000]. Use the following statement to create a table with an auto-increment ID:
+>
+> ```
+> create table t(id int unique key auto_increment, c int);
+> ```
+>
+> The statement is executed as follows:
+>
+> 1. The client inserts a record into TiDB B that sets the `id` to 1, and the statement is executed successfully.
+> 2. The client inserts a record into TiDB A that sets the `id` to the default value. TiDB A allocates 1 as the ID, so it returns `Duplicated Error`.
+
+### Built-in functions
+
+TiDB supports most of the MySQL built-in functions, but not all. See [TiDB SQL Grammar](https://pingcap.github.io/sqlgram/#FunctionCallKeyword) for the supported functions.
+
+### DDL
+
+TiDB implements the asynchronous schema change algorithm described in Google's F1. Data Manipulation Language (DML) operations are not blocked during DDL execution.
Currently, the supported DDL includes: + ++ Create Database ++ Drop Database ++ Create Table ++ Drop Table ++ Add Index: Does not support creating multiple indexes at the same time. ++ Drop Index ++ Add Column: + - Does not support creating multiple columns at the same time. + - Does not support setting a column as the primary key, creating a unique index, or specifying auto_increment while adding it. ++ Drop Column: Does not support dropping the primary key column or index column. ++ Alter Column ++ Change/Modify Column + - Supports changing/modifying the types among the following integer types: TinyInt, SmallInt, MediumInt, Int, BigInt. + - Supports changing/modifying the types among the following string types: Char, Varchar, Text, TinyText, MediumText, LongText. + - Supports changing/modifying the types among the following blob types: Blob, TinyBlob, MediumBlob, LongBlob. + - Supports changing the following type definitions: default value, comment, null, not null and OnUpdate, but does not support changing from null to not null. + - Supports parsing the `LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}` syntax, but no actual operation is performed. + +**Note:** The change/modify column operation cannot shorten the length of the original type, and it cannot change the unsigned/charset/collate attributes of the column. + ++ Truncate Table ++ Rename Table ++ Create Table Like + + +### Transaction + +TiDB implements an optimistic transaction model. Unlike MySQL, which uses row-level locking to avoid write conflicts, in TiDB write conflicts for statements like `Update`, `Insert`, and `Delete` are checked only in the `commit` process. + +**Note:** On the business side, remember to check the returned result of `commit`, because even if the statements execute without error, errors might still occur in the `commit` process. 
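The commit-time conflict check described above can be illustrated with a toy in-memory model. This is only a sketch of the optimistic idea (writes are buffered locally and conflicts surface at `commit`, which is why applications must check the commit result), not TiDB's actual Percolator-based implementation; all class and variable names here are made up for illustration:

```python
# Toy sketch of an optimistic transaction model: no locks are taken while
# statements execute; the conflict check happens only at commit time.

class ConflictError(Exception):
    pass

class Store:
    def __init__(self):
        self.data = {}      # key -> committed value
        self.versions = {}  # key -> commit version
        self.ts = 0         # logical timestamp counter

class Txn:
    def __init__(self, store):
        self.store = store
        self.start_ts = store.ts  # snapshot taken at transaction start
        self.writes = {}

    def put(self, key, value):
        # No lock here: the write is only buffered in the transaction.
        self.writes[key] = value

    def commit(self):
        # Conflict detection: fail if any written key was committed
        # by another transaction after this one started.
        for key in self.writes:
            if self.store.versions.get(key, -1) > self.start_ts:
                raise ConflictError(f"write conflict on {key!r}")
        self.store.ts += 1
        for key, value in self.writes.items():
            self.store.data[key] = value
            self.store.versions[key] = self.store.ts

store = Store()
t1, t2 = Txn(store), Txn(store)
t1.put("k", "from-t1")
t2.put("k", "from-t2")
t1.commit()           # succeeds
try:
    t2.commit()       # fails: "k" was committed after t2 started
except ConflictError as e:
    print(e)          # prints: write conflict on 'k'
```

Because the losing transaction only learns about the conflict at commit time, the application must be prepared to handle a failed `commit` (for example, by retrying).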
+ + +### Load data + ++ Syntax: + + ``` + LOAD DATA LOCAL INFILE 'file_name' INTO TABLE table_name + {FIELDS | COLUMNS} TERMINATED BY 'string' ENCLOSED BY 'char' ESCAPED BY 'char' + LINES STARTING BY 'string' TERMINATED BY 'string' + (col_name ...); + ``` +Currently, the supported `ESCAPED BY` characters are: `/\/\`. ++ Transaction + + While TiDB executes a load data operation, by default every 20,000 rows are committed as one transaction for persistent storage. If a load data operation inserts more than 20,000 rows, it is divided into multiple transactions to commit. If an error occurs in one transaction, that transaction is not committed, but the transactions before it are committed successfully. In this case, part of the load data operation is successfully inserted and the rest of the data insertion fails. MySQL, by contrast, treats a whole load data operation as a single transaction: one error causes the entire operation to fail. diff --git a/v1.0/sql/numeric-functions-and-operators.md b/v1.0/sql/numeric-functions-and-operators.md new file mode 100755 index 0000000000000..5520112695e23 --- /dev/null +++ b/v1.0/sql/numeric-functions-and-operators.md @@ -0,0 +1,54 @@ +--- +title: Numeric Functions and Operators +category: user guide +--- + +# Numeric Functions and Operators + +## Arithmetic operators + +| Name | Description | +|:----------------------------------------------------------------------------------------------|:----------------------------------| +| [`+`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_plus) | Addition operator | +| [`-`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_minus) | Minus operator | +| [`*`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_times) | Multiplication operator | +| [`/`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_divide) | Division operator | +| 
[`DIV`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_div) | Integer division | +| [`%`, `MOD`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_mod) | Modulo operator | +| [`-`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_unary-minus) | Change the sign of the argument | + + +## Mathematical functions + +| Name | Description | +|:----------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------| +| [`POW()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_pow) | Return the argument raised to the specified power | +| [`POWER()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_power) | Return the argument raised to the specified power | +| [`EXP()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_exp) | Raise to the power of | +| [`SQRT()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_sqrt) | Return the square root of the argument | +| [`LN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ln) | Return the natural logarithm of the argument | +| [`LOG()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_log) | Return the natural logarithm of the first argument | +| [`LOG2()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_log2) | Return the base-2 logarithm of the argument | +| [`LOG10()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_log10) | Return the base-10 logarithm of the argument | +| [`PI()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_pi) | Return the value of pi | +| [`TAN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_tan) | Return the tangent of the argument | +| 
[`COT()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_cot) | Return the cotangent | +| [`SIN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_sin) | Return the sine of the argument | +| [`COS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_cos) | Return the cosine | +| [`ATAN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_atan) | Return the arc tangent | +| [`ATAN2(), ATAN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_atan2) | Return the arc tangent of the two arguments | +| [`ASIN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_asin) | Return the arc sine | +| [`ACOS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_acos) | Return the arc cosine | +| [`RADIANS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_radians) | Return argument converted to radians | +| [`DEGREES()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_degrees) | Convert radians to degrees | +| [`MOD()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_mod) | Return the remainder | +| [`ABS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_abs) | Return the absolute value | +| [`CEIL()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceil) | Return the smallest integer value not less than the argument | +| [`CEILING()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceiling) | Return the smallest integer value not less than the argument | +| [`FLOOR()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_floor) | Return the largest integer value not greater than the argument | +| 
[`ROUND()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_round) | Round the argument | +| [`RAND()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_rand) | Return a random floating-point value | +| [`SIGN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_sign) | Return the sign of the argument | +| [`CONV()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_conv) | Convert numbers between different number bases | +| [`TRUNCATE()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_truncate) | Truncate to specified number of decimal places | +| [`CRC32()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_crc32) | Compute a cyclic redundancy check value | diff --git a/v1.0/sql/operators.md b/v1.0/sql/operators.md new file mode 100755 index 0000000000000..1c1f8d5ee7948 --- /dev/null +++ b/v1.0/sql/operators.md @@ -0,0 +1,127 @@ +# Operators + +- [Operator precedence](#operator-precedence) +- [Comparison functions and operators](#comparison-functions-and-operators) +- [Logical operators](#logical-operators) +- [Assignment operators](#assignment-operators) + +| Name | Description | +| ---------------------------------------- | ---------------------------------------- | +| [AND, &&](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_and) | Logical AND | +| [=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-equal) | Assign a value (as part of a [`SET`](https://dev.mysql.com/doc/refman/5.7/en/set-variable.html) statement, or as part of the `SET` clause in an [`UPDATE`](https://dev.mysql.com/doc/refman/5.7/en/update.html) statement) | +| [:=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-value) | Assign a value | +| [BETWEEN ... 
AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_between) | Check whether a value is within a range of values | +| [BINARY](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#operator_binary) | Cast a string to a binary string | +| [&](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-and) | Bitwise AND | +| [~](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-invert) | Bitwise inversion | +| [\|](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-or) | Bitwise OR | +| [^](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-xor) | Bitwise XOR | +| [CASE](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case) | Case operator | +| [DIV](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_div) | Integer division | +| [/](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_divide) | Division operator | +| [=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal) | Equal operator | +| [<=>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal-to) | NULL-safe equal to operator | +| [>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than) | Greater than operator | +| [>=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than-or-equal) | Greater than or equal operator | +| [IS](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is) | Test a value against a boolean | +| [IS NOT](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not) | Test a value against a boolean | +| [IS NOT NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not-null) | NOT NULL value test | +| [IS 
NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-null) | NULL value test | +| [->](https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-column-path) | Return value from JSON column after evaluating path; equivalent to `JSON_EXTRACT()` | +| [->>](https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-inline-path) | Return value from JSON column after evaluating path and unquoting the result; equivalent to `JSON_UNQUOTE(JSON_EXTRACT())` | +| [<<](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_left-shift) | Left shift | +| [<](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than) | Less than operator | +| [<=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than-or-equal) | Less than or equal operator | +| [LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_like) | Simple pattern matching | +| [-](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_minus) | Minus operator | +| [%, MOD](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_mod) | Modulo operator | +| [NOT, !](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_not) | Negates value | +| [NOT BETWEEN ... 
AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-between) | Check whether a value is not within a range of values | +| [!=, <>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-equal) | Not equal operator | +| [NOT LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_not-like) | Negation of simple pattern matching | +| [NOT REGEXP](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_not-regexp) | Negation of REGEXP | +| [\|\|, OR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_or) | Logical OR | +| [+](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_plus) | Addition operator | +| [REGEXP](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Pattern matching using regular expressions | +| [>>](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_right-shift) | Right shift | +| [RLIKE](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Synonym for REGEXP | +| [SOUNDS LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#operator_sounds-like) | Compare sounds | +| [*](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_times) | Multiplication operator | +| [-](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_unary-minus) | Change the sign of the argument | +| [XOR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_xor) | Logical XOR | + +## Operator precedence + +Operator precedences are shown in the following list, from highest precedence to the lowest. Operators that are shown together on a line have the same precedence. + +``` sql +INTERVAL +BINARY, COLLATE +! 
+- (unary minus), ~ (unary bit inversion) +^ +*, /, DIV, %, MOD +-, + +<<, >> +& +| += (comparison), <=>, >=, >, <=, <, <>, !=, IS, LIKE, REGEXP, IN +BETWEEN, CASE, WHEN, THEN, ELSE +NOT +AND, && +XOR +OR, || += (assignment), := +``` + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/operator-precedence.html). + +## Comparison functions and operators + +| Name | Description | +| ---------------------------------------- | ---------------------------------------- | +| [BETWEEN ... AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_between) | Check whether a value is within a range of values | +| [COALESCE()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_coalesce) | Return the first non-NULL argument | +| [=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal) | Equal operator | +| [<=>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal-to) | NULL-safe equal to operator | +| [>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than) | Greater than operator | +| [>=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than-or-equal) | Greater than or equal operator | +| [GREATEST()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_greatest) | Return the largest argument | +| [IN()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_in) | Check whether a value is within a set of values | +| [INTERVAL()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_interval) | Return the index of the argument that is less than the first argument | +| [IS](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is) | Test a value against a boolean | +| [IS NOT](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not) | Test a value against a boolean | +| [IS NOT 
NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not-null) | NOT NULL value test | +| [IS NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-null) | NULL value test | +| [ISNULL()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_isnull) | Test whether the argument is NULL | +| [LEAST()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_least) | Return the smallest argument | +| [<](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than) | Less than operator | +| [<=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than-or-equal) | Less than or equal operator | +| [LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_like) | Simple pattern matching | +| [NOT BETWEEN ... AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-between) | Check whether a value is not within a range of values | +| [!=, <>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-equal) | Not equal operator | +| [NOT IN()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_not-in) | Check whether a value is not within a set of values | +| [NOT LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_not-like) | Negation of simple pattern matching | +| [STRCMP()](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#function_strcmp) | Compare two strings | + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html). 
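One subtlety in the comparison table above is the NULL-safe equal operator `<=>`: unlike plain `=`, it never returns NULL. The difference can be modeled with a small Python sketch (using `None` to stand in for SQL NULL; the helper names are illustrative, not a TiDB API):

```python
# Toy model of the difference between SQL `=` and the NULL-safe `<=>`.

def sql_eq(a, b):
    """Plain `=`: any comparison involving NULL yields NULL (None here)."""
    if a is None or b is None:
        return None
    return 1 if a == b else 0

def null_safe_eq(a, b):
    """`<=>`: NULL <=> NULL is 1, NULL <=> x is 0; the result is never NULL."""
    if a is None and b is None:
        return 1
    if a is None or b is None:
        return 0
    return 1 if a == b else 0

print(sql_eq(None, None))        # None (SQL NULL)
print(null_safe_eq(None, None))  # 1
print(null_safe_eq(None, 5))     # 0
print(null_safe_eq(5, 5))        # 1
```

This is why `WHERE c = NULL` matches no rows while `WHERE c <=> NULL` matches rows where `c` is NULL.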
+ +## Logical operators + +| Name | Description | +| ---------------------------------------- | ------------- | +| [AND, &&](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_and) | Logical AND | +| [NOT, !](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_not) | Negates value | +| [\|\|, OR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_or) | Logical OR | +| [XOR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_xor) | Logical XOR | + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html). + +## Assignment operators + +| Name | Description | +| ---------------------------------------- | ---------------------------------------- | +| [=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-equal) | Assign a value (as part of a [`SET`](https://dev.mysql.com/doc/refman/5.7/en/set-variable.html) statement, or as part of the `SET` clause in an [`UPDATE`](https://dev.mysql.com/doc/refman/5.7/en/update.html) statement) | +| [:=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-value) | Assign a value | + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html). diff --git a/v1.0/sql/precision-math.md b/v1.0/sql/precision-math.md new file mode 100755 index 0000000000000..72d3364171e3f --- /dev/null +++ b/v1.0/sql/precision-math.md @@ -0,0 +1,138 @@ +--- +title: Precision Math +category: user guide +--- + +# Precision Math + +The precision math support in TiDB is consistent with MySQL. For more information, see [Precision Math in MySQL](https://dev.mysql.com/doc/refman/5.7/en/precision-math.html). + +## Numeric types + +The scope of precision math for exact-value operations includes the exact-value data types (integer and DECIMAL types) and exact-value numeric literals. 
Approximate-value data types and numeric literals are handled as floating-point numbers. + +Exact-value numeric literals have an integer part or fractional part, or both. They may be signed. Examples: `1`, `.2`, `3.4`, `-5`, `-6.78`, `+9.10`. + +Approximate-value numeric literals are represented in scientific notation (power-of-10) with a mantissa and exponent. Either or both parts may be signed. Examples: `1.2E3`, `1.2E-3`, `-1.2E3`, `-1.2E-3`. + +Two numbers that look similar might be treated differently. For example, `2.34` is an exact-value (fixed-point) number, whereas `2.34E0` is an approximate-value (floating-point) number. + +The DECIMAL data type is a fixed-point type and the calculations are exact. The FLOAT and DOUBLE data types are floating-point types and calculations are approximate. + +## DECIMAL data type characteristics + +This section discusses the following characteristics of the DECIMAL data type (and its synonyms): + +- Maximum number of digits +- Storage format +- Storage requirements + +The declaration syntax for a DECIMAL column is `DECIMAL(M,D)`. The ranges of values for the arguments are as follows: + +- M is the maximum number of digits (the precision). 1 <= M <= 65. +- D is the number of digits to the right of the decimal point (the scale). 0 <= D <= 30 and D must be no larger than M. + +The maximum value of 65 for M means that calculations on DECIMAL values are accurate up to 65 digits. This limit of 65 digits of precision also applies to exact-value numeric literals. + +Values for DECIMAL columns are stored using a binary format that packs 9 decimal digits into 4 bytes. The storage requirements for the integer and fractional parts of each value are determined separately. Each multiple of 9 digits requires 4 bytes, and any remaining digits left over require some fraction of 4 bytes. The storage required for remaining digits is given by the following table. 
+ +| Leftover Digits | Number of Bytes | +| --- | --- | +| 0 | 0 | +| 1–2 | 1 | +| 3–4 | 2 | +| 5–6 | 3 | +| 7–9 | 4 | + +For example, a `DECIMAL(18,9)` column has 9 digits on each side of the decimal point, so the integer part and the fractional part each require 4 bytes. A `DECIMAL(20,6)` column has 14 integer digits and 6 fractional digits. The integer digits require 4 bytes for 9 of the digits and 3 bytes for the remaining 5 digits. The 6 fractional digits require 3 bytes. + +DECIMAL columns do not store a leading `+` character or `-` character or leading `0` digits. If you insert `+0003.1` into a `DECIMAL(5,1)` column, it is stored as `3.1`. For negative numbers, a literal `-` character is not stored. + +DECIMAL columns do not permit values larger than the range implied by the column definition. For example, a `DECIMAL(3,0)` column supports a range of `-999` to `999`. A `DECIMAL(M,D)` column permits at most `M - D` digits to the left of the decimal point. + +For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/types/mydecimal.go) in the TiDB source code. + +## Expression handling + +For expressions with precision math, TiDB uses the exact-value numbers as given whenever possible. For example, numbers in comparisons are used exactly as given without a change in value. In strict SQL mode, if you insert a number into a column of an exact data type, the number is inserted with its exact value if it is within the column range. When retrieved, the value is the same as what was inserted. If strict SQL mode is not enabled, truncation for INSERT is permitted in TiDB. + +How to handle a numeric expression depends on the values of the expression: + +- If the expression contains any approximate values, the result is approximate. TiDB evaluates the expression using floating-point arithmetic. 
+- If the expression contains no approximate values (only exact values) and any exact value contains a fractional part, the expression is evaluated using exact DECIMAL arithmetic and has a precision of 65 digits. +- Otherwise, the expression contains only integer values. The expression is exact. TiDB evaluates the expression using integer arithmetic and has a precision the same as BIGINT (64 bits). + +If a numeric expression contains strings, the strings are converted to double-precision floating-point values and the result of the expression is approximate. + +Inserts into numeric columns are affected by the SQL mode. The following discussions mention strict mode and `ERROR_FOR_DIVISION_BY_ZERO`. To turn on all the restrictions, you can simply use the `TRADITIONAL` mode, which includes both strict mode values and `ERROR_FOR_DIVISION_BY_ZERO`: + +```sql +SET sql_mode = 'TRADITIONAL'; +``` + +If a number is inserted into an exact type column (DECIMAL or integer), it is inserted with its exact value if it is within the column range. For this number: +- If the value has too many digits in the fractional part, rounding occurs and a warning is generated. +- If the value has too many digits in the integer part, it is too large and is handled as follows: + - If strict mode is not enabled, the value is truncated to the nearest legal value and a warning is generated. + - If strict mode is enabled, an overflow error occurs. + +To insert strings into numeric columns, TiDB handles the conversion from string to number as follows if the string has nonnumeric contents: + +- In strict mode, a string (including an empty string) that does not begin with a number cannot be used as a number; an error or a warning occurs. +- A string that begins with a number can be converted, but the trailing nonnumeric portion is truncated. In strict mode, if the truncated portion contains anything other than spaces, an error or a warning occurs. 
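The string-to-number rules above can be sketched in Python. This is only an illustration of the described behavior, not TiDB's implementation; `to_number` is a hypothetical helper, and whether strict mode produces an error or a warning depends on the statement:

```python
import re

# Leading numeric prefix: optional sign, digits with optional fraction,
# optional exponent (matches the literal forms described earlier).
NUM_PREFIX = re.compile(r'^\s*[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?')

def to_number(s, strict=False):
    """Convert a string for insertion into a numeric column.

    Returns (value, warnings). In this sketch, strict mode raises
    ValueError where non-strict mode would only warn.
    """
    m = NUM_PREFIX.match(s)
    if not m:
        # String does not begin with a number.
        if strict:
            raise ValueError(f"invalid number: {s!r}")
        return 0.0, ["truncated incorrect value"]
    value = float(m.group(0))
    rest = s[m.end():]
    if rest.strip():
        # Trailing non-numeric portion is truncated.
        if strict:
            raise ValueError(f"truncated: {rest!r}")
        return value, ["data truncated"]
    return value, []

print(to_number("123abc"))  # (123.0, ['data truncated'])
print(to_number("abc"))     # (0.0, ['truncated incorrect value'])
print(to_number(" 12.5 "))  # (12.5, [])
```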
+ +By default, division by 0 returns NULL with no warning. By setting the SQL mode appropriately, division by 0 can be restricted. If you enable the `ERROR_FOR_DIVISION_BY_ZERO` SQL mode, TiDB handles division by 0 differently: + +- In strict mode, inserts and updates are prohibited, and an error occurs. +- If strict mode is not enabled, a warning occurs. + +In the following SQL statement: + +```sql +INSERT INTO t SET i = 1/0; +``` + +The following results are returned in different SQL modes: + +| `sql_mode` Value | Result | +| :--- | :--- | +| '' | No warning, no error; i is set to NULL. | +| strict | No warning, no error; i is set to NULL. | +| `ERROR_FOR_DIVISION_BY_ZERO` | Warning, no error; i is set to NULL. | +| strict, `ERROR_FOR_DIVISION_BY_ZERO` | Error; no row is inserted. | + + +## Rounding behavior + +The result of the `ROUND()` function depends on whether its argument is exact or approximate: + +- For exact-value numbers, the `ROUND()` function uses the “round half up” rule. +- For approximate-value numbers, the result in TiDB differs from that in MySQL: + + ```sql + TiDB > SELECT ROUND(2.5), ROUND(25E-1); + +------------+--------------+ + | ROUND(2.5) | ROUND(25E-1) | + +------------+--------------+ + | 3 | 3 | + +------------+--------------+ + 1 row in set (0.00 sec) + ``` + +For inserts into a DECIMAL or integer column, the rounding uses [round half away from zero](https://en.wikipedia.org/wiki/Rounding#Round_half_away_from_zero). 
+ +```sql +TiDB > CREATE TABLE t (d DECIMAL(10,0)); +Query OK, 0 rows affected (0.01 sec) + +TiDB > INSERT INTO t VALUES(2.5),(2.5E0); +Query OK, 2 rows affected, 2 warnings (0.00 sec) + +TiDB > SELECT d FROM t; ++------+ +| d | ++------+ +| 3 | +| 3 | ++------+ +2 rows in set (0.00 sec) +``` \ No newline at end of file diff --git a/v1.0/sql/prepare.md b/v1.0/sql/prepare.md new file mode 100755 index 0000000000000..00dfc0d31736e --- /dev/null +++ b/v1.0/sql/prepare.md @@ -0,0 +1,42 @@ +--- +title: Prepared SQL Statement Syntax +category: user guide +--- + +# Prepared SQL Statement Syntax + +TiDB supports server-side prepared statements, which reduce the overhead of statement parsing and query optimization and improve execution efficiency. You can use prepared statements in two ways: from application programs and with SQL statements. + +## Use application programs + +Most MySQL drivers support prepared statements, such as [MySQL Connector/C](https://dev.mysql.com/doc/connector-c/en/). You can call the prepared statement API directly through the binary protocol. + +## Use SQL statements + +You can also implement prepared statements using `PREPARE`, `EXECUTE` and `DEALLOCATE PREPARE`. This approach is less efficient than calling the API from an application program, but it requires no extra code. + +### `PREPARE` statement + +```sql +PREPARE stmt_name FROM preparable_stmt +``` + +The `PREPARE` statement preprocesses `preparable_stmt` (syntax parsing, semantic check, and query optimization) and names the result `stmt_name`. Later operations refer to it by this name. Prepared statements can be executed using the `EXECUTE` statement or released using the `DEALLOCATE PREPARE` statement. + +### `EXECUTE` statement + +```sql +EXECUTE stmt_name [USING @var_name [, @var_name] ...] +``` + +The `EXECUTE` statement executes the prepared statement named `stmt_name`. 
If the prepared statement contains parameters, use the user variable list in the `USING` clause to assign values to them. + +### `DEALLOCATE PREPARE` statement + +```sql +{DEALLOCATE | DROP} PREPARE stmt_name +``` + +The `DEALLOCATE PREPARE` statement releases the prepared statement created by `PREPARE`. + +For more information, see [MySQL Prepared Statement Syntax](https://dev.mysql.com/doc/refman/5.7/en/sql-syntax-prepared-statements.html). diff --git a/v1.0/sql/privilege.md b/v1.0/sql/privilege.md new file mode 100755 index 0000000000000..ca24ca038f9ef --- /dev/null +++ b/v1.0/sql/privilege.md @@ -0,0 +1,328 @@ +--- +title: Privilege Management +category: user guide +--- + +# Privilege Management + +## Privilege management overview + +TiDB's privilege management system is implemented according to the privilege management system in MySQL. It supports most of the MySQL syntax and privilege types. If you find any inconsistency with MySQL, feel free to [open an issue](https://github.com/pingcap/docs-cn/issues/new). + +## Examples + +### User account operation + +TiDB user account names consist of a user name and a host name. The account name syntax is `'user_name'@'host_name'`. + +- The `user_name` is case sensitive. +- The `host_name` can be a host name or an IP address. The `%` and `_` wildcard characters are permitted in host name or IP address values. For example, a host value of `'%'` matches any host name and `'192.168.1.%'` matches every host on a subnet. + +#### Create user + +The `CREATE USER` statement creates new user accounts. + +```sql +create user 'test'@'127.0.0.1' identified by 'xxx'; +``` + +If the host name is not specified, you can log in from any IP address. 
If the password is not specified, it is empty by default: + +```sql +create user 'test'; +``` + +This is equivalent to: + +```sql +create user 'test'@'%' identified by ''; +``` + +**Required Privilege:** To use `CREATE USER`, you must have the global `CREATE USER` privilege. + +#### Change the password + +You can use the `SET PASSWORD` syntax to assign or modify the password of a user account. + +```sql +set password for 'root'@'%' = 'xxx'; +``` + +**Required Privilege:** Operations that assign or modify passwords are permitted only to users with the `CREATE USER` privilege. + +#### Drop user + +The `DROP USER` statement removes one or more MySQL accounts and their privileges. It removes the user record entries in the `mysql.user` table and the privilege rows for the account from all grant tables. + +```sql +drop user 'test'@'%'; +``` +**Required Privilege:** To use `DROP USER`, you must have the global `CREATE USER` privilege. + +#### Reset the root password + +If you forget the root password, you can skip the privilege system and use the root privilege to reset the password. + +To reset the root password: + +1. Start TiDB with a special startup option (root privilege required): + + ```bash + sudo ./tidb-server -skip-grant-table=true + ``` + +2. Use the root account to log in and reset the password: + + ```bash + mysql -h 127.0.0.1 -P 4000 -u root + ``` + +### Privilege-related operations + +#### Grant privileges + +The `GRANT` statement grants privileges to user accounts. + +For example, use the following statement to grant the `xxx` user the privilege to read the `test` database: + +```sql +grant Select on test.* to 'xxx'@'%'; +``` + +Use the following statement to grant the `xxx` user all privileges on all databases: + +``` +grant all privileges on *.* to 'xxx'@'%'; +``` + +If the granted user does not exist, TiDB automatically creates the user. 
+ +``` +mysql> select * from mysql.user where user='xxxx'; +Empty set (0.00 sec) + +mysql> grant all privileges on test.* to 'xxxx'@'%' identified by 'yyyyy'; +Query OK, 0 rows affected (0.00 sec) + +mysql> select user,host from mysql.user where user='xxxx'; ++------|------+ +| user | host | ++------|------+ +| xxxx | % | ++------|------+ +1 row in set (0.00 sec) +``` + +In this example, `xxxx@%` is the user that is automatically created. + +> **Note:** Granting privileges to a database or table does not check if the database or table exists. + +``` +mysql> select * from test.xxxx; +ERROR 1146 (42S02): Table 'test.xxxx' doesn't exist + +mysql> grant all privileges on test.xxxx to xxxx; +Query OK, 0 rows affected (0.00 sec) + +mysql> select user,host from mysql.tables_priv where user='xxxx'; ++------|------+ +| user | host | ++------|------+ +| xxxx | % | ++------|------+ +1 row in set (0.00 sec) +``` + +You can use fuzzy matching to grant privileges to databases and tables. + +``` +mysql> grant all privileges on `te%`.* to genius; +Query OK, 0 rows affected (0.00 sec) + +mysql> select user,host,db from mysql.db where user='genius'; ++--------|------|-----+ +| user | host | db | ++--------|------|-----+ +| genius | % | te% | ++--------|------|-----+ +1 row in set (0.00 sec) +``` + +In this example, because of the `%` in `te%`, all the databases starting with `te` are granted the privilege. + +#### Revoke privileges + +The `REVOKE` statement enables system administrators to revoke privileges from user accounts. + +The syntax of the `REVOKE` statement mirrors that of the `GRANT` statement: + +```sql +revoke all privileges on `test`.* from 'genius'@'localhost'; +``` + +> **Note:** To revoke privileges, you need an exact match.
If the matching result cannot be found, an error will be displayed: + + ``` + mysql> revoke all privileges on `te%`.* from 'genius'@'%'; + ERROR 1141 (42000): There is no such grant defined for user 'genius' on host '%' + ``` + +The following examples illustrate fuzzy matching, escaping, strings, and identifiers: + +```sql +mysql> grant all privileges on `te\%`.* to 'genius'@'localhost'; +Query OK, 0 rows affected (0.00 sec) +``` + +This example uses an exact match to find the database named `te%`. Note that the `%` is preceded by the `\` escape character so that `%` is not treated as a wildcard. + +A string is enclosed in single quotation marks (''), while an identifier is enclosed in backticks (``). See the differences below: + +```sql +mysql> grant all privileges on 'test'.* to 'genius'@'localhost'; +ERROR 1064 (42000): You have an error in your SQL syntax; check the +manual that corresponds to your MySQL server version for the right +syntax to use near ''test'.* to 'genius'@'localhost'' at line 1 + +mysql> grant all privileges on `test`.* to 'genius'@'localhost'; +Query OK, 0 rows affected (0.00 sec) +``` + +If you want to use reserved keywords as table names, enclose them in backticks (``). For example: + +```sql +mysql> create table `select` (id int); +Query OK, 0 rows affected (0.27 sec) +``` + +#### Check privileges granted to user + +You can use the `SHOW GRANTS` statement to see what privileges are granted to a user. + +```sql +show grants for 'root'@'%'; +``` + +To be more precise, you can check the privilege information in the grant tables. For example, you can use the following steps to check if the `test@%` user has the `Insert` privilege on `db1.t`: + +1. Check if `test@%` has the global `Insert` privilege: + + ```sql + select Insert_priv from mysql.user where user='test' and host='%'; + ``` + +2. If not, check if `test@%` has the database-level `Insert` privilege on `db1`: + + ```sql + select Insert_priv from mysql.db where user='test' and host='%' and db='db1'; + ``` + +3.
If the result is still empty, check whether `test@%` has the table-level `Insert` privilege on `db1.t`: + + ```sql + select Table_priv from mysql.tables_priv where user='test' and host='%' and db='db1' and table_name='t'; + ``` + +### Implementation of the privilege system + +#### Grant table + +The following system tables are special because all the privilege-related data is stored in them: + +- mysql.user (user account, global privilege) +- mysql.db (database-level privilege) +- mysql.tables_priv (table-level privilege) +- mysql.columns_priv (column-level privilege) + +These tables contain the effective range and privilege information of the data. For example, in the `mysql.user` table: + +```sql +mysql> select User,Host,Select_priv,Insert_priv from mysql.user limit 1; ++------|------|-------------|-------------+ +| User | Host | Select_priv | Insert_priv | ++------|------|-------------|-------------+ +| root | % | Y | Y | ++------|------|-------------|-------------+ +1 row in set (0.00 sec) +``` + +In this record, `Host` and `User` determine that the connection request sent by the `root` user from any host (`%`) can be accepted. `Select_priv` and `Insert_priv` mean that the user has the global `Select` and `Insert` privileges. The effective range in the `mysql.user` table is global. + +`Host` and `User` in `mysql.db` determine which databases users can access. The effective range is the database. + +In theory, all privilege-related operations can be done directly by CRUD operations on the grant tables. + +On the implementation level, only a layer of syntactic sugar is added. For example, you can use the following command to remove a user: + +``` +delete from mysql.user where user='test'; +``` + +However, it is not recommended to manually modify the grant tables. + +#### Connection verification + +When the client sends a connection request, the TiDB server verifies the login operation. It first checks the `mysql.user` table.
If a record of `User` and `Host` matches the connection request, the TiDB server then verifies the `Password`. + +User identity is based on two pieces of information: `Host`, the host that initiates the connection, and `User`, the user name. If the user name is not empty, the user name must match exactly. + +`User`+`Host` may match several rows in the `user` table. To deal with this scenario, the rows in the `user` table are sorted. When the client connects, the table rows are checked one by one; the first matching row is used for verification. When sorting, `Host` is ranked before `User`. + +#### Request verification + +When the connection is successful, the request verification process checks whether the operation has the required privilege. + +For database-related requests (INSERT, UPDATE), the request verification process first checks the user’s global privileges in the `mysql.user` table. If the privilege is granted, access is allowed directly. If not, the `mysql.db` table is checked. + +The `user` table has global privileges regardless of the default database. For example, the `DELETE` privilege in `user` can apply to any row, table, or database. + +In the `Db` table, an empty `User` value matches the anonymous user. Wildcards are not allowed in the `User` column. The values of the `Host` and `Db` columns can use the `%` and `_` wildcards, which support pattern matching. + +Data in the `user` and `db` tables is also sorted when loaded into memory. + +The use of `%` in `tables_priv` and `columns_priv` is similar, but the column values of `Db`, `Table_name` and `Column_name` cannot contain `%`. The sorting is also similar when the data is loaded. + +#### Time of effect + +When TiDB starts, some privilege-check tables are loaded into memory, and then the cached data is used to verify the privileges. The system periodically synchronizes the grant tables from the database to the cache. The time of effect is determined by the synchronization cycle. Currently, the value is 5 minutes.
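The caching behavior above can be seen in a hypothetical session (the account and privilege below are illustrative):

```sql
-- Modify a grant table directly instead of using GRANT (not recommended)
update mysql.user set Select_priv='Y' where user='test' and host='%';
-- New connections may not observe this change for up to 5 minutes,
-- until the cached privilege data is synchronized from the grant tables.
```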
+ +If an immediate effect is needed when you modify the grant tables, you can run the following command: + +```sql +flush privileges +``` + +### Limitations and constraints + +Currently, the following privileges are not checked yet because they are less frequently used: + +- FILE +- USAGE +- SHUTDOWN +- EXECUTE +- PROCESS +- INDEX +- ... + +**Note:** The column-level privilege is not implemented at this stage. + +## `Create User` statement + +```sql +CREATE USER [IF NOT EXISTS] + user [auth_spec] [, user [auth_spec]] ... +auth_spec: { + IDENTIFIED BY 'auth_string' + | IDENTIFIED BY PASSWORD 'hash_string' +} +``` + +For more information about the user account, see [TiDB user account management](user-account-management.md). + +- IDENTIFIED BY `auth_string` + + When you set the login password, `auth_string` is encrypted by TiDB and stored in the `mysql.user` table. + +- IDENTIFIED BY PASSWORD `hash_string` + + When you set the login password, `hash_string` is encrypted by TiDB and stored in the `mysql.user` table. Currently, this is not the same as MySQL, where `hash_string` is stored as the password hash itself. diff --git a/v1.0/sql/schema-object-names.md b/v1.0/sql/schema-object-names.md new file mode 100755 index 0000000000000..4113e4825efc3 --- /dev/null +++ b/v1.0/sql/schema-object-names.md @@ -0,0 +1,77 @@ +--- +title: Schema Object Names +category: user guide +--- + +# Schema Object Names + +Some object names in TiDB, including database, table, index, column, and alias names, are known as identifiers. + +In TiDB, you can quote or unquote an identifier. If an identifier contains special characters or is a reserved word, you must quote it whenever you refer to it. To quote, use the backtick (\`) to wrap the identifier.
For example: + +```sql +mysql> SELECT * FROM `table` WHERE `table`.id = 20; +``` + +If the `ANSI_QUOTES` SQL mode is enabled, you can also quote identifiers within double quotation marks ("): + +```sql +mysql> CREATE TABLE "test" (a varchar(10)); +ERROR 1105 (HY000): line 0 column 19 near " (a varchar(10))" (total length 35) + +mysql> SET SESSION sql_mode='ANSI_QUOTES'; +Query OK, 0 rows affected (0.00 sec) + +mysql> CREATE TABLE "test" (a varchar(10)); +Query OK, 0 rows affected (0.09 sec) +``` + +The quote characters can be included within an identifier. If the character to be included is the same as the one used to quote the identifier itself, double the character. For example, the following statement creates a table named a\`b: + +```sql +mysql> CREATE TABLE `a``b` (a int); +``` + +In a `SELECT` statement, a quoted column alias can be specified using either identifier or string quoting characters: + +```sql +mysql> SELECT 1 AS `identifier`, 2 AS 'string'; ++------------+--------+ +| identifier | string | ++------------+--------+ +| 1 | 2 | ++------------+--------+ +1 row in set (0.00 sec) +``` + +For more information, see [MySQL Schema Object Names](https://dev.mysql.com/doc/refman/5.7/en/identifiers.html). + +## Identifier qualifiers + +Object names can be unqualified or qualified. For example, the following statement creates a table using the unqualified name `t`: + +```sql +CREATE TABLE t (i int); +``` + +If there is no default database, the `ERROR 1046 (3D000): No database selected` error is displayed. You can also use the qualified name `test.t`: + +```sql +CREATE TABLE test.t (i int); +``` + +The qualifier character is a separate token and need not be contiguous with the associated identifiers. For example, there can be whitespace around `.`, and `table_name.col_name` and `table_name . col_name` are equivalent.
+ +To quote this identifier, use: + +```sql +`table_name`.`col_name` +``` + +instead of: + +```sql +`table_name.col_name` +``` + +For more information, see [MySQL Identifier Qualifiers](https://dev.mysql.com/doc/refman/5.7/en/identifier-qualifiers.html). + diff --git a/v1.0/sql/server-command-option.md b/v1.0/sql/server-command-option.md new file mode 100755 index 0000000000000..3e4573a918c34 --- /dev/null +++ b/v1.0/sql/server-command-option.md @@ -0,0 +1,222 @@ +--- +title: The TiDB Command Options +category: user guide +--- + +# The TiDB Command Options + +## TiDB startup options + +When you start TiDB processes, you can specify some program options. + +TiDB supports many startup options. Run the following command to get a brief introduction: + +``` +./tidb-server --help +``` + +Run the following command to get the version: + +``` +./tidb-server -V +``` + +The complete descriptions of startup options are as follows. + +### -L + +- Log level +- Default: "info" +- Optional values: debug, info, warn, error, or fatal + +### -P + +- TiDB service listening port +- Default: "4000" +- TiDB uses this port to accept requests from the MySQL client + +### \-\-binlog-socket + +- TiDB uses the unix socket file to accept internal connections, such as the PUMP service. +- Default: "" +- For example, use "/tmp/pump.sock" to accept the PUMP unix socket file communication. + +### \-\-config + +- TiDB configuration files +- Default: "" +- The file path of the configuration files + +### \-\-lease + +- The lease time of schema; unit: second +- Default: "10" +- The schema lease is mainly used in online schema changes. This value affects the actual execution time of the DDL statement. In most cases, you do not need to change this value unless you clearly understand the internal implementation mechanism of TiDB DDL. + +### \-\-host + +- TiDB service listening host +- Default: "0.0.0.0" +- The TiDB service listens on this host. +- The default 0.0.0.0 listens on the addresses of all network cards.
You can specify the network card that provides external service, such as 192.168.100.113. + +### \-\-log-file + +- Log file +- Default: "" +- If the option is not set, the log is output to "stderr"; if set, the log is output to the corresponding file. In the small hours of every day, the log automatically rotates to a new file, and the previous file is renamed and kept as a backup. + +### \-\-metrics-addr + +- The address of Prometheus Push Gateway +- Default: "" +- If the option value is empty, TiDB does not push the statistics to Push Gateway. The option format is like `--metrics-addr=192.168.100.115:9091`. + +### \-\-metrics-interval + +- The time interval at which the statistics are pushed to Prometheus Push Gateway +- Default: 15s +- If you set the option value to 0, the statistics are not pushed to Push Gateway. `--metrics-interval=2` means the statistics are pushed to Push Gateway every two seconds. + +### \-\-path + +- For the local storage engines such as "goleveldb" or "BoltDB", `path` specifies the actual data storage path. +- For the "memory" storage engine, it is not necessary to set `path`. +- For the "TiKV" storage engine, `path` specifies the actual PD address. For example, if the PD is deployed on 192.168.100.113:2379, 192.168.100.114:2379 and 192.168.100.115:2379, the `path` is "192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379". + +### \-\-report-status + +- Enable (true) or disable (false) the status monitor port +- Default: true +- The value is either true or false. The `true` value means opening the status monitor port. The `false` value means closing it. The status monitor port is used to report internal service information externally. + +### \-\-run-ddl + +- Whether the TiDB server runs DDL statements; set this option when there are two or more TiDB servers in the cluster +- Default: true +- The value is either true or false. The `true` value means the TiDB server runs DDL statements.
The `false` value means the TiDB server does not run DDL statements. + +### \-\-socket string + +- TiDB uses the unix socket file to accept the external connection. +- Default: "" +- For example, use "/tmp/tidb.sock" to open the unix socket file. + +### \-\-status + +- The status monitor port of TiDB +- Default: "10080" +- This port is used to display the internal data of TiDB, including the [Prometheus statistics](https://prometheus.io/) and [pprof](https://golang.org/pkg/net/http/pprof/). +- Access the Prometheus statistics at http://host:status_port/metrics. +- Access the pprof data at http://host:status_port/debug/pprof. + +### \-\-store + +- To specify the storage engine used by the bottom layer of TiDB +- Default: "mocktikv" +- Optional values: "memory", "goleveldb", "boltdb", "mocktikv" or "tikv" (TiKV is a distributed storage engine, while the others are local storage engines) +- For example, use `tidb-server --store=memory` to start a TiDB server with a pure memory engine + +## TiDB server configuration files + +When you start the TiDB server, you can specify the server's configuration file using `--config path`. For overlapped options in configuration, the priority of command options is higher than configuration files. + +See [an example of the configuration file](https://github.com/pingcap/tidb/blob/master/config/config.toml.example). + +The complete descriptions of startup options are as follows. + +### host + +Same as the "host" startup option + +### port + +Same as the "P" startup option + +### path + +Same as the "path" startup option + +### socket + +Same as the "socket" startup option + +### binlog-socket + +Same as the "binlog-socket" startup option + +### run-ddl + +Same as the "run-ddl" startup option + +### cross-join + +- Default: true +- When you execute `join` on tables without any conditions on both sides, the statement can be run by default. But if you set the value to `false`, the server does not run such `join` statement. 
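As an illustration of the `cross-join` option (the table names below are hypothetical), a join without any condition on either side is refused when the value is `false`:

```sql
select * from t1, t2;                      -- no join condition: refused if cross-join is false
select * from t1, t2 where t1.id = t2.id;  -- has a join condition: runs as usual
```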
+ +### join-concurrency + +- The number of goroutines used when TiDB runs `join` +- Default: 5 +- Set this value according to the amount of data and its distribution; a larger value generally performs better, but also requires more CPU + +### query-log-max-len + +- The maximum length of SQL statements recorded in the log +- Default: 2048 +- Overlong requests are truncated when they are output to the log + +### slow-threshold int + +- SQL statements whose execution time exceeds this value are recorded in the log +- Default: 300 +- The value must be an integer (int); unit: millisecond + +### slow-query-file + +- The slow query log file +- Default: "" +- The value is the file name. If a non-empty string is specified, the slow query log is redirected to the corresponding file. + +### retry-limit + +- The maximum number of commit retries when a transaction encounters a conflict +- Default: 10 +- Setting a large number of retries can affect the performance of the TiDB cluster + +### skip-grant-table + +- Allow anyone to connect without a password, and skip all privilege checks +- Default: false +- The value is either true or false. The machine's root privilege is required to enable this option, which is used to reset the password when it is forgotten. + +### stats-lease + +- The interval at which TiDB incrementally scans tables and analyzes the data amount and indexes +- Default: "3s" +- Statistics are updated automatically and persisted in TiKV, which takes up some memory. If you set the value to "0", statistics are not updated automatically and you need to manually run `analyze table <name>`. + +### tcp-keep-alive + +- Enable keepalive in the TCP layer of TiDB +- Default: false + +### ssl-cert + +- The file path of the SSL certificate in PEM format +- Default: "" +- If this option and the `--ssl-key` option are set at the same time, the client can (but is not required to) connect securely to TiDB using TLS.
+- If the specified certificate or private key is invalid, TiDB starts as usual but does not support encrypted connections. + +### ssl-key + +- The file path of SSL certificate keys in PEM format, or the private keys specified by `--ssl-cert` +- Default: "" +- Currently, you cannot load a password-protected private key in TiDB. + +### ssl-ca + +- The file path of the trusted CA certificate in PEM format +- Default: "" +- If this option and the `--ssl-cert`, `--ssl-key` options are set at the same time, TiDB authenticates the client certificate based on the trusted CA list specified by the option when the client presents the certificate. If the authentication fails, the connection stops. +- If this option is set but the client does not present the certificate, the encrypted connection continues but the client certificate is not authenticated. diff --git a/v1.0/sql/statistics.md b/v1.0/sql/statistics.md new file mode 100755 index 0000000000000..647b68ecee297 --- /dev/null +++ b/v1.0/sql/statistics.md @@ -0,0 +1,128 @@ +--- +title: Introduction to Statistics +category: user guide +--- + +# Introduction to Statistics + +Based on the statistics, the TiDB optimizer chooses the most efficient query execution plan. The statistics collect table-level and column-level information. The statistics of a table include the total number of rows and the number of updated rows. The statistics of a column include the number of different values, the number of `NULL`, and the histogram of the column. + +## Collect statistics + +### Manual collection + +You can run the `ANALYZE` statement to collect statistics. + +Syntax: + +```sql +ANALYZE TABLE TableNameList +> The statement collects statistics of all the tables in `TableNameList`. + +ANALYZE TABLE TableName INDEX IndexNameList +> The statement collects statistics of the index columns on all `IndexNameList` in `TableName`. 
+``` + +### Automatic update + +For the `INSERT`, `DELETE`, or `UPDATE` statements, TiDB automatically updates the number of rows and updated rows. TiDB persists this information regularly and the update cycle is 5 * `stats-lease`. The default value of `stats-lease` is `3s`. If you specify the value as `0`, it does not update automatically. + +### Control `ANALYZE` concurrency + +When you run the `ANALYZE` statement, you can adjust the concurrency using the following parameters, to control its effect on the system. + +#### `tidb_build_stats_concurrency` + +Currently, when you run the `ANALYZE` statement, the task is divided into multiple small tasks. Each task only works on one column or index. You can use the `tidb_build_stats_concurrency` parameter to control the number of simultaneous tasks. The default value is `4`. + +#### `tidb_distsql_scan_concurrency` + +When you analyze regular columns, you can use the `tidb_distsql_scan_concurrency` parameter to control the number of Region to be read at one time. The default value is `10`. + +#### `tidb_index_serial_scan_concurrency` + +When you analyze index columns, you can use the `tidb_index_serial_scan_concurrency` parameter to control the number of Region to be read at one time. The default value is `1`. + +## View statistics + +You can view the statistics status using the following statements. + +### Metadata of tables + +You can use the `SHOW STATS_META` statement to view the total number of rows and the number of updated rows. + +Syntax: + +```sql +SHOW STATS_META [ShowLikeOrWhere] +> The statement returns the total number of rows and the number of updated rows. You can use `ShowLikeOrWhere` to filter the information you need. 
+``` + +Currently, the `SHOW STATS_META` statement returns the following 5 columns: + +| Syntax Element | Description | +| :-------- | :------------- | +| `db_name` | database name | +| `table_name` | table name | +| `update_time` | the time of the update | +| `modify_count` | the number of modified rows | +| `row_count` | the total number of rows | + +### Metadata of columns + +You can use the `SHOW STATS_HISTOGRAMS` statement to view the number of different values and the number of `NULL` in all the columns. + +Syntax: + +```sql +SHOW STATS_HISTOGRAMS [ShowLikeOrWhere] +> The statement returns the number of different values and the number of `NULL` in all the columns. You can use `ShowLikeOrWhere` to filter the information you need. +``` + +Currently, the `SHOW STATS_HISTOGRAMS` statement returns the following 7 columns: + +| Syntax Element | Description | +| :-------- | :------------- | +| `db_name` | database name | +| `table_name` | table name | +| `column_name` | column name | +| `is_index` | whether it is an index column or not | +| `update_time` | the time of the update | +| `distinct_count` | the number of different values | +| `null_count` | the number of `NULL` | + +### Buckets of histogram + +You can use the `SHOW STATS_BUCKETS` statement to view each bucket of the histogram. + +Syntax: + +```sql +SHOW STATS_BUCKETS [ShowLikeOrWhere] +> The statement returns information about all the buckets. You can use `ShowLikeOrWhere` to filter the information you need. 
+ +``` + +Currently, the `SHOW STATS_BUCKETS` statement returns the following 9 columns: + +| Syntax Element | Description | +| :-------- | :------------- | +| `db_name` | database name | +| `table_name` | table name | +| `column_name` | column name | +| `is_index` | whether it is an index column or not | +| `bucket_id` | the ID of a bucket | +| `count` | the number of values that fall in this bucket and all previous buckets | +| `repeats` | the number of occurrences of the maximum value | +| `lower_bound` | the minimum value | +| `upper_bound` | the maximum value | + +## Delete statistics + +You can run the `DROP STATS` statement to delete statistics. + +Syntax: + +```sql +DROP STATS TableName +> The statement deletes statistics of all the tables in `TableName`. +``` \ No newline at end of file diff --git a/v1.0/sql/string-functions.md b/v1.0/sql/string-functions.md new file mode 100755 index 0000000000000..a50d4f1df532b --- /dev/null +++ b/v1.0/sql/string-functions.md @@ -0,0 +1,75 @@ +--- +title: String Functions +category: user guide +--- + + +# String Functions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------| +| [`ASCII()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ascii) | Return numeric value of left-most character | +| [`CHAR()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_char) | Return the character for each integer passed | +| [`BIN()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_bin) | Return a string containing binary representation of a number | +| [`HEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_hex) | Return a hexadecimal representation of a decimal or string value | +| 
[`OCT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_oct) | Return a string containing octal representation of a number | +| [`UNHEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_unhex) | Return a string containing hex representation of a number | +| [`TO_BASE64()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_to-base64) | Return the argument converted to a base-64 string | +| [`FROM_BASE64()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_from-base64) | Decode to a base-64 string and return result | +| [`LOWER()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lower) | Return the argument in lowercase | +| [`LCASE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lcase) | Synonym for LOWER() | +| [`UPPER()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_upper) | Convert to uppercase | +| [`UCASE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ucase) | Synonym for UPPER() | +| [`LPAD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lpad) | Return the string argument, left-padded with the specified string | +| [`RPAD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_rpad) | Append string the specified number of times | +| [`TRIM()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_trim) | Remove leading and trailing spaces | +| [`LTRIM()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ltrim) | Remove leading spaces | +| [`RTRIM()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_rtrim) | Remove trailing spaces | +| [`BIT_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_bit-length) | Return length of argument in bits | +| 
[`CHAR_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_char-length) | Return number of characters in argument | +| [`CHARACTER_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_character-length) | Synonym for CHAR_LENGTH() | +| [`LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_length) | Return the length of a string in bytes | +| [`OCTET_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_octet-length) | Synonym for LENGTH() | +| [`INSERT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_insert) | Insert a substring at the specified position up to the specified number of characters | +| [`REPLACE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_replace) | Replace occurrences of a specified string | +| [`SUBSTR()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substr) | Return the substring as specified | +| [`SUBSTRING()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substring) | Return the substring as specified | +| [`SUBSTRING_INDEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substring-index) | Return a substring from a string before the specified number of occurrences of the delimiter | +| [`MID()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_mid) | Return a substring starting from the specified position | +| [`LEFT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_left) | Return the leftmost number of characters as specified | +| [`RIGHT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_right) | Return the specified rightmost number of characters | +| [`INSTR()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_instr) | Return the index of the first occurrence of substring | +| 
[`LOCATE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_locate) | Return the position of the first occurrence of substring | +| [`POSITION()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_position) | Synonym for LOCATE() | +| [`REPEAT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_repeat) | Repeat a string the specified number of times | +| [`CONCAT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat) | Return concatenated string | +| [`CONCAT_WS()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws) | Return concatenate with separator | +| [`REVERSE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_reverse) | Reverse the characters in a string | +| [`SPACE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_space) | Return a string of the specified number of spaces | +| [`FIELD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_field) | Return the index (position) of the first argument in the subsequent arguments | +| [`ELT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_elt) | Return string at index number | +| [`EXPORT_SET()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_export-set) | Return a string such that for every bit set in the value bits, you get an on string and for every unset bit, you get an off string | +| [`MAKE_SET()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_make-set) | Return a set of comma-separated strings that have the corresponding bit in bits set | +| [`FIND_IN_SET()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_find-in-set) | Return the index position of the first argument within the second argument | +| [`FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_format) | Return a number formatted 
to specified number of decimal places | +| [`ORD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ord) | Return character code for leftmost character of the argument | +| [`QUOTE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_quote) | Escape the argument for use in an SQL statement | +| [`SOUNDEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_soundex) | Return a soundex string | +| [`SOUNDS LIKE`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#operator_sounds-like) | Compare sounds | + +## String comparison functions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------| +| [`LIKE`](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_like) | Simple pattern matching | +| [`NOT LIKE`](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_not-like) | Negation of simple pattern matching | +| [`STRCMP()`](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#function_strcmp) | Compare two strings | +| [`MATCH`](https://dev.mysql.com/doc/refman/5.7/en/fulltext-search.html#function_match) | Perform full-text search | + +## Regular expressions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------| +| [`REGEXP`](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Pattern matching using regular expressions | +| [`RLIKE`](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Synonym for REGEXP | +| [`NOT 
REGEXP`](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_not-regexp) | Negation of REGEXP | diff --git a/v1.0/sql/system-database.md b/v1.0/sql/system-database.md new file mode 100755 index 0000000000000..3b008cf78f205 --- /dev/null +++ b/v1.0/sql/system-database.md @@ -0,0 +1,255 @@ +--- +title: The TiDB System Database +category: user guide +--- + +# The TiDB System Database + +Similar to MySQL, the TiDB System Database contains tables that store information required by the server when it runs. + +## Grant system tables + +These system tables contain grant information about user accounts and their privileges: + +- `user`: user accounts, global privileges, and other non-privilege columns +- `db`: database-level privileges +- `tables_priv`: table-level privileges +- `columns_priv`: column-level privileges + +## Server-side help system tables + +Currently, the `help_topic` table is NULL. + +## Statistics system tables + +- `stats_buckets`: the buckets of statistics +- `stats_histograms`: the histograms of statistics +- `stats_meta`: the meta information of tables, such as the total number of rows and updated rows + +## GC worker system tables + +- `gc_delete_range`: records the data to be deleted + +## Miscellaneous system tables + +- `GLOBAL_VARIABLES`: the global system variable table +- `tidb`: records the version information when TiDB executes `bootstrap` + +## INFORMATION\_SCHEMA tables + +To be compatible with MySQL, TiDB supports INFORMATION\_SCHEMA tables. Some third-party software queries information in these tables. Currently, most INFORMATION\_SCHEMA tables in TiDB are NULL. + +### CHARACTER\_SETS table + +The CHARACTER\_SETS table provides information about character sets, but it contains only dummy data. By default, TiDB only supports utf8mb4.
+ +```sql +mysql> select * from CHARACTER_SETS; ++--------------------|----------------------|-----------------------|--------+ +| CHARACTER_SET_NAME | DEFAULT_COLLATE_NAME | DESCRIPTION | MAXLEN | ++--------------------|----------------------|-----------------------|--------+ +| ascii | ascii_general_ci | US ASCII | 1 | +| binary | binary | Binary pseudo charset | 1 | +| latin1 | latin1_swedish_ci | cp1252 West European | 1 | +| utf8 | utf8_general_ci | UTF-8 Unicode | 3 | +| utf8mb4 | utf8mb4_general_ci | UTF-8 Unicode | 4 | ++--------------------|----------------------|-----------------------|--------+ +5 rows in set (0.00 sec) +``` + +### COLLATIONS table + +The COLLATIONS table is similar to the CHARACTER\_SETS table. + +### COLLATION\_CHARACTER\_SET\_APPLICABILITY table + +NULL. + +### COLUMNS table + +The COLUMNS table provides information about columns in tables. The information in this table is not accurate. To query information, it is recommended to use the `SHOW` statement: + +```sql +SHOW COLUMNS FROM table_name [FROM db_name] [LIKE 'wild'] +``` + +### COLUMNS\_PRIVILEGE table + +NULL. + +### ENGINES table + +The ENGINES table provides information about storage engines. But it contains dummy data only. In the production environment, use the TiKV engine for TiDB. + +### EVENTS table + +NULL. + +### FILES table + +NULL. + +### GLOBAL\_STATUS table + +NULL. + +### GLOBAL\_VARIABLES table + +NULL. + +### KEY\_COLUMN\_USAGE table + +The KEY_COLUMN_USAGE table describes the key constraints of the columns, such as the primary key constraint. + +### OPTIMIZER\_TRACE table + +NULL. + +### PARAMETERS table + +NULL. + +### PARTITIONS table + +NULL. + +### PLUGINS table + +NULL. + +### PROFILING table + +NULL. + +### REFERENTIAL\_CONSTRAINTS table + +NULL. + +### ROUTINES table + +NULL. + +### SCHEMATA table + +The SCHEMATA table provides information about databases. The table data is equivalent to the result of the `SHOW DATABASES` statement. 
+ +```sql +mysql> select * from SCHEMATA; ++--------------|--------------------|----------------------------|------------------------|----------+ +| CATALOG_NAME | SCHEMA_NAME | DEFAULT_CHARACTER_SET_NAME | DEFAULT_COLLATION_NAME | SQL_PATH | ++--------------|--------------------|----------------------------|------------------------|----------+ +| def | INFORMATION_SCHEMA | utf8 | utf8_bin | NULL | +| def | mysql | utf8 | utf8_bin | NULL | +| def | PERFORMANCE_SCHEMA | utf8 | utf8_bin | NULL | +| def | test | utf8 | utf8_bin | NULL | ++--------------|--------------------|----------------------------|------------------------|----------+ +4 rows in set (0.00 sec) +``` + +### SCHEMA\_PRIVILEGES table + +NULL. + +### SESSION\_STATUS table + +NULL. + +### SESSION\_VARIABLES table + +The SESSION\_VARIABLES table provides information about session variables. The table data is similar to the result of the `SHOW SESSION VARIABLES` statement. + +### STATISTICS table + +The STATISTICS table provides information about table indexes. 
+ +```sql +mysql> desc statistics; ++---------------|---------------------|------|------|---------|-------+ +| Field | Type | Null | Key | Default | Extra | ++---------------|---------------------|------|------|---------|-------+ +| TABLE_CATALOG | varchar(512) | YES | | NULL | | +| TABLE_SCHEMA | varchar(64) | YES | | NULL | | +| TABLE_NAME | varchar(64) | YES | | NULL | | +| NON_UNIQUE | varchar(1) | YES | | NULL | | +| INDEX_SCHEMA | varchar(64) | YES | | NULL | | +| INDEX_NAME | varchar(64) | YES | | NULL | | +| SEQ_IN_INDEX | bigint(2) UNSIGNED | YES | | NULL | | +| COLUMN_NAME | varchar(21) | YES | | NULL | | +| COLLATION | varchar(1) | YES | | NULL | | +| CARDINALITY | bigint(21) UNSIGNED | YES | | NULL | | +| SUB_PART | bigint(3) UNSIGNED | YES | | NULL | | +| PACKED | varchar(10) | YES | | NULL | | +| NULLABLE | varchar(3) | YES | | NULL | | +| INDEX_TYPE | varchar(16) | YES | | NULL | | +| COMMENT | varchar(16) | YES | | NULL | | +| INDEX_COMMENT | varchar(1024) | YES | | NULL | | ++---------------|---------------------|------|------|---------|-------+ +``` + +The following statements are equivalent: + +```sql +SELECT * FROM INFORMATION_SCHEMA.STATISTICS + WHERE table_name = 'tbl_name' + AND table_schema = 'db_name' + +SHOW INDEX + FROM tbl_name + FROM db_name +``` + +### TABLES table + +The TABLES table provides information about tables in databases. + +The following statements are equivalent: + +```sql +SELECT table_name FROM INFORMATION_SCHEMA.TABLES + WHERE table_schema = 'db_name' + [AND table_name LIKE 'wild'] + +SHOW TABLES + FROM db_name + [LIKE 'wild'] +``` + +### TABLESPACES table + +NULL. + +### TABLE\_CONSTRAINTS table + +The TABLE_CONSTRAINTS table describes which tables have constraints. + +- The `CONSTRAINT_TYPE` value can be UNIQUE, PRIMARY KEY, or FOREIGN KEY. +- The UNIQUE and PRIMARY KEY information is similar to the result of the `SHOW INDEX` statement. + +### TABLE\_PRIVILEGES table + +NULL. + +### TRIGGERS table + +NULL. 
+ +### USER\_PRIVILEGES table + +The USER_PRIVILEGES table provides information about global privileges. This information comes from the mysql.user grant table. + +```sql +mysql> desc USER_PRIVILEGES; ++----------------|--------------|------|------|---------|-------+ +| Field | Type | Null | Key | Default | Extra | ++----------------|--------------|------|------|---------|-------+ +| GRANTEE | varchar(81) | YES | | NULL | | +| TABLE_CATALOG | varchar(512) | YES | | NULL | | +| PRIVILEGE_TYPE | varchar(64) | YES | | NULL | | +| IS_GRANTABLE | varchar(3) | YES | | NULL | | ++----------------|--------------|------|------|---------|-------+ +4 rows in set (0.00 sec) +``` + +### VIEWS table + +NULL. Currently, TiDB does not support views. diff --git a/v1.0/sql/tidb-server.md b/v1.0/sql/tidb-server.md new file mode 100755 index 0000000000000..3725c45fd7e4f --- /dev/null +++ b/v1.0/sql/tidb-server.md @@ -0,0 +1,36 @@ +--- +title: The TiDB Server +category: user guide +--- + +# The TiDB Server + +## TiDB service + +TiDB refers to the TiDB database management system. This document describes the basic management functions of the TiDB cluster. + +## TiDB cluster startup configuration + +You can set the service parameters using the command line or the configuration file, or both. The priority of the command line parameters is higher than the configuration file. If the same parameter is set in both ways, TiDB uses the value set using command line parameters. For more information, see [The TiDB Command Options](server-command-option.md). + +## TiDB system variable + +TiDB is compatible with MySQL system variables, and defines some unique system variables to adjust the database behavior. For more information, see [The Proprietary System Variables and Syntaxes in TiDB](tidb-specific.md). + +## TiDB system table + +Similar to MySQL, TiDB also has system tables that store the information needed when TiDB runs. For more information, see [The TiDB System Database](system-database.md). 
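As a quick illustration, the system tables mentioned above can be queried from any MySQL client. The following is a sketch; the exact rows returned vary with the TiDB version:

```sql
-- Version and bootstrap information recorded by TiDB
SELECT * FROM mysql.tidb;

-- Inspect a few entries of the global system variable table
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM mysql.GLOBAL_VARIABLES
LIMIT 5;
```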
+ +## TiDB data directory + +The TiDB data is stored in the storage engine and the data directory depends on the storage engine used. For more information about how to choose the storage engine, see the [TiDB startup parameters document](../op-guide/configuration.md#store). + +When you use the local storage engine, the data is stored on the local hard disk and the directory location is controlled by the [`path`](../op-guide/configuration.md#path) parameter. + +When you use the TiKV storage engine, the data is stored on the TiKV node and the directory location is controlled by the [`data-dir`](../op-guide/configuration.md#data-dir-1) parameter. + +## TiDB server logs + +The three components of the TiDB cluster (`tidb-server`, `tikv-server`, and `pd-server`) output their logs to standard error by default. In each of the three components, you can set the [`--log-file`](../op-guide/configuration.md#--log-file) parameter (or the corresponding configuration item in the configuration file) to redirect the log output to a file. + +You can adjust the log behavior using the configuration file. For more details, see the configuration file description of each component. For example, the [`tidb-server` log configuration item](https://github.com/pingcap/tidb/blob/master/config/config.toml.example#L46). diff --git a/v1.0/sql/tidb-specific.md b/v1.0/sql/tidb-specific.md new file mode 100755 index 0000000000000..0b03d686d5e50 --- /dev/null +++ b/v1.0/sql/tidb-specific.md @@ -0,0 +1,54 @@ +--- +title: The Proprietary System Variables and Syntaxes in TiDB +category: user guide +--- + +# The Proprietary System Variables and Syntaxes in TiDB + +On the basis of MySQL variables and syntaxes, TiDB has defined some specific system variables and syntaxes to optimize performance.
+ +## System variable + +Variables can be set with the `SET` statement, for example: + +```sql +SET @@tidb_distsql_scan_concurrency = 10; +``` + +If you need to set the global variable, run: + +```sql +SET @@global.tidb_distsql_scan_concurrency = 10; +``` + +### tidb_distsql_scan_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 10 + +This variable is used to set the concurrency of the `scan` operation. Use a larger value in OLAP scenarios, and a smaller value in OLTP scenarios. For OLAP scenarios, the maximum value cannot exceed the number of CPU cores of all the TiKV nodes. + +### tidb_index_lookup_size + +- Scope: SESSION | GLOBAL +- Default value: 20000 + +This variable is used to set the batch size of the `index lookup` operation. Use a larger value in OLAP scenarios, and a smaller value in OLTP scenarios. + +### tidb_index_lookup_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 4 + +This variable is used to set the concurrency of the `index lookup` operation. Use a larger value in OLAP scenarios, and a smaller value in OLTP scenarios. + +### tidb_index_serial_scan_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 1 + +This variable is used to set the concurrency of the `serial scan` operation. Use a larger value in OLAP scenarios, and a smaller value in OLTP scenarios. + +## Optimizer hint + +On the basis of MySQL’s optimizer hint syntax, TiDB adds some proprietary hint syntaxes. When a hint is used, the TiDB optimizer tries to use the specified algorithm, which performs better than the default algorithm in some scenarios. + +The hint syntax is included in comments like `/*+ xxx */`. In MySQL client versions earlier than 5.7.7, such comments are stripped by default. If you want to use hints in these earlier versions, add the `--comments` option when starting the client. For example: `mysql -h 127.0.0.1 -P 4000 -uroot --comments`.
+ +### TIDB_SMJ(t1, t2) + +```sql +SELECT /*+ TIDB_SMJ(t1, t2) */ * FROM t1, t2 WHERE t1.id = t2.id; +``` + +This hint tells the optimizer to use the `Sort Merge Join` algorithm. This algorithm takes up less memory, but takes longer to execute. It is recommended when the data size is very large or the system memory is insufficient. + +### TIDB_INLJ(t1, t2) + +```sql +SELECT /*+ TIDB_INLJ(t1, t2) */ * FROM t1, t2 WHERE t1.id = t2.id; +``` + +This hint tells the optimizer to use the `Index Nested Loop Join` algorithm. In some scenarios, this algorithm runs faster and takes up fewer system resources, but it may be slower and take up more system resources in other scenarios. You can try this algorithm in scenarios where the result set has fewer than 10,000 rows after the outer table is filtered by the WHERE condition. The parameters in `TIDB_INLJ()` are the candidate driving (outer) tables for generating the query plan. That is, `TIDB_INLJ(t1)` only considers plans that use `t1` as the driving table. diff --git a/v1.0/sql/time-zone.md b/v1.0/sql/time-zone.md new file mode 100755 index 0000000000000..ad213984a76a3 --- /dev/null +++ b/v1.0/sql/time-zone.md @@ -0,0 +1,64 @@ +--- +title: Time Zone +category: user guide +--- + +# Time Zone + +The time zone in TiDB is decided by the global `time_zone` system variable and the session `time_zone` system variable. The initial value of `time_zone` is 'SYSTEM', which indicates that the server time zone is the same as the system time zone. + +You can use the following statement to set the global server `time_zone` value at runtime: + +```sql +mysql> SET GLOBAL time_zone = timezone; +``` + +Each client has its own time zone setting, given by the session `time_zone` variable.
Initially, the session variable takes its value from the global `time_zone` variable, but the client can change its own time zone with this statement: + +```sql +mysql> SET time_zone = timezone; +``` + +You can use the following statement to view the current values of the global and client-specific time zones: + +```sql +mysql> SELECT @@global.time_zone, @@session.time_zone; +``` + +The value of `time_zone` can be set in the following formats: + +- The value 'SYSTEM' indicates that the time zone should be the same as the system time zone. +- The value can be given as a string indicating an offset from UTC, such as '+10:00' or '-6:00'. +- The value can be given as a named time zone, such as 'Europe/Helsinki', 'US/Eastern', or 'MET'. + +The current session time zone setting affects the display and storage of time values that are zone-sensitive. This includes the values displayed by functions such as `NOW()` and `CURTIME()`. + +> **Note**: Only the values of the Timestamp data type are affected by time zone. This is because the Timestamp data type uses the literal value + time zone information. Other data types, such as Datetime/Date/Time, do not carry time zone information, so their values are not affected by changes of time zone. + +```sql +mysql> create table t (ts timestamp, dt datetime); +Query OK, 0 rows affected (0.02 sec) + +mysql> set @@time_zone = 'UTC'; +Query OK, 0 rows affected (0.00 sec) + +mysql> insert into t values ('2017-09-30 11:11:11', '2017-09-30 11:11:11'); +Query OK, 1 row affected (0.00 sec) + +mysql> set @@time_zone = '+8:00'; +Query OK, 0 rows affected (0.00 sec) + +mysql> select * from t; ++---------------------|---------------------+ +| ts | dt | ++---------------------|---------------------+ +| 2017-09-30 19:11:11 | 2017-09-30 11:11:11 | ++---------------------|---------------------+ +1 row in set (0.00 sec) +``` + +In this example, no matter how you adjust the value of the time zone, the value of the Datetime data type is not affected.
But the displayed value of the Timestamp data type changes if the time zone information changes. In fact, the value stored in the storage does not change; it is just displayed differently according to different time zone settings. + +> **Note**: +> +> - Time zone is involved during the conversion of the values of Timestamp and Datetime, which is handled based on the current `time_zone` of the session. +> - For data migration, you need to pay special attention to the time zone settings of the master database and the slave database. \ No newline at end of file diff --git a/v1.0/sql/transaction-isolation.md b/v1.0/sql/transaction-isolation.md new file mode 100755 index 0000000000000..837d1fb7ae9fd --- /dev/null +++ b/v1.0/sql/transaction-isolation.md @@ -0,0 +1,75 @@ +--- +title: TiDB Transaction Isolation Levels +category: user guide +--- + +# TiDB Transaction Isolation Levels + +Transaction isolation is one of the foundations of database transaction processing. Isolation is the I in the acronym ACID (Atomicity, Consistency, Isolation, Durability), which represents the isolation property of database transactions. + +The SQL-92 standard defines four levels of transaction isolation: Read Uncommitted, Read Committed, Repeatable Read, and Serializable. See the following table for details: + +| Isolation Level | Dirty Read | Nonrepeatable Read | Phantom Read | Serialization Anomaly | +| ---------------- | ------------ | ------------------ | --------------------- | --------------------- | +| Read Uncommitted | Possible | Possible | Possible | Possible | +| Read Committed | Not possible | Possible | Possible | Possible | +| Repeatable Read | Not possible | Not possible | Not possible in TiDB | Possible | +| Serializable | Not possible | Not possible | Not possible | Not possible | + +TiDB offers two transaction isolation levels: Read Committed and Repeatable Read. + +TiDB uses the [Percolator transaction model](https://research.google.com/pubs/pub36726.html).
A global read timestamp is obtained when the transaction is started, and a global commit timestamp is obtained when the transaction is committed. The execution order of transactions is determined based on these timestamps. To learn more about the implementation of the TiDB transaction model, see [MVCC in TiKV](https://pingcap.com/blog/2016-11-17-mvcc-in-tikv/). + +Use the following command to set the transaction isolation level: + +``` +SET SESSION TRANSACTION ISOLATION LEVEL [read committed|repeatable read] +``` + +## Repeatable Read + +Repeatable Read is the default transaction isolation level in TiDB. The Repeatable Read isolation level only sees data committed before the transaction begins, and it never sees either uncommitted data or changes committed by concurrent transactions during transaction execution. However, the transaction statement does see the effects of previous updates executed within its own transaction, even though they are not yet committed. + +For transactions running on different nodes, the start and commit order depends on the order in which the timestamps are obtained from PD. + +Transactions at the Repeatable Read isolation level cannot concurrently update the same row. When committing, if a transaction finds that the row has been updated by another transaction after it started, the transaction rolls back and retries automatically. For example: + +``` +create table t1(id int); +insert into t1 values(0); + +start transaction; | start transaction; +select * from t1; | select * from t1; +update t1 set id=id+1; | update t1 set id=id+1; +commit; | + | commit; -- roll back and retry automatically +``` + +### Difference between TiDB and ANSI Repeatable Read + +The Repeatable Read isolation level in TiDB differs from the ANSI Repeatable Read isolation level, though they share the same name.
According to the standard described in the [A Critique of ANSI SQL Isolation Levels](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf) paper, TiDB implements the snapshot isolation level, which does not allow phantom reads but allows write skews. In contrast, the ANSI Repeatable Read isolation level allows phantom reads but does not allow write skews. + +### Difference between TiDB and MySQL Repeatable Read + +The Repeatable Read isolation level in TiDB differs from that in MySQL. The MySQL Repeatable Read isolation level does not check whether the current version is visible when updating, which means it can continue to update a row even if the row has been updated after the transaction starts. In contrast, if the row has been updated after the transaction starts, the TiDB transaction is rolled back and retried. Transaction retries in TiDB might fail, leading to a final failure of the transaction, while in MySQL the updating transaction can succeed. + +The MySQL Repeatable Read isolation level is not snapshot isolation. The consistency of the MySQL Repeatable Read isolation level is weaker than both snapshot isolation and the TiDB Repeatable Read isolation level. + +## Read Committed + +The Read Committed isolation level differs from the Repeatable Read isolation level. Read Committed only guarantees that uncommitted data cannot be read. + +**Note:** Because the transaction commit is a dynamic process, the Read Committed isolation level might read data committed by only part of a transaction. It is not recommended to use the Read Committed isolation level in a database that requires strict consistency. + +## Transaction retry + +For `insert`, `delete`, and `update` operations, if a transaction fails in a way that the system considers retryable, the transaction is automatically retried within the system. + +You can control the number of retries by configuring the `retry-limit` parameter: + +``` +[performance] +...
+# The maximum number of retries when committing a transaction. +retry-limit = 10 +``` diff --git a/v1.0/sql/transaction.md b/v1.0/sql/transaction.md new file mode 100755 index 0000000000000..8033dd31eaa35 --- /dev/null +++ b/v1.0/sql/transaction.md @@ -0,0 +1,77 @@ +--- +title: Transactions +category: user guide +--- + +# Transactions + +TiDB supports distributed transactions. The statements that relate to transactions include the `autocommit` variable, `START TRANSACTION`/`BEGIN`, `COMMIT`, and `ROLLBACK`. + +## Autocommit + +Syntax: + +```sql +SET autocommit = {0 | 1} +``` + +If you set the value of `autocommit` to 1, the current Session is in autocommit mode. If you set the value of `autocommit` to 0, the current Session is in non-autocommit mode. The value of `autocommit` is 1 by default. + +In autocommit mode, the updates are automatically committed to the database after you run each statement. Otherwise, the updates are only committed when you run the `COMMIT` or `BEGIN` statement. + +In addition, `autocommit` is also a system variable. You can update the current Session value or the Global value using the following variable assignment statements: + +```sql +SET @@SESSION.autocommit = {0 | 1}; +SET @@GLOBAL.autocommit = {0 | 1}; +``` + +## START TRANSACTION, BEGIN + +Syntax: + +```sql +BEGIN; + +START TRANSACTION; + +START TRANSACTION WITH CONSISTENT SNAPSHOT; +``` + +All three statements above explicitly start a new transaction. If the current Session is already in the middle of a transaction, the current transaction is committed before the new transaction starts. + +## COMMIT + +Syntax: + +```sql +COMMIT; +``` + +This statement is used to commit the current transaction, including all the updates between `BEGIN` and `COMMIT`.
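For example, a minimal explicit transaction, using a hypothetical `accounts` table, might look like this:

```sql
CREATE TABLE accounts (id INT PRIMARY KEY, balance INT);
INSERT INTO accounts VALUES (1, 100), (2, 100);

BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;  -- both updates become visible to other sessions atomically
```

If anything goes wrong between `BEGIN` and `COMMIT`, issuing `ROLLBACK` instead discards both updates.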
+ +## ROLLBACK + +Syntax: + +```sql +ROLLBACK; +``` + +This statement is used to roll back the current transaction and cancel all the updates made since `BEGIN`. + +## Explicit and implicit transaction + +TiDB supports explicit transactions (`BEGIN/COMMIT`) and implicit transactions (`SET autocommit = 1`). + +If you set the value of `autocommit` to 1 and start a new transaction through `BEGIN`, autocommit is disabled until `COMMIT`/`ROLLBACK`, which makes the transaction explicit. + +For DDL statements, the transaction is committed automatically and rollback is not supported. If you run a DDL statement while the current Session is in the middle of a transaction, the DDL statement is run after the current transaction is committed. + +## Transaction isolation level + +TiDB uses `SNAPSHOT ISOLATION` by default. You can set the isolation level of the current Session to `READ COMMITTED` using the following statement: + +```sql +SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED; +``` diff --git a/v1.0/sql/type-conversion-in-expression-evaluation.md b/v1.0/sql/type-conversion-in-expression-evaluation.md new file mode 100755 index 0000000000000..698c9ed6517da --- /dev/null +++ b/v1.0/sql/type-conversion-in-expression-evaluation.md @@ -0,0 +1,8 @@ +--- +title: Type Conversion in Expression Evaluation +category: user guide +--- + +# Type Conversion in Expression Evaluation + +TiDB behaves the same as MySQL: [https://dev.mysql.com/doc/refman/5.7/en/type-conversion.html](https://dev.mysql.com/doc/refman/5.7/en/type-conversion.html) diff --git a/v1.0/sql/understanding-the-query-execution-plan.md b/v1.0/sql/understanding-the-query-execution-plan.md new file mode 100755 index 0000000000000..f95b7c819de72 --- /dev/null +++ b/v1.0/sql/understanding-the-query-execution-plan.md @@ -0,0 +1,87 @@ +--- +title: Understand the Query Execution Plan +category: user guide +--- + +# Understand the Query Execution Plan + +Based on the details of your tables, the
TiDB optimizer chooses the most efficient query execution plan, which consists of a series of operators. This document details the execution plan information returned by the `EXPLAIN` statement in TiDB. + +## Optimize SQL statements using `EXPLAIN` + +The result of the `EXPLAIN` statement provides information about how TiDB executes SQL queries: + +- `EXPLAIN` works together with `SELECT`, `DELETE`, `INSERT`, `REPLACE`, and `UPDATE`. +- When you run the `EXPLAIN` statement, TiDB returns the final physical execution plan for the SQL statement that follows `EXPLAIN`. In other words, `EXPLAIN` displays the complete information about how TiDB executes the SQL statement, such as the order in which operators are executed, how tables are joined, and what the expression tree looks like. For more information, see [`EXPLAIN` output format](#explain-output-format). +- TiDB does not currently support `EXPLAIN [options] FOR CONNECTION connection_id`; support is planned. For more information, see [#4351](https://github.com/pingcap/tidb/issues/4351). + +The results of `EXPLAIN` shed light on how to index the data tables so that the execution plan can use the indexes to speed up the execution of SQL statements. You can also use `EXPLAIN` to check whether the optimizer chooses the optimal order to join tables. + +## `EXPLAIN` output format + +Currently, the `EXPLAIN` statement returns the following six columns: id, parent, children, task, operator info, and count. Each operator in the execution plan is described by these six properties. In the results returned by `EXPLAIN`, each row describes an operator. See the following table for details: + +| Property Name | Description | +| -----| ------------- | +| id | The id of an operator, which uniquely identifies the operator in the entire execution plan. | +| parent | The parent of an operator. The current execution plan is like a tree structure composed of operators.
The data flows from a child to its parent, and each operator has one and only one parent. | +| children | The children, that is, the data sources, of an operator | +| task | The task that the current operator belongs to. The current execution plan contains two types of tasks: 1) the **root** task that runs on the TiDB server; 2) the **cop** task that runs concurrently on the TiKV server. The topological relation of the current execution plan at the task level is that a root task can be followed by many cop tasks. The root task uses the output of the cop tasks as its input. The cop tasks execute the work that TiDB pushes down to TiKV. Each cop task scatters in the TiKV cluster and is executed by multiple processes. | +| operator info | The details about each operator. The information differs from operator to operator, see [Operator Info](#operator-info). | +| count | The estimated number of data items that the current operator outputs, based on the statistics and the execution logic of the operator | + +## Overview + +### Introduction to task + +Currently, the calculation tasks of TiDB fall into two different types: the cop task and the root task. The cop task refers to a computing task that is pushed down to the KV side and executed in a distributed manner. The root task refers to a computing task that is executed at a single point in TiDB. One of the goals of SQL optimization is to push the calculation down to the KV side as much as possible. + +### Table data and index data + +The table data in TiDB refers to the raw data of a table, which is stored in TiKV. For each row of the table data, its key is a 64-bit integer called the Handle ID. If a table has an int type primary key, the value of the primary key is taken as the Handle ID of the table data; otherwise, the system automatically generates the Handle ID. The value of the table data is the encoding of all the data in the row. When the table data is read, the results are returned in increasing order of Handle ID.
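To make the Handle ID concrete, here is a rough sketch for a hypothetical table; the literal key format shown in the comments is an internal detail and is simplified here:

```sql
-- `id` is an int primary key, so TiDB uses it directly as the Handle ID.
CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR(20), age INT);
INSERT INTO person VALUES (1, 'Tom', 30);

-- The row is stored in TiKV roughly as:
--   Key:   t<table_id>_r1            -- 1 is the Handle ID
--   Value: encoding of ('Tom', 30)
-- A full table scan returns rows in increasing Handle ID order.
```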
+ +Similar to the table data, the index data in TiDB is also stored in TiKV. The key of the index data is an ordered sequence of bytes encoded from the index columns, and the value is the Handle ID of the corresponding row. You can use the Handle ID to read the non-index columns of that row. When the index data is read, the results are returned in increasing order of the index columns. In the case of multiple index columns, the results are ordered by the first column, and rows whose first i columns are equal are ordered by the (i + 1)-th column. + +### Range query + +In the WHERE/HAVING/ON conditions, the optimizer analyzes the results returned by primary key or index key queries. It can use comparison operators on number and date types (greater than, less than, equal to, greater than or equal to, less than or equal to) and the LIKE operator on character types to build query ranges. + +TiDB only supports comparisons in which one side is a column and the other side is a constant, or an expression that can be calculated as a constant. Query conditions like `year(birth_day) < 1992` cannot use the index. In addition, try to compare values of the same type, so that additional cast operations do not prevent the index from being used. For example, in `user_id = 123456`, if `user_id` is a string, you need to write `123456` as a string constant. + +Combining range query conditions on the same column with `AND` and `OR` is equivalent to taking the intersection or union of the ranges. For multidimensional combined indexes, you can write the conditions for multiple columns. For example, in the `(a, b, c)` combined index, when `a` is an equality query, you can continue to calculate the query range of `b`; when `b` is also an equality query, you can continue to calculate the query range of `c`; otherwise, if `a` is a non-equality query, you can only calculate the query range of `a`. + +## Operator info + +### TableReader and TableScan + +TableScan refers to scanning the table data at the KV side.
TableReader refers to reading the table data from TiKV at the TiDB side. TableReader and TableScan are the two operators of one function. The `table` represents the table name in SQL statements. If the table is renamed, it displays the new name. The `range` represents the range of scanned data. If the WHERE/HAVING/ON condition is not specified in the query, full table scan is executed. If the range query condition is specified on the int type primary keys, range query is executed. The `keep order` indicates whether the table scan is returned in order. + +### IndexReader and IndexLookUp + +The index data in TiDB is read in two ways: 1) IndexReader represents reading the index columns directly from the index, which is used when only index related columns or primary keys are quoted in SQL statements; 2) IndexLookUp represents filtering part of the data from the index, returning only the Handle ID, and retrieving the table data again using Handle ID. In the second way, data is retrieved twice from TiKV. The way of reading index data is automatically selected by the optimizer. + +Similar to TableScan, IndexScan is the operator to read index data in the KV side. The `table` represents the table name in SQL statements. If the table is renamed, it displays the new name. The `index` represents the index name. The `range` represents the range of scanned data. The `out of order` indicates whether the index scan is returned in order. In TiDB, the primary key composed of multiple columns or non-int columns is treated as the unique index. + +### Selection + +Selection represents the selection conditions in SQL statements, usually used in WHERE/HAVING/ON clause. + +### Projection + +Projection corresponds to the `SELECT` list in SQL statements, used to map the input data into new output data. + +### Aggregation + +Aggregation corresponds to `Group By` in SQL statements, or the aggregate functions if the `Group By` statement does not exist, such as the `COUNT` or `SUM` function. 
TiDB supports two aggregation algorithms: Hash Aggregation and Stream Aggregation. Hash Aggregation is a hash-based aggregation algorithm. If the Hash Aggregation operator is adjacent to the read operator of a Table or Index, the aggregation operator pre-aggregates in TiKV to improve the concurrency and reduce the network load.
+
+### Join
+
+TiDB supports Inner Join and Left/Right Outer Join, and automatically converts outer joins that can be simplified into Inner Joins.
+
+TiDB supports three Join algorithms: Hash Join, Sort Merge Join, and Index Lookup Join. Hash Join pre-loads the smaller table involved in the join into memory and then reads all the data of the larger table to perform the join. Sort Merge Join reads the data of the two tables at the same time and compares it row by row, using the order information of the input data. Index Lookup Join reads data from the outer table and executes primary key or index key lookups on the inner table.
+
+### Apply
+
+Apply is the operator used to describe subqueries in TiDB. Its behavior is similar to Nested Loop: the Apply operator retrieves one row from the outer table, substitutes it into the correlated columns of the inner table, and then performs the join according to the inner Join algorithm in Apply.
+
+Generally, the Apply operator is automatically converted to a Join operation by the query optimizer. Therefore, try to avoid the Apply operator when you write SQL statements.
diff --git a/v1.0/sql/user-account-management.md b/v1.0/sql/user-account-management.md
new file mode 100755
index 0000000000000..c4fec959690c8
--- /dev/null
+++ b/v1.0/sql/user-account-management.md
@@ -0,0 +1,92 @@
+---
+title: TiDB User Account Management
+category: user guide
+---
+
+# TiDB User Account Management
+
+## User names and passwords
+
+TiDB stores user accounts in the `mysql.user` table of the system database. Each account is identified by a user name and the client host.
Each account may have a password.
+
+You can connect to the TiDB server using the MySQL client, and log in with the specified account and password:
+
+```bash
+shell> mysql --port 4000 --user xxx --password
+```
+
+Or use the abbreviated command-line options:
+
+```bash
+shell> mysql -P 4000 -u xxx -p
+```
+
+## Add user accounts
+
+You can create TiDB accounts in two ways:
+
+- By using the standard account-management SQL statements intended for creating accounts and establishing their privileges, such as `CREATE USER` and `GRANT`.
+- By manipulating the grant tables directly with statements such as `INSERT`, `UPDATE`, or `DELETE`.
+
+It is recommended to use the account-management statements, because manipulating the grant tables directly can lead to incomplete updates. You can also create accounts by using third-party GUI tools.
+
+The following example uses the `CREATE USER` and `GRANT` statements to set up four accounts:
+
+```sql
+mysql> CREATE USER 'finley'@'localhost' IDENTIFIED BY 'some_pass';
+mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'localhost' WITH GRANT OPTION;
+mysql> CREATE USER 'finley'@'%' IDENTIFIED BY 'some_pass';
+mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'%' WITH GRANT OPTION;
+mysql> CREATE USER 'admin'@'localhost' IDENTIFIED BY 'admin_pass';
+mysql> GRANT RELOAD,PROCESS ON *.* TO 'admin'@'localhost';
+mysql> CREATE USER 'dummy'@'localhost';
+```
+
+To see the privileges for an account, use `SHOW GRANTS`:
+
+```sql
+mysql> SHOW GRANTS FOR 'admin'@'localhost';
++-----------------------------------------------------+
+| Grants for admin@localhost                          |
++-----------------------------------------------------+
+| GRANT RELOAD, PROCESS ON *.* TO 'admin'@'localhost' |
++-----------------------------------------------------+
+```
+
+## Remove user accounts
+
+To remove a user account, use the `DROP USER` statement:
+
+```sql
+mysql> DROP USER 'jeffrey'@'localhost';
+```
+
+## Reserved user accounts
+
+TiDB creates the `'root'@'%'`
default account during the database initialization.
+
+## Set account resource limits
+
+Currently, TiDB does not support setting account resource limits.
+
+## Assign account passwords
+
+TiDB stores passwords in the `mysql.user` table of the `mysql` system database. Operations that assign or update passwords are permitted only to users with the `CREATE USER` privilege, or, alternatively, privileges for the `mysql` database (`INSERT` privilege to create new accounts, `UPDATE` privilege to update existing accounts).
+
+To assign a password when you create a new account, use `CREATE USER` and include an `IDENTIFIED BY` clause:
+
+```sql
+CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass';
+```
+
+To assign or change a password for an existing account, use `SET PASSWORD FOR` or `ALTER USER`:
+
+```sql
+SET PASSWORD FOR 'root'@'%' = 'xxx';
+```
+
+Or:
+
+```sql
+ALTER USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass';
+```
diff --git a/v1.0/sql/user-defined-variables.md b/v1.0/sql/user-defined-variables.md
new file mode 100755
index 0000000000000..2879219c103ca
--- /dev/null
+++ b/v1.0/sql/user-defined-variables.md
@@ -0,0 +1,130 @@
+---
+title: User-Defined Variables
+category: user guide
+---
+
+# User-Defined Variables
+
+The format of a user-defined variable is `@var_name`, where `var_name` consists of alphanumeric characters, `_`, and `$`. User-defined variable names are case-insensitive.
+
+User-defined variables are session-specific, which means a user variable defined by one client cannot be seen or used by other clients.
+
+You can use the `SET` statement to set a user variable:
+
+```sql
+SET @var_name = expr [, @var_name = expr] ...
+```
+
+or
+
+```sql
+SET @var_name := expr
+```
+
+For SET, you can use `=` or `:=` as the assignment operator.
+
+For example:
+
+```sql
+mysql> SET @a1=1, @a2=2, @a3:=4;
+mysql> SELECT @a1, @a2, @a3, @a4 := @a1+@a2+@a3;
++------+------+------+--------------------+
+| @a1  | @a2  | @a3  | @a4 := @a1+@a2+@a3 |
++------+------+------+--------------------+
+|    1 |    2 |    4 |                  7 |
++------+------+------+--------------------+
+```
+
+Hexadecimal or bit values assigned to user variables are treated as binary strings in TiDB. To assign a hexadecimal or bit value as a number, use it in a numeric context. For example, add `0` or use `CAST(... AS UNSIGNED)`:
+
+```sql
+mysql> SET @v1 = b'1000001';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> SET @v2 = b'1000001'+0;
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> SET @v3 = CAST(b'1000001' AS UNSIGNED);
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> SELECT @v1, @v2, @v3;
++------+------+------+
+| @v1  | @v2  | @v3  |
++------+------+------+
+| A    | 65   | 65   |
++------+------+------+
+1 row in set (0.00 sec)
+```
+
+If you refer to a user-defined variable that has not been initialized, it has a value of NULL and a type of string:
+
+```sql
+mysql> select @not_exist;
++------------+
+| @not_exist |
++------------+
+| NULL       |
++------------+
+1 row in set (0.00 sec)
+```
+
+User-defined variables cannot be used as identifiers in SQL statements.
For example:
+
+```sql
+mysql> select * from t;
++------+
+| a    |
++------+
+|    1 |
++------+
+1 row in set (0.00 sec)
+
+mysql> SET @col = "a";
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> SELECT @col FROM t;
++------+
+| @col |
++------+
+| a    |
++------+
+1 row in set (0.00 sec)
+
+mysql> SELECT `@col` FROM t;
+ERROR 1054 (42S22): Unknown column '@col' in 'field list'
+
+mysql> SET @col = "`a`";
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> SELECT @col FROM t;
++------+
+| @col |
++------+
+| `a`  |
++------+
+1 row in set (0.01 sec)
+```
+
+An exception is when you construct a string for use as a prepared statement that is executed later:
+
+```sql
+mysql> PREPARE stmt FROM "SELECT @c FROM t";
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> EXECUTE stmt;
++------+
+| @c   |
++------+
+| a    |
++------+
+1 row in set (0.01 sec)
+
+mysql> DEALLOCATE PREPARE stmt;
+Query OK, 0 rows affected (0.00 sec)
+```
+
+For more information, see [User-Defined Variables in MySQL](https://dev.mysql.com/doc/refman/5.7/en/user-variables.html).
\ No newline at end of file
diff --git a/v1.0/sql/user-manual.md b/v1.0/sql/user-manual.md
new file mode 100755
index 0000000000000..273e1aa626b6a
--- /dev/null
+++ b/v1.0/sql/user-manual.md
@@ -0,0 +1,93 @@
+---
+title: TiDB User Guide
+category: user guide
+---
+
+# TiDB User Guide
+
+TiDB supports the SQL-92 standard and is compatible with MySQL. To help you get started with TiDB easily, this user guide mainly follows the MySQL documentation structure, with some TiDB-specific changes.
+ +## TiDB server administration + +- [The TiDB Server](tidb-server.md) +- [The TiDB Command Options](server-command-option.md) +- [The TiDB Data Directory](tidb-server.md#tidb-data-directory) +- [The TiDB System Database](system-database.md) +- [The TiDB System Variables](variable.md) +- [The Proprietary System Variables and Syntax in TiDB](tidb-specific.md) +- [The TiDB Server Logs](tidb-server.md#tidb-server-logs) +- [The TiDB Access Privilege System](privilege.md) +- [TiDB User Account Management](user-account-management.md) +- [Use Encrypted Connections](encrypted-connections.md) + +## SQL optimization + +- [Understand the Query Execution Plan](understanding-the-query-execution-plan.md) +- [Introduction to Statistics](statistics.md) + +## Language structure + +- [Literal Values](literal-values.md) +- [Schema Object Names](schema-object-names.md) +- [Keywords and Reserved Words](keywords-and-reserved-words.md) +- [User-Defined Variables](user-defined-variables.md) +- [Expression Syntax](expression-syntax.md) +- [Comment Syntax](comment-syntax.md) + +## Globalization + +- [Character Set Support](character-set-support.md) +- [Character Set Configuration](character-set-configuration.md) +- [Time Zone](time-zone.md) + +## Data types + +- [Numeric Types](datatype.md#numeric-types) +- [Date and Time Types](datatype.md#date-and-time-types) +- [String Types](datatype.md#string-types) +- [JSON Types](datatype.md#json-types) +- [The ENUM data type](datatype.md#the-enum-data-type) +- [The SET Type](datatype.md#the-set-type) +- [Data Type Default Values](datatype.md#data-type-default-values) + +## Functions and operators + +- [Function and Operator Reference](functions-and-operators-reference.md) +- [Type Conversion in Expression Evaluation](type-conversion-in-expression-evaluation.md) +- [Operators](operators.md) +- [Control Flow Functions](control-flow-functions.md) +- [String Functions](string-functions.md) +- [Numeric Functions and 
Operators](numeric-functions-and-operators.md) +- [Date and Time Functions](date-and-time-functions.md) +- [Bit Functions and Operators](bit-functions-and-operators.md) +- [Cast Functions and Operators](cast-functions-and-operators.md) +- [Encryption and Compression Functions](encryption-and-compression-functions.md) +- [Information Functions](information-functions.md) +- [JSON Functions](json-functions.md) +- Functions Used with Global Transaction IDs [TBD] +- [Aggregate (GROUP BY) Functions](aggregate-group-by-functions.md) +- [Miscellaneous Functions](miscellaneous-functions.md) +- [Precision Math](precision-math.md) + +## SQL statement syntax + +- [Data Definition Statements](ddl.md) +- [Data Manipulation Statements](dml.md) +- [Transactions](transaction.md) + +- [Database Administration Statements](admin.md) +- [Prepared SQL Statement Syntax](prepare.md) +- [Utility Statements](util.md) +- [TiDB SQL Syntax Diagram](https://pingcap.github.io/sqlgram/) + +## JSON functions and generated column + +- [JSON Functions and Generated Column](json-functions-generated-column.md) + +## Connectors and APIs + +- [Connectors and APIs](connection-and-APIs.md) + +## Compatibility with MySQL + +- [Compatibility with MySQL](mysql-compatibility.md) \ No newline at end of file diff --git a/v1.0/sql/util.md b/v1.0/sql/util.md new file mode 100755 index 0000000000000..1fc96642ed83f --- /dev/null +++ b/v1.0/sql/util.md @@ -0,0 +1,96 @@ +--- +title: Utility Statements +category: user guide +--- + +# Utility Statements + +## `DESCRIBE` statement + +The `DESCRIBE` and `EXPLAIN` statements are synonyms, which can also be abbreviated as `DESC`. See the usage of the `EXPLAIN` statement. 
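As a brief sketch (the table name `t1` is illustrative), the three forms below are interchangeable:

```sql
DESC t1;
DESCRIBE t1;
EXPLAIN t1;
```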
+
+## `EXPLAIN` statement
+
+```sql
+{EXPLAIN | DESCRIBE | DESC}
+    tbl_name [col_name]
+
+{EXPLAIN | DESCRIBE | DESC}
+    [explain_type]
+    explainable_stmt
+
+explain_type:
+    FORMAT = format_name
+
+format_name:
+    "DOT"
+
+explainable_stmt: {
+    SELECT statement
+  | DELETE statement
+  | INSERT statement
+  | REPLACE statement
+  | UPDATE statement
+}
+```
+
+For more information about the `EXPLAIN` statement, see [Understand the Query Execution Plan](understanding-the-query-execution-plan.md).
+
+In addition to the MySQL standard result format, TiDB also supports DotGraph output, for which you need to specify `FORMAT = "dot"` as in the following example:
+
+```sql
+create table t(a bigint, b bigint);
+
+TiDB > desc format = "dot" select A.a, B.b from t A join t B on A.a > B.b where A.a < 10;
++--------------------------------------------------------------------------------------------+
+| dot contents                                                                               |
++--------------------------------------------------------------------------------------------+
+| 
+digraph HashRightJoin_7 {
+subgraph cluster7{
+node [style=filled,
color=lightgrey]
+color=black
+label = "root"
+"HashRightJoin_7" -> "TableReader_10"
+"HashRightJoin_7" -> "TableReader_12"
+}
+subgraph cluster9{
+node [style=filled, color=lightgrey]
+color=black
+label = "cop"
+"Selection_9" -> "TableScan_8"
+}
+subgraph cluster11{
+node [style=filled, color=lightgrey]
+color=black
+label = "cop"
+"TableScan_11"
+}
+"TableReader_10" -> "Selection_9"
+"TableReader_12" -> "TableScan_11"
+}
+ |
++--------------------------------------------------------------------------------------------+
+1 row in set (0.00 sec)
+```
+
+If the `dot` program (in the `graphviz` package) is installed on your computer, you can generate a PNG file using the following command, where `xx.dot` is the result returned by the above statement:
+
+```bash
+dot xx.dot -T png -O
+```
+
+If the `dot` program is not installed on your computer, copy the result to [this website](http://www.webgraphviz.com/) to get a tree diagram:
+
+![Explain Dot](../media/explain_dot.png)
+
+## `USE` statement
+
+```sql
+USE db_name
+```
+
+The `USE` statement switches the default database. If a table in a SQL statement does not explicitly specify a database, the default database is used.
diff --git a/v1.0/sql/variable.md b/v1.0/sql/variable.md
new file mode 100755
index 0000000000000..bbe004c73d58e
--- /dev/null
+++ b/v1.0/sql/variable.md
@@ -0,0 +1,48 @@
+---
+title: The System Variables
+category: user guide
+---
+
+# The System Variables
+
+The system variables in MySQL are the system parameters that modify the operation of the database at runtime.
These variables have two types of scope: Global Scope and Session Scope. TiDB supports all the system variables in MySQL 5.7. Most of the variables are supported only for compatibility and do not affect runtime behavior.
+
+## Set the system variables
+
+You can use the [`SET`](admin.md#the-set-statement) statement to change the value of the system variables. Before you change a variable, consider its scope. For more information, see [MySQL Dynamic System Variables](https://dev.mysql.com/doc/refman/5.7/en/dynamic-system-variables.html).
+
+### Set Global variables
+
+Add the `GLOBAL` keyword before the variable or use `@@global.` as the modifier:
+
+```sql
+SET GLOBAL autocommit = 1;
+SET @@global.autocommit = 1;
+```
+
+### Set Session variables
+
+Add the `SESSION` keyword before the variable, use `@@session.` as the modifier, or use no modifier:
+
+```sql
+SET SESSION autocommit = 1;
+SET @@session.autocommit = 1;
+SET @@autocommit = 1;
+```
+
+> **Note:** `LOCAL` and `@@local.` are synonyms for `SESSION` and `@@session.`.
+
+## The fully supported MySQL system variables in TiDB
+
+The following MySQL system variables are fully supported in TiDB and behave the same as in MySQL.
+
+| Name | Scope | Description |
+| ---------------- | -------- | -------------------------------------------------- |
+| autocommit | GLOBAL \| SESSION | whether to automatically commit transactions |
+| sql_mode | GLOBAL \| SESSION | supports some of the MySQL SQL modes |
+| time_zone | GLOBAL \| SESSION | the time zone of the database |
+| tx_isolation | GLOBAL \| SESSION | the isolation level of transactions |
+
+## The proprietary system variables and syntaxes in TiDB
+
+See [The Proprietary System Variables and Syntax in TiDB](tidb-specific.md).
\ No newline at end of file
diff --git a/v1.0/templates/copyright.tex b/v1.0/templates/copyright.tex
new file mode 100755
index 0000000000000..95df173bc06c7
--- /dev/null
+++ b/v1.0/templates/copyright.tex
@@ -0,0 +1,4 @@
+
+\noindent \rule{\textwidth}{1pt}
+
+©2017 PingCAP All Rights Reserved.
\ No newline at end of file
diff --git a/v1.0/templates/template.tex b/v1.0/templates/template.tex
new file mode 100755
index 0000000000000..85686c70c582f
--- /dev/null
+++ b/v1.0/templates/template.tex
@@ -0,0 +1,276 @@
+\documentclass[$if(fontsize)$$fontsize$,$endif$$if(lang)$$lang$,$endif$$if(papersize)$$papersize$,$endif$$for(classoption)$$classoption$$sep$,$endfor$]{$documentclass$}
+$if(fontfamily)$
+\usepackage{$fontfamily$}
+$else$
+\usepackage{lmodern}
+$endif$
+$if(linestretch)$
+\usepackage{setspace}
+\setstretch{$linestretch$}
+$endif$
+\usepackage{amssymb,amsmath}
+\usepackage{ifxetex,ifluatex}
+\usepackage{fixltx2e} % provides \textsubscript
+\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
+  \usepackage[T1]{fontenc}
+  \usepackage[utf8]{inputenc}
+$if(euro)$
+  \usepackage{eurosym}
+$endif$
+\else % if luatex or xelatex
+  \ifxetex
+    \usepackage{mathspec}
+    \usepackage{xltxtra,xunicode}
+    $if(CJKmainfont)$
+    \usepackage{xeCJK}
+    \setCJKmainfont{$CJKmainfont$}
+    $endif$
+  \else
+    \usepackage{fontspec}
+  \fi
+  \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase}
+  \newcommand{\euro}{€}
+$if(mainfont)$
+    \setmainfont{$mainfont$}
+$endif$
+$if(sansfont)$
+    \setsansfont{$sansfont$}
+$endif$
+$if(monofont)$
+    \setmonofont[Mapping=tex-ansi]{$monofont$}
+$endif$
+$if(mathfont)$
+    \setmathfont(Digits,Latin,Greek){$mathfont$}
+$endif$
+\fi
+% use upquote if available, for straight quotes in verbatim environments
+\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
+% use microtype if available
+\IfFileExists{microtype.sty}{%
+\usepackage{microtype}
+\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
+}{}
+$if(geometry)$
+\usepackage[$for(geometry)$$geometry$$sep$,$endfor$]{geometry} +$endif$ +$if(natbib)$ +\usepackage{natbib} +\bibliographystyle{$if(biblio-style)$$biblio-style$$else$plainnat$endif$} +$endif$ +$if(biblatex)$ +\usepackage{biblatex} +$if(biblio-files)$ +\bibliography{$biblio-files$} +$endif$ +$endif$ +$if(listings)$ + +\usepackage{xcolor} +\usepackage{listings} +\lstset{ + basicstyle=\ttfamily, + keywordstyle=\color[rgb]{0.13,0.29,0.53}\bfseries, + stringstyle=\color[rgb]{0.31,0.60,0.02}, + commentstyle=\color[rgb]{0.56,0.35,0.01}\itshape, + numberstyle=\footnotesize, + frame=single, + breaklines=true, + postbreak=\raisebox{0ex}[0ex][0ex]{\ensuremath{\color{red}\hookrightarrow\space}} +} + +$endif$ +$if(lhs)$ +\lstnewenvironment{code}{\lstset{language=Haskell,basicstyle=\small\ttfamily}}{} +$endif$ +$if(highlighting-macros)$ +$highlighting-macros$ +$endif$ +$if(verbatim-in-note)$ +\usepackage{fancyvrb} +$endif$ +$if(tables)$ +\usepackage{longtable,booktabs} +$endif$ +$if(graphics)$ +\usepackage{graphicx} +\makeatletter +\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} +\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} +\makeatother +% Scale images if necessary, so that they will not overflow the page +% margins by default, and it is still possible to overwrite the defaults +% using explicit options in \includegraphics[width, height, ...]{} +\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} +$endif$ +\ifxetex + \usepackage[setpagesize=false, % page size defined by xetex + unicode=false, % unicode breaks when used with xetex + xetex]{hyperref} +\else + \usepackage[unicode=true]{hyperref} +\fi +\hypersetup{breaklinks=true, + bookmarks=true, + pdfauthor={$author-meta$}, + pdftitle={$title-meta$}, + colorlinks=true, + citecolor=$if(citecolor)$$citecolor$$else$blue$endif$, + urlcolor=$if(urlcolor)$$urlcolor$$else$blue$endif$, + linkcolor=$if(linkcolor)$$linkcolor$$else$magenta$endif$, + 
pdfborder={0 0 0}} +\urlstyle{same} % don't use monospace font for urls +$if(links-as-notes)$ +% Make links footnotes instead of hotlinks: +\renewcommand{\href}[2]{#2\footnote{\url{#1}}} +$endif$ +$if(strikeout)$ +\usepackage[normalem]{ulem} +% avoid problems with \sout in headers with hyperref: +\pdfstringdefDisableCommands{\renewcommand{\sout}{}} +$endif$ +\setlength{\parindent}{0pt} +\setlength{\parskip}{6pt plus 2pt minus 1pt} +\setlength{\emergencystretch}{3em} % prevent overfull lines +\providecommand{\tightlist}{% + \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} +$if(numbersections)$ +\setcounter{secnumdepth}{5} +$else$ +\setcounter{secnumdepth}{0} +$endif$ +$if(verbatim-in-note)$ +\VerbatimFootnotes % allows verbatim text in footnotes +$endif$ +$if(lang)$ +\ifxetex + \usepackage{polyglossia} + \setmainlanguage{$mainlang$} +\else + \usepackage[$lang$]{babel} +\fi +$endif$ + +$if(title)$ +\title{$title$$if(subtitle)$\\\vspace{0.5em}{\large $subtitle$}$endif$} +$endif$ +$if(author)$ +\author{$for(author)$$author$$sep$ \and $endfor$} +$endif$ +\date{$date$} +$for(header-includes)$ +$header-includes$ +$endfor$ + +% quote style +% http://tex.stackexchange.com/questions/179982/add-a-black-border-to-block-quotations +\usepackage{framed} +% \usepackage{xcolor} +\let\oldquote=\quote +\let\endoldquote=\endquote +\colorlet{shadecolor}{orange!15} +\renewenvironment{quote}{\begin{shaded*}\begin{oldquote}}{\end{oldquote}\end{shaded*}} + +% https://www.zhihu.com/question/25082703/answer/30038248 +% no cross chapter +\usepackage[section]{placeins} +% no float everywhere +\usepackage{float} +\floatplacement{figure}{H} + +% we chinese write article this way +\usepackage{indentfirst} +\setlength{\parindent}{2em} + +\renewcommand{\contentsname}{Table of Contents} +\renewcommand\figurename{Figure} + +% fix overlap toc number and title +% http://blog.csdn.net/golden1314521/article/details/39926135 +\usepackage{titlesec} +\usepackage{titletoc} +% 
\titlecontents{标题名}[左间距]{标题格式}{标题标志}{无序号标题}{指引线与页码}[下间距] +% fix overlap +\titlecontents{subsection} + [4em] + {}% + {\contentslabel{3em}}% + {}% + {\titlerule*[0.5pc]{$$\cdot$$}\contentspage\hspace*{0em}}% + +\titlecontents{subsubsection} + [7em] + {}% + {\contentslabel{3.5em}}% + {}% + {\titlerule*[0.5pc]{$$\cdot$$}\contentspage\hspace*{0em}}% + +\usepackage[all]{background} +% \backgroundsetup{contents=PingCAP Inc.,color=blue,opacity=0.2} +\backgroundsetup{contents=\includegraphics{media/pingcap-logo}, + placement=top,scale=0.2,hshift=1000pt,vshift=-150pt, + opacity=0.9,angle=0} + +% avoid level-4, 5 heading to be connected with following content +% https://github.com/jgm/pandoc/issues/1658 +\let\oldparagraph\paragraph +\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} +\let\oldsubparagraph\subparagraph +\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} + +\begin{document} + +% no bg at title page +\NoBgThispage +$if(title)$ +\maketitle +$endif$ +$if(abstract)$ +\begin{abstract} +$abstract$ +\end{abstract} +$endif$ + +$for(include-before)$ +$include-before$ + +$endfor$ +$if(toc)$ +{ +\hypersetup{linkcolor=black} +\setcounter{tocdepth}{$toc-depth$} +\tableofcontents +} +$endif$ +$if(lot)$ +\listoftables +$endif$ +$if(lof)$ +\listoffigures +$endif$ + +\newpage + +$body$ + +$if(natbib)$ +$if(biblio-files)$ +$if(biblio-title)$ +$if(book-class)$ +\renewcommand\bibname{$biblio-title$} +$else$ +\renewcommand\refname{$biblio-title$} +$endif$ +$endif$ +\bibliography{$biblio-files$} + +$endif$ +$endif$ +$if(biblatex)$ +\printbibliography$if(biblio-title)$[title=$biblio-title$]$endif$ + +$endif$ +$for(include-after)$ +$include-after$ + +$endfor$ +\end{document} diff --git a/v1.0/tispark/tispark-quick-start-guide.md b/v1.0/tispark/tispark-quick-start-guide.md new file mode 100755 index 0000000000000..ae3ba4fa7af2f --- /dev/null +++ b/v1.0/tispark/tispark-quick-start-guide.md @@ -0,0 +1,193 @@ +--- +title: TiSpark Quick Start Guide +category: User Guide +--- + 
+# Quick Start Guide for the TiDB Connector for Spark
+
+To make it easy to try [the TiDB Connector for Spark](tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible (both the Pre-GA and master versions) integrates Spark, the TiSpark jar package, and the TiSpark sample data by default.
+
+## Deployment information
+
+- Spark is deployed by default in the `spark` folder in the TiDB instance deployment directory.
+- The TiSpark jar package is deployed by default in the `jars` folder in the Spark deployment directory.
+
+    ```
+    spark/jars/tispark-0.1.0-beta-SNAPSHOT-jar-with-dependencies.jar
+    ```
+
+- The TiSpark sample data and import scripts are deployed by default in the TiDB-Ansible directory.
+
+    ```
+    tidb-ansible/resources/bin/tispark-sample-data
+    ```
+
+## Prepare the environment
+
+### Install JDK on the TiDB instance
+
+Download the latest version of JDK 1.8 from the [Oracle JDK official download page](http://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8-2177648.html). The version used in the following example is `jdk-8u144-linux-x64.tar.gz`.
+
+Extract the package and set the environment variables based on your JDK deployment directory.
+
+Edit the `~/.bashrc` file. For example:
+
+```bash
+export JAVA_HOME=/home/pingcap/jdk1.8.0_144
+export PATH=$JAVA_HOME/bin:$PATH
+```
+
+Verify that JDK is installed correctly:
+
+```
+$ java -version
+java version "1.8.0_144"
+Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
+Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
+```
+
+### Import the sample data
+
+Assume that the TiDB cluster is started. The service IP of one TiDB instance is `192.168.0.2`, the port is `4000`, the user name is `root`, and the password is empty.
+
+```
+cd tidb-ansible/resources/bin/tispark-sample-data
+```
+
+Edit the TiDB login information in `sample_data.sh`.
For example: + +``` +mysql -h 192.168.0.2 -P 4000 -u root < dss.ddl +``` + +Run the script: + +``` +./sample_data.sh +``` + +> **Note**: You need to install the MySQL client on the machine that runs the script. If you are a CentOS user, you can install it through the command `yum -y install mysql`. + +Log into TiDB and verify that the `TPCH_001` database and the following tables are included. + +``` +$ mysql -uroot -P4000 -h192.168.0.2 +MySQL [(none)]> show databases; ++--------------------+ +| Database | ++--------------------+ +| INFORMATION_SCHEMA | +| PERFORMANCE_SCHEMA | +| TPCH_001 | +| mysql | +| test | ++--------------------+ +5 rows in set (0.00 sec) + +MySQL [(none)]> use TPCH_001 +Reading table information for completion of table and column names +You can turn off this feature to get a quicker startup with -A + +Database changed +MySQL [TPCH_001]> show tables; ++--------------------+ +| Tables_in_TPCH_001 | ++--------------------+ +| CUSTOMER | +| LINEITEM | +| NATION | +| ORDERS | +| PART | +| PARTSUPP | +| REGION | +| SUPPLIER | ++--------------------+ +8 rows in set (0.00 sec) +``` + +## Use example + +Assume that the IP of your PD node is `192.168.0.2`, and the port is `2379`. 
+ +First start the spark-shell in the spark deployment directory: + +``` +$ cd spark +$ bin/spark-shell +``` + +```scala +import org.apache.spark.sql.TiContext +val ti = new TiContext(spark) + +// Mapping all TiDB tables from `TPCH_001` database as Spark SQL tables +ti.tidbMapDatabase("TPCH_001") +``` + +Then you can call Spark SQL directly: + +```scala +scala> spark.sql("select count(*) from lineitem").show +``` + +The result is: + +``` ++--------+ +|count(1)| ++--------+ +| 60175| ++--------+ +``` + +Now run a more complex Spark SQL: + +```scala +scala> spark.sql( + """select + | l_returnflag, + | l_linestatus, + | sum(l_quantity) as sum_qty, + | sum(l_extendedprice) as sum_base_price, + | sum(l_extendedprice * (1 - l_discount)) as sum_disc_price, + | sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, + | avg(l_quantity) as avg_qty, + | avg(l_extendedprice) as avg_price, + | avg(l_discount) as avg_disc, + | count(*) as count_order + |from + | lineitem + |where + | l_shipdate <= date '1998-12-01' - interval '90' day + |group by + | l_returnflag, + | l_linestatus + |order by + | l_returnflag, + | l_linestatus + """.stripMargin).show +``` + +The result is: + +``` ++------------+------------+---------+--------------+--------------+ +|l_returnflag|l_linestatus| sum_qty|sum_base_price|sum_disc_price| ++------------+------------+---------+--------------+--------------+ +| A| F|380456.00| 532348211.65|505822441.4861| +| N| F| 8971.00| 12384801.37| 11798257.2080| +| N| O|742802.00| 1041502841.45|989737518.6346| +| R| F|381449.00| 534594445.35|507996454.4067| ++------------+------------+---------+--------------+--------------+ +(Continued) +-----------------+---------+------------+--------+-----------+ + sum_charge| avg_qty| avg_price|avg_disc|count_order| +-----------------+---------+------------+--------+-----------+ + 526165934.000839|25.575155|35785.709307|0.050081| 14876| + 12282485.056933|25.778736|35588.509684|0.047759| 348| 
+1029418531.523350|25.454988|35691.129209|0.049931|      29181|
+ 528524219.358903|25.597168|35874.006533|0.049828|      14902|
+-----------------+---------+------------+--------+-----------+
+```
+
+See [more examples](https://github.com/ilovesoup/tpch/tree/master/sparksql).
diff --git a/v1.0/tispark/tispark-user-guide.md b/v1.0/tispark/tispark-user-guide.md
new file mode 100755
index 0000000000000..ca1596be77b26
--- /dev/null
+++ b/v1.0/tispark/tispark-user-guide.md
@@ -0,0 +1,257 @@
+---
+title: TiDB Connector for Spark User Guide
+category: user guide
+---
+
+# TiDB Connector for Spark User Guide
+
+The TiDB Connector for Spark is a thin layer built for running Apache Spark on top of TiDB/TiKV to answer complex OLAP queries. It takes advantage of both the Spark platform and the distributed TiKV cluster, and seamlessly glues to TiDB, the distributed OLTP database, to provide a Hybrid Transactional/Analytical Processing (HTAP) solution: a one-stop answer for both online transactions and analysis.
+
+The TiDB Connector for Spark depends on the TiKV cluster and the PD cluster. You also need to set up a Spark cluster. This document provides a brief introduction to how to set up and use the TiDB Connector for Spark. It requires some basic knowledge of Apache Spark. For more information, see the [Spark website](https://spark.apache.org/docs/latest/index.html).
+
+## Overview
+
+The TiDB Connector for Spark is an OLAP solution that runs Spark SQL directly on TiKV, the distributed storage engine.
+
+![TiDB Connector for Spark architecture](../media/tispark-architecture.png)
+
++ The TiDB Connector for Spark integrates deeply with the Spark Catalyst Engine. It provides precise control of computing, which allows Spark to read data from TiKV efficiently. It also supports index seek, which significantly improves the performance of point queries.
++ It utilizes several strategies to push down computing to reduce the size of the data set handled by Spark SQL, which accelerates query execution. It also uses the TiDB built-in statistical information for query plan optimization.
++ From the data integration point of view, TiDB Connector for Spark and TiDB serve as a solution that runs both transactions and analysis directly on the same platform without building and maintaining any ETL pipelines. It simplifies the system architecture and reduces the cost of maintenance.
++ Also, you can deploy and utilize tools from the Spark ecosystem for further data processing and manipulation on TiDB. For example, you can use the TiDB Connector for Spark for data analysis and ETL, retrieve data from TiKV as a machine learning data source, or generate reports from the scheduling system.
+
+## Environment setup
+
++ The current version of the TiDB Connector for Spark supports Spark 2.1. It has not been fully tested with Spark 2.0 or Spark 2.2 yet. It does not support any versions earlier than 2.0.
++ The TiDB Connector for Spark requires JDK 1.8+ and Scala 2.11 (the default Scala version for Spark 2.0+).
++ The TiDB Connector for Spark runs in any Spark mode such as YARN, Mesos, and Standalone.
+
+## Recommended configuration
+
+### Deployment of TiKV and the TiDB Connector for Spark clusters
+
+#### Configuration of the TiKV cluster
+
+For independent deployment of TiKV and the TiDB Connector for Spark, refer to the following recommendations:
+
++ Hardware configuration
+    - For general purposes, please refer to the TiDB and TiKV hardware configuration [recommendations](https://github.com/pingcap/docs/blob/master/op-guide/recommendation.md#deployment-recommendations).
+    - If the usage is more focused on analysis scenarios, you can increase the memory of the TiKV nodes to at least 64 GB.
+
++ TiKV parameters (default)
+
+    ```
+    [server]
+    end-point-concurrency = 8 # For OLAP scenarios, consider increasing this parameter
+    [raftstore]
+    sync-log = false
+
+    [rocksdb]
+    max-background-compactions = 6
+    max-background-flushes = 2
+
+    [rocksdb.defaultcf]
+    block-cache-size = "10GB"
+
+    [rocksdb.writecf]
+    block-cache-size = "4GB"
+
+    [rocksdb.raftcf]
+    block-cache-size = "1GB"
+
+    [rocksdb.lockcf]
+    block-cache-size = "1GB"
+
+    [storage]
+    scheduler-worker-pool-size = 4
+    ```
+
+#### Configuration of the independent deployment of the Spark cluster and the TiDB Connector for Spark cluster
+
+See the [Spark official website](https://spark.apache.org/docs/latest/hardware-provisioning.html) for detailed hardware recommendations.
+
+The following is a short overview of the TiDB Connector for Spark configuration.
+
+It is recommended to allocate 32 GB of memory for Spark. Please reserve at least 25% of the memory for the operating system and buffer cache.
+
+It is recommended to provision at least 8 to 16 cores per machine for Spark. Initially, you can assign all the CPU cores to Spark.
+
+See the [official configuration](https://spark.apache.org/docs/latest/spark-standalone.html) on the Spark website. The following is an example based on the `spark-env.sh` configuration (note that `spark-env.sh` is a shell script, so there must be no spaces around `=`):
+
+```sh
+SPARK_EXECUTOR_MEMORY=32g
+SPARK_WORKER_MEMORY=32g
+SPARK_WORKER_CORES=8
+```
+
+#### Hybrid deployment configuration for the TiDB Connector for Spark and TiKV cluster
+
+For the hybrid deployment of the TiDB Connector for Spark and TiKV, add the resources required by the TiDB Connector for Spark to the resources reserved in TiKV, and allocate 25% of the memory for the system.
+
+## Deploy the TiDB Connector for Spark
+
+Download the TiDB Connector for Spark's jar package [here](http://download.pingcap.org/tispark-0.1.0-SNAPSHOT-jar-with-dependencies.jar).
+
+### Deploy the TiDB Connector for Spark on the existing Spark cluster
+
+Running the TiDB Connector for Spark on an existing Spark cluster does not require a restart of the cluster. You can use Spark's `--jars` parameter to introduce the TiDB Connector for Spark as a dependency:
+
+```sh
+spark-shell --jars $PATH/tispark-0.1.0.jar
+```
+
+If you want to deploy the TiDB Connector for Spark as a default component, simply place the TiDB Connector for Spark jar package into the jars path of each node of the Spark cluster and restart the Spark cluster:
+
+```sh
+${SPARK_INSTALL_PATH}/jars
+```
+
+In this way, you can use either `spark-submit` or `spark-shell` to use the TiDB Connector for Spark directly.
+
+### Deploy the TiDB Connector for Spark without the Spark cluster
+
+If you do not have a Spark cluster, we recommend using the Standalone mode. To use the Spark Standalone mode, you can simply place a compiled version of Spark on each node of the cluster. If you encounter problems, see the [official website](https://spark.apache.org/docs/latest/spark-standalone.html). You are also welcome to [file an issue](https://github.com/pingcap/tispark/issues/new) on our GitHub.
+
+#### Download and install
+
+You can download [Apache Spark](https://spark.apache.org/downloads.html).
+
+For the Standalone mode without Hadoop support, use any 2.1.x version of Spark pre-built with Apache Hadoop 2.x. If you need to use a Hadoop cluster, please choose the matching Hadoop version. You can also choose to build Spark from the [source code](https://spark.apache.org/docs/2.1.0/building-spark.html) to match your Hadoop version. Please note that the TiDB Connector for Spark currently only supports Spark 2.1.x.
+
+Suppose you already have a Spark binary, and its path is `SPARKPATH`; please copy the TiDB Connector for Spark jar package to the `${SPARKPATH}/jars` directory.
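Wherever the jar is staged, Spark also needs to know the address of the PD cluster before the TiDB Connector for Spark can map TiDB tables. As the demo later in this document shows, one way is a single line in `${SPARKPATH}/conf/spark-defaults.conf`; the address below is a placeholder for your own PD node:

```
spark.tispark.pd.addresses 192.168.1.100:2379
```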
+
+#### Start a Master node
+
+Execute the following command on the selected Spark Master node:
+
+```sh
+cd $SPARKPATH
+
+./sbin/start-master.sh
+```
+
+After this step is completed, the path of a log file is printed on the screen. Check the log file to confirm whether the Spark Master has started successfully. You can open [http://spark-master-hostname:8080](http://spark-master-hostname:8080) to view the cluster information (if you did not change the default Spark Master port number). When you start a Spark Slave, you can also use this panel to confirm whether the Slave has joined the cluster.
+
+#### Start a Slave node
+
+Similarly, you can start a Spark Slave node with the following command:
+
+```sh
+./sbin/start-slave.sh spark://spark-master-hostname:7077
+```
+
+After the command returns, you can also check from the panel whether the Slave node has joined the Spark cluster correctly. Repeat the above command on all Slave nodes. After all Slaves are connected to the Master, you have a Standalone mode Spark cluster.
+
+#### Spark SQL shell and JDBC server
+
+If you want to use the JDBC server and the interactive SQL shell, please copy `start-tithriftserver.sh` and `stop-tithriftserver.sh` to your Spark `sbin` folder and `tispark-sql` to the `bin` folder.
+
+To start the interactive shell:
+
+```sh
+./bin/tispark-sql
+```
+
+To use the Thrift Server, start it in a similar way as the default Spark Thrift Server:
+
+```sh
+./sbin/start-tithriftserver.sh
+```
+
+And stop it as follows:
+
+```sh
+./sbin/stop-tithriftserver.sh
+```
+
+## Demo
+
+Assuming that you have successfully started the TiDB Connector for Spark cluster as described above, here is a quick introduction to using Spark SQL for OLAP analysis. Here we use a table named `lineitem` in the `tpch` database as an example.
+
+
+Assuming that your PD node is located at `192.168.1.100`, port `2379`, add the following line to `$SPARK_HOME/conf/spark-defaults.conf`:
+
+```
+spark.tispark.pd.addresses 192.168.1.100:2379
+```
+
+And then enter the following commands in the Spark-Shell:
+
+```scala
+import org.apache.spark.sql.TiContext
+val ti = new TiContext(spark)
+ti.tidbMapDatabase("tpch")
+```
+
+After that, you can call Spark SQL directly:
+
+```scala
+spark.sql("select count(*) from lineitem").show
+```
+
+The result is:
+
+```
++-------------+
+|    count(1) |
++-------------+
+|   600000000 |
++-------------+
+```
+
+The TiSpark SQL interactive shell is almost the same as the spark-sql shell.
+
+```sh
+tispark-sql> use tpch;
+Time taken: 0.015 seconds
+
+tispark-sql> select count(*) from lineitem;
+2000
+Time taken: 0.673 seconds, Fetched 1 row(s)
+```
+
+For a JDBC connection to the Thrift Server, you can try various JDBC-supported tools, including SQuirreL SQL and hive-beeline.
+For example, to use it with beeline:
+
+```sh
+./beeline
+Beeline version 1.2.2 by Apache Hive
+beeline> !connect jdbc:hive2://localhost:10000
+
+1: jdbc:hive2://localhost:10000> use testdb;
++---------+--+
+| Result  |
++---------+--+
++---------+--+
+No rows selected (0.013 seconds)
+
+select count(*) from account;
++-----------+--+
+| count(1)  |
++-----------+--+
+| 1000000   |
++-----------+--+
+1 row selected (1.97 seconds)
+```
+
+## TiSparkR
+
+TiSparkR is a thin layer built to support the R language with TiSpark. Refer to [this document](https://github.com/pingcap/tispark/blob/master/R/README.md) for usage.
+
+## TiSpark on PySpark
+
+TiSpark on PySpark is a Python package built to support the Python language with TiSpark. Refer to [this document](https://github.com/pingcap/tispark/blob/master/python/README.md) for usage.
+
+## FAQ
+
+Q: What are the pros/cons of independent deployment as opposed to a shared resource with an existing Spark / Hadoop cluster?
+
+A: You can use the existing Spark cluster without a separate deployment, but if the existing cluster is busy, the TiDB Connector for Spark will not be able to achieve the desired speed.
+
+Q: Can I mix Spark with TiKV?
+
+A: If TiDB and TiKV are overloaded and run critical online tasks, consider deploying the TiDB Connector for Spark separately. You also need to consider using different NICs to ensure that OLTP's network resources are not compromised, so that the online business is not affected. If the online business requirements are not high or the load is small enough, you can consider the hybrid deployment of the TiDB Connector for Spark and TiKV.
diff --git a/v1.0/tools/loader.md b/v1.0/tools/loader.md
new file mode 100755
index 0000000000000..ce72325c3071c
--- /dev/null
+++ b/v1.0/tools/loader.md
@@ -0,0 +1,145 @@
+---
+title: Loader Instructions
+category: advanced
+---
+
+# Loader Instructions
+
+## What is Loader?
+
+Loader is a data import tool that loads data to TiDB.
+
+[Download the Binary](http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz).
+
+## Why did we develop Loader?
+
+Since tools like mysqldump can take days to migrate massive amounts of data, we used Percona's mydumper/myloader suite to export and import data with multiple threads. During the process, we found that mydumper works well. However, as myloader lacks the error retry and savepoint functions, it is inconvenient to use. Therefore, we developed Loader, which reads the output data files of mydumper and imports data to TiDB through the MySQL protocol.
+
+## What can Loader do?
+
++ Multi-threaded data import
+
++ Support for table-level concurrent import with scattered hot spot writes
+
++ Support for concurrent import of a single large table with scattered hot spot writes
+
++ Support for the mydumper data format
+
++ Support for error retry
+
++ Support for savepoints
+
++ Improved data import speed through system variables
+
+## Usage
+
+> **Note:**
+> - Do not import the `mysql` system database from the MySQL instance to the downstream TiDB instance.
+> - If mydumper uses the `-m` parameter, the data is exported without the table structure and Loader cannot import the data.
+> - If you use the default `checkpoint-schema` parameter, after importing the data of a database, run `drop database tidb_loader` before you begin to import the next database.
+> - It is recommended to specify the `checkpoint-schema = "tidb_loader"` parameter when importing data.
+
+### Parameter description
+
+```
+  -L string: the log level setting, which can be set as debug, info, warn, error, fatal (default: "info")
+
+  -P int: the port of TiDB (default: 4000)
+
+  -V boolean: print the version and exit
+
+  -c string: config file
+
+  -checkpoint-schema string: the database name of the checkpoint. During execution, Loader constantly updates this database. After recovering from an interruption, Loader uses this database to get the progress of the last run. (default: "tidb_loader")
+
+  -d string: the storage directory of the data that needs to be imported (default: "./")
+
+  -h string: the host of TiDB (default: "127.0.0.1")
+
+  -p string: the password of TiDB
+
+  -pprof-addr string: the pprof address of Loader, used to tune the performance of Loader (default: ":10084")
+
+  -t int: the number of threads; increase this as the number of TiKV nodes increases (default: 16)
+
+  -u string: the user name of TiDB (default: "root")
+```
+
+### Configuration file
+
+Apart from command line parameters, you can also use configuration files. 
The format is shown below:
+
+```toml
+# Loader log level, which can be set as "debug", "info", "warn", "error" and "fatal" (default: "info")
+log-level = "info"
+
+# Loader log file
+log-file = "loader.log"
+
+# Directory of the dump to import (default: "./")
+dir = "./"
+
+# Loader pprof address, used to tune the performance of Loader (default: "127.0.0.1:10084")
+pprof-addr = "127.0.0.1:10084"
+
+# The checkpoint data is saved to TiDB, and the schema name is defined here.
+checkpoint-schema = "tidb_loader"
+
+# Number of threads restoring concurrently for the worker pool (default: 16). Each worker restores one file at a time.
+pool-size = 16
+
+# The target database information
+[db]
+host = "127.0.0.1"
+user = "root"
+password = ""
+port = 4000
+
+# The sharding synchronization rules support wildcard characters.
+# 1. The asterisk character ("*", also called "star") matches zero or more characters,
+#    for example, "doc*" matches "doc" and "document" but not "dodo";
+#    the asterisk character must be at the end of the wildcard word,
+#    and there is only one asterisk in one wildcard word.
+# 2. The question mark ("?") matches exactly one character.
+# [[route-rules]]
+# pattern-schema = "shard_db_*"
+# pattern-table = "shard_table_*"
+# target-schema = "shard_db"
+# target-table = "shard_table"
+```
+
+### Usage example
+
+Command line parameters:
+
+```
+./bin/loader -d ./test -h 127.0.0.1 -u root -P 4000
+```
+
+Or use the configuration file `config.toml`:
+
+```
+./bin/loader -c=config.toml
+```
+
+## FAQ
+
+### The scenario of synchronizing data from sharded tables
+
+Loader supports importing data from sharded tables into one table within one database according to the `route-rules`. Before synchronizing, check the following items:
+
+- Whether the sharding rules can be represented using the `route-rules` syntax.
+- Whether the sharded tables contain monotonically increasing primary keys, or whether there are conflicts in the unique indexes or the primary keys after the combination.
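For the first item, shell `case` globbing behaves closely enough to the wildcard rules described in the configuration comments above that you can sanity-check a pattern locally before editing the configuration. A small sketch (the `matches` helper is ours for illustration, not part of Loader):

```shell
# Check whether a name matches a route-rules-style wildcard pattern.
# Shell case-globbing mirrors the documented semantics here: '*' matches
# zero or more characters and '?' matches exactly one character.
matches() {
  case "$1" in
    $2) echo yes ;;
    *)  echo no ;;
  esac
}

matches doc      'doc*'      # yes: '*' matches zero characters
matches document 'doc*'      # yes: '*' matches "ument"
matches dodo     'doc*'      # no:  "dodo" does not start with "doc"
matches table_1  'table_?'   # yes: '?' matches exactly one character
matches table_10 'table_?'   # no:  '?' cannot match two characters
```

Note that the `route-rules` wildcards are more restrictive than shell globbing: the asterisk is only allowed at the end of a pattern, and only one asterisk is allowed per pattern.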
+
+To combine tables, configure the `route-rules` section in the configuration file of Loader:
+
+- To use the table combination function, it is required to fill in the `pattern-schema` and `target-schema`.
+- If the `pattern-table` and `target-table` are empty, the table name is not combined or converted.
+
+```
+[[route-rules]]
+pattern-schema = "example_db"
+pattern-table = "table_*"
+target-schema = "example_db"
+target-table = "table"
+```
\ No newline at end of file
diff --git a/v1.0/tools/pd-control.md b/v1.0/tools/pd-control.md
new file mode 100755
index 0000000000000..a3f60b0ffbd24
--- /dev/null
+++ b/v1.0/tools/pd-control.md
@@ -0,0 +1,381 @@
+---
+title: PD Control User Guide
+category: tools
+---
+
+# PD Control User Guide
+
+As a command line tool of PD, PD Control obtains the state information of the cluster and tunes the cluster.
+
+## Source code compiling
+
+1. Install [Go](https://golang.org/) version 1.7 or later.
+2. In the PD root directory, use the `make` command to compile and generate `bin/pd-ctl`.
+
+> **Note:** Generally, you do not need to compile the source code, as the PD Control tool already exists in the released binary or Docker image. However, developers can refer to the instructions above for compiling the source code.
+
+## Usage
+
+Single-command mode:
+
+    ./pd-ctl store -d -u http://127.0.0.1:2379
+
+Interactive mode:
+
+    ./pd-ctl -u http://127.0.0.1:2379
+
+Use environment variables:
+
+```bash
+export PD_ADDR=http://127.0.0.1:2379
+./pd-ctl
+```
+
+Use TLS to encrypt:
+
+```bash
+./pd-ctl -u https://127.0.0.1:2379 --cacert="path/to/ca" --cert="path/to/cert" --key="path/to/key"
+```
+
+## Command line flags
+
+### \-\-pd,-u
+
++ PD address
++ Default address: http://127.0.0.1:2379
++ Environment variable: PD_ADDR
+
+### \-\-detach,-d
+
++ Use the single-command mode (without entering readline)
++ Default: false
+
+### --cacert
+
++ Specify the path to the certificate file of the trusted CA in PEM format
++ Default: ""
+
+### --cert
+
++ Specify the path to the SSL certificate in PEM format
++ Default: ""
+
+### --key
+
++ Specify the path to the SSL certificate key file in PEM format, which is the private key of the certificate specified by `--cert`
++ Default: ""
+
+### --version,-V
+
++ Print the version information and exit
++ Default: false
+
+## Command
+
+### `cluster`
+
+Use this command to view the basic information of the cluster.
+
+Usage:
+
+```bash
+>> cluster // To show the cluster information
+{
+  "id": 6493707687106161130,
+  "max_peer_count": 3
+}
+```
+
+### `config [show | set \<option\> \<value\>]`
+
+Use this command to view or modify the configuration information.
+
+Usage:
+
+```bash
+>> config show // Display the config information of the scheduler
+{
+  "max-snapshot-count": 3,
+  "max-pending-peer-count": 16,
+  "max-store-down-time": "1h0m0s",
+  "leader-schedule-limit": 64,
+  "region-schedule-limit": 16,
+  "replica-schedule-limit": 24,
+  "tolerant-size-ratio": 2.5,
+  "schedulers-v2": [
+    {
+      "type": "balance-region",
+      "args": null
+    },
+    {
+      "type": "balance-leader",
+      "args": null
+    },
+    {
+      "type": "hot-region",
+      "args": null
+    }
+  ]
+}
+>> config show all // Display all config information
+>> config show namespace ts1 // Display the config information of the namespace named ts1
+{
+  "leader-schedule-limit": 64,
+  "region-schedule-limit": 16,
+  "replica-schedule-limit": 24,
+  "max-replicas": 3
+}
+>> config show replication // Display the config information of replication
+{
+  "max-replicas": 3,
+  "location-labels": ""
+}
+```
+
+- `leader-schedule-limit` controls the number of concurrent leader scheduling tasks. This value affects the speed of leader balance. A larger value means a higher speed, and setting the value to 0 disables the scheduling. Usually leader scheduling has a small load, so you can increase the value if needed.
+
+    ```bash
+    >> config set leader-schedule-limit 4 // At most 4 concurrent leader scheduling tasks
+    ```
+
+- `region-schedule-limit` controls the number of concurrent region scheduling tasks. This value affects the speed of region balance. A larger value means a higher speed, and setting the value to 0 disables the scheduling. Usually region scheduling has a large load, so do not set the value too high.
+
+    ```bash
+    >> config set region-schedule-limit 2 // At most 2 concurrent region scheduling tasks
+    ```
+
+- `replica-schedule-limit` controls the number of concurrent replica scheduling tasks. This value affects the scheduling speed when the node is down or removed.
+ A larger value means a higher speed, and setting the value to 0 disables the scheduling. Usually replica scheduling has a large load, so do not set the value too high.
+
+    ```bash
+    >> config set replica-schedule-limit 4 // At most 4 concurrent replica scheduling tasks
+    ```
+
+The configuration above is global. You can also tune the configuration for different namespaces. The global configuration is used if the corresponding configuration of the namespace is not set.
+
+> **Note:** The configuration of the namespace only supports editing `leader-schedule-limit`, `region-schedule-limit`, `replica-schedule-limit` and `max-replicas`.
+
+    ```bash
+    >> config set namespace ts1 leader-schedule-limit 4 // At most 4 concurrent leader scheduling tasks for the namespace named ts1
+    >> config set namespace ts2 region-schedule-limit 2 // At most 2 concurrent region scheduling tasks for the namespace named ts2
+    ```
+
+### `config delete namespace \<name\> [\<option\>]`
+
+Use this command to delete the configuration of a namespace.
+
+Usage:
+
+After you configure a namespace, if you want it to continue to use the global configuration, delete the configuration information of the namespace using the following command:
+
+```bash
+>> config delete namespace ts1 // Delete the configuration of the namespace named ts1
+```
+
+If you want to use the global configuration only for a certain configuration item of the namespace, use the following command:
+
+```bash
+>> config delete namespace region-schedule-limit ts2 // Delete the region-schedule-limit configuration of the namespace named ts2
+```
+
+### `health`
+
+Use this command to view the health information of the cluster.
+
+Usage:
+
+```bash
+>> health // Display the health information
+{"health": "true"}
+```
+
+### `hot [read | write | store]`
+
+Use this command to view the hot spot information of the cluster.
+
+Usage:
+
+```bash
+>> hot read // Display hot spots for the read operation
+>> hot write // Display hot spots for the write operation
+>> hot store // Display hot spots for all the read and write operations
+```
+
+### `label [store]`
+
+Use this command to view the label information of the cluster.
+
+Usage:
+
+```bash
+>> label // Display all labels
+>> label store zone cn // Display all stores including the "zone":"cn" label
+```
+
+### `member [leader | delete]`
+
+Use this command to view the PD members or remove a specified member.
+
+Usage:
+
+```bash
+>> member // Display the information of all members
+{
+  "members": [......]
+}
+>> member leader show // Display the information of the leader
+{
+  "name": "pd",
+  "addr": "http://192.168.199.229:2379",
+  "id": 9724873857558226554
+}
+>> member delete name pd2 // Delete "pd2"
+Success!
+>> member delete id 1319539429105371180 // Delete a node using its id
+Success!
+```
+
+### `operator [show | add | remove]`
+
+Use this command to view and control the scheduling operations.
+
+Usage:
+
+```bash
+>> operator show // Display all operators
+>> operator show admin // Display all admin operators
+>> operator show leader // Display all leader operators
+>> operator show region // Display all region operators
+>> operator add add-peer 1 2 // Add a replica of region 1 on store 2
+>> operator add remove-peer 1 2 // Remove a replica of region 1 on store 2
+>> operator add transfer-leader 1 2 // Schedule the leader of region 1 to store 2
+>> operator add transfer-region 1 2 3 4 // Schedule region 1 to stores 2, 3 and 4
+>> operator add transfer-peer 1 2 3 // Schedule the replica of region 1 on store 2 to store 3
+>> operator remove 1 // Remove the scheduling operation of region 1
+```
+
+### `ping`
+
+Use this command to view the time it takes to `ping` PD.
+
+Usage:
+
+```bash
+>> ping
+time: 43.12698ms
+```
+
+### `region \<region_id\>`
+
+Use this command to view the region information.
+
+Usage:
+
+```bash
+>> region // Display the information of all regions
+{
+  "count": 1,
+  "regions": [......]
+}
+
+>> region 2 // Display the information of the region with the id of 2
+{
+  "region": {
+    "id": 2,
+    ......
+  },
+  "leader": {
+    ......
+  }
+}
+```
+
+### `region key [--format=raw|pb|proto|protobuf] \<key\>`
+
+Use this command to query the region that a specific key resides in. It supports the raw and protobuf formats.
+
+Raw format usage (default):
+
+```bash
+>> region key abc
+{
+  "region": {
+    "id": 2,
+    ......
+  }
+}
+```
+
+Protobuf format usage:
+
+```bash
+>> region key --format=pb t\200\000\000\000\000\000\000\377\035_r\200\000\000\000\000\377\017U\320\000\000\000\000\000\372
+{
+  "region": {
+    "id": 2,
+    ......
+  }
+}
+```
+
+### `scheduler [show | add | remove]`
+
+Use this command to view and control the scheduling strategy.
+
+Usage:
+
+```bash
+>> scheduler show // Display all schedulers
+>> scheduler add grant-leader-scheduler 1 // Schedule all the leaders of the regions on store 1 to store 1
+>> scheduler add evict-leader-scheduler 1 // Move all the region leaders on store 1 out
+>> scheduler add shuffle-leader-scheduler // Randomly exchange the leaders between different stores
+>> scheduler add shuffle-region-scheduler // Randomly schedule the regions between different stores
+>> scheduler remove grant-leader-scheduler-1 // Remove the corresponding scheduler
+```
+
+### `store [delete | label | weight] \<store_id\>`
+
+Use this command to view the store information or remove a specified store.
+
+Usage:
+
+```bash
+>> store // Display information of all stores
+{
+  "count": 3,
+  "stores": [...]
+}
+>> store 1 // Get the store with the store id of 1
+  ......
+>> store delete 1 // Delete the store with the store id of 1
+  ......
+>> store label 1 zone cn // Set the value of the label with the "zone" key to "cn" for the store with the store id of 1
+>> store weight 1 5 10 // Set the leader weight to 5 and the region weight to 10 for the store with the store id of 1
+```
+
+### `table_ns [create | add | remove | set_store | rm_store | set_meta | rm_meta]`
+
+Use this command to view the namespace information of the table.
+
+Usage:
+
+```bash
+>> table_ns add ts1 1 // Add the table with the table id of 1 to the namespace named ts1
+>> table_ns create ts1 // Create the namespace named ts1
+>> table_ns remove ts1 1 // Remove the table with the table id of 1 from the namespace named ts1
+>> table_ns rm_meta ts1 // Remove the metadata from the namespace named ts1
+>> table_ns rm_store 1 ts1 // Remove the store with the store id of 1 from the namespace named ts1
+>> table_ns set_meta ts1 // Add the metadata to the namespace named ts1
+>> table_ns set_store 1 ts1 // Add the store with the store id of 1 to the namespace named ts1
+```
+
+### `tso`
+
+Use this command to parse the physical and logical time of a TSO.
+
+Usage:
+
+```bash
+>> tso 395181938313123110 // Parse the TSO
+system: 2017-10-09 05:50:59 +0800 CST
+logic: 120102
+```
\ No newline at end of file
diff --git a/v1.0/tools/syncer.md b/v1.0/tools/syncer.md
new file mode 100755
index 0000000000000..a619176ccdd37
--- /dev/null
+++ b/v1.0/tools/syncer.md
@@ -0,0 +1,523 @@
+---
+title: Syncer User Guide
+category: advanced
+---
+
+# Syncer User Guide
+
+## About Syncer
+
+Syncer is a tool used to import data incrementally. It is a part of the TiDB enterprise toolset. To obtain Syncer, see [Download the TiDB enterprise toolset](#download-the-tidb-enterprise-toolset-linux).
+
+## Syncer architecture
+
+![syncer architecture](../media/syncer_architecture.png)
+
+## Download the TiDB enterprise toolset (Linux)
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz
+cd tidb-enterprise-tools-latest-linux-amd64
+```
+
+## Where to deploy Syncer
+
+You can deploy Syncer to any machine that can connect to MySQL and the TiDB cluster, but it is recommended to deploy Syncer close to the TiDB cluster.
+
+## Use Syncer to import data incrementally
+
+Before importing data, read [Check before importing data using Syncer](#check-before-importing-data-using-syncer).
+
+### 1. Set the position to synchronize
+
+Edit the meta file of Syncer, assuming the meta file is `syncer.meta`:
+
+```bash
+# cat syncer.meta
+binlog-name = "mysql-bin.000003"
+binlog-pos = 930143241
+binlog-gtid = "2bfabd22-fff7-11e6-97f7-f02fa73bcb01:1-23,61ccbb5d-c82d-11e6-ac2e-487b6bd31bf7:1-4"
+```
+
+> **Note:**
+>
+> - The `syncer.meta` file only needs to be configured when it is first used. The position is automatically updated as subsequent binlogs are synchronized.
+> - If you use the binlog position to synchronize, you only need to configure `binlog-name` and `binlog-pos`; if you use `binlog-gtid` to synchronize, you need to configure `binlog-gtid` and set `--enable-gtid` when starting Syncer.
+
+### 2. 
Start Syncer
+
+Description of Syncer command line options:
+
+```
+Usage of Syncer:
+  -L string
+      log level: debug, info, warn, error, fatal (default "info")
+  -V
+      print the Syncer version info (default false)
+  -auto-fix-gtid
+      automatically fix the GTID info when the MySQL master and slave switch (default false)
+  -b int
+      the size of batch transactions (default 10)
+  -c int
+      the number of batch threads that Syncer processes (default 16)
+  -config string
+      specify the corresponding configuration file when starting Syncer; for example, `--config config.toml`
+  -enable-gtid
+      start Syncer using the GTID mode (default false); before enabling this option, you need to enable GTID in the upstream MySQL
+  -log-file string
+      specify the log file directory, such as `--log-file ./syncer.log`
+  -log-rotate string
+      specify the log file rotating cycle, hour/day (default "day")
+  -meta string
+      specify the meta file of the Syncer upstream (in the same directory as the configuration file; default "syncer.meta")
+  -server-id int
+      specify the MySQL slave server-id (default 101)
+  -status-addr string
+      specify the Syncer metrics address, such as `--status-addr 127.0.0.1:10088`
+```
+
+The `config.toml` configuration file of Syncer:
+
+```toml
+log-level = "info"
+
+server-id = 101
+
+# The file path for meta:
+meta = "./syncer.meta"
+
+worker-count = 16
+batch = 10
+
+# The testing address for pprof. It can also be used by Prometheus to pull Syncer metrics.
+# Change "127.0.0.1" to the IP address of the corresponding host.
+status-addr = "127.0.0.1:10086"
+
+# Note: skip-sqls is deprecated; use skip-ddls instead.
+# skip-ddls skips the DDL statements that are incompatible with TiDB, and supports regular expressions.
+# skip-ddls = ["^CREATE\\s+USER"]
+
+# Note: skip-events is deprecated; use skip-dmls instead.
+# skip-dmls skips the DML statements. The type value can be 'insert', 'update' and 'delete'.
+
+# The 'delete' statements that skip-dmls skips in the foo.bar table:
+# [[skip-dmls]]
+# db-name = "foo"
+# tbl-name = "bar"
+# type = "delete"
+#
+# The 'delete' statements that skip-dmls skips in all tables:
+# [[skip-dmls]]
+# type = "delete"
+#
+# The 'delete' statements that skip-dmls skips in all foo.* tables:
+# [[skip-dmls]]
+# db-name = "foo"
+# type = "delete"
+
+# Specify the database name to be synchronized. Regular expressions are supported; start with '~' to use them.
+# replicate-do-db = ["~^b.*","s1"]
+
+# Specify the db.table to be synchronized.
+# db-name and tbl-name do not support the `db-name ="dbname,dbname2"` format.
+# [[replicate-do-table]]
+# db-name ="dbname"
+# tbl-name = "table-name"
+
+# [[replicate-do-table]]
+# db-name ="dbname1"
+# tbl-name = "table-name1"
+
+# Specify the db.table to be synchronized. Regular expressions are supported; start with '~' to use them.
+# [[replicate-do-table]]
+# db-name ="test"
+# tbl-name = "~^a.*"
+
+# Specify the database you want to ignore in synchronization. Regular expressions are supported; start with '~' to use them.
+# replicate-ignore-db = ["~^b.*","s1"]
+
+# Specify the database table you want to ignore in synchronization.
+# db-name and tbl-name do not support the `db-name ="dbname,dbname2"` format.
+# [[replicate-ignore-table]]
+# db-name = "your_db"
+# tbl-name = "your_table"
+
+# Specify the database table you want to ignore in synchronization. Regular expressions are supported; start with '~' to use them.
+# [[replicate-ignore-table]]
+# db-name ="test"
+# tbl-name = "~^a.*"
+
+# The sharding synchronization rules support wildcard characters.
+# 1. The asterisk character ("*", also called "star") matches zero or more characters,
+#    for example, "doc*" matches "doc" and "document" but not "dodo";
+#    the asterisk character must be at the end of the wildcard word,
+#    and there is only one asterisk in one wildcard word.
+# 2. 
The question mark ("?") matches any single character.
+# [[route-rules]]
+# pattern-schema = "route_*"
+# pattern-table = "abc_*"
+# target-schema = "route"
+# target-table = "abc"
+
+# [[route-rules]]
+# pattern-schema = "route_*"
+# pattern-table = "xyz_*"
+# target-schema = "route"
+# target-table = "xyz"
+
+[from]
+host = "127.0.0.1"
+user = "root"
+password = ""
+port = 3306
+
+[to]
+host = "127.0.0.1"
+user = "root"
+password = ""
+port = 4000
+```
+
+Start Syncer:
+
+```bash
+./bin/syncer -config config.toml
+
+2016/10/27 15:22:01 binlogsyncer.go:226: [info] begin to sync binlog from position (mysql-bin.000003, 1280)
+2016/10/27 15:22:01 binlogsyncer.go:130: [info] register slave for master server 127.0.0.1:3306
+2016/10/27 15:22:01 binlogsyncer.go:552: [info] rotate to (mysql-bin.000003, 1280)
+2016/10/27 15:22:01 syncer.go:549: [info] rotate binlog to (mysql-bin.000003, 1280)
+```
+
+### 3. Insert data into MySQL
+
+```sql
+INSERT INTO t1 VALUES (4, 4), (5, 5);
+```
+
+### 4. Log in to TiDB and view the data
+
+```bash
+mysql -h127.0.0.1 -P4000 -uroot -p
+```
+
+```sql
+mysql> select * from t1;
++----+------+
+| id | age  |
++----+------+
+|  1 |    1 |
+|  2 |    2 |
+|  3 |    3 |
+|  4 |    4 |
+|  5 |    5 |
++----+------+
+```
+
+Syncer outputs the current synchronized data statistics every 30 seconds:
+
+```bash
+2017/06/08 01:18:51 syncer.go:934: [info] [syncer]total events = 15, total tps = 130, recent tps = 4,
+master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74,
+syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-17
+2017/06/08 01:19:21 syncer.go:934: [info] [syncer]total events = 15, total tps = 191, recent tps = 2,
+master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74,
+syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-35
+```
+
+The update in MySQL is automatically synchronized to TiDB. 
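The wildcard matching described in the `route-rules` comments above (a trailing `*` matching zero or more characters, `?` matching exactly one) can be sketched in a few lines of Python. This is only an illustration of the documented matching rules, not Syncer's actual implementation, and the helper names are hypothetical:

```python
import re

def compile_pattern(pattern):
    # '*' matches zero or more characters (the docs allow it only at the end);
    # '?' matches exactly one character; everything else is literal.
    parts = []
    for ch in pattern:
        if ch == "*":
            parts.append(".*")
        elif ch == "?":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$")

def route(rules, schema, table):
    # Return the (target-schema, target-table) of the first matching rule.
    for rule in rules:
        if (compile_pattern(rule["pattern-schema"]).match(schema)
                and compile_pattern(rule["pattern-table"]).match(table)):
            return rule["target-schema"], rule["target-table"]
    return schema, table  # no rule matched: keep the original names

rules = [
    {"pattern-schema": "route_*", "pattern-table": "abc_*",
     "target-schema": "route", "target-table": "abc"},
    {"pattern-schema": "order_2017", "pattern-table": "2017_??",
     "target-schema": "order", "target-table": "order_2017"},
]
print(route(rules, "route_2017", "abc_01"))   # ('route', 'abc')
print(route(rules, "order_2017", "2017_01"))  # ('order', 'order_2017')
```

With such rules, all sharded `abc_*` tables under `route_*` schemas map to the single downstream table `route.abc`, which is the merging behavior the sharding section below relies on.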
+
+## Description of the Syncer configuration
+
+### Specify the database to be synchronized
+
+This section describes the priority of parameters when you use Syncer to synchronize the database.
+
+- To use the route-rules, see [Support for synchronizing data from sharded tables](#support-for-synchronizing-data-from-sharded-tables).
+- Priority: replicate-do-db --> replicate-do-table --> replicate-ignore-db --> replicate-ignore-table
+
+```toml
+# Specify the "ops" database to be synchronized.
+# Specify to synchronize the databases starting with "ti".
+replicate-do-db = ["ops","~^ti.*"]
+
+# The "china" database includes multiple tables such as guangzhou, shanghai and beijing. You only need to synchronize the shanghai and beijing tables.
+# Specify to synchronize the shanghai table in the "china" database.
+[[replicate-do-table]]
+db-name ="china"
+tbl-name = "shanghai"
+
+# Specify to synchronize the beijing table in the "china" database.
+[[replicate-do-table]]
+db-name ="china"
+tbl-name = "beijing"
+
+# The "ops" database includes multiple tables such as ops_user, ops_admin and weekly. You only need to synchronize the ops_user table.
+# Because replicate-do-db has a higher priority than replicate-do-table, setting only the ops_user table to be synchronized does not take effect here. In fact, the whole "ops" database is synchronized.
+[[replicate-do-table]]
+db-name ="ops"
+tbl-name = "ops_user"
+
+# The "history" database includes multiple tables such as 2017_01 2017_02 ... 2017_12 and 2016_01 2016_02 ... 2016_12. You only need to synchronize the tables of 2017.
+[[replicate-do-table]]
+db-name ="history"
+tbl-name = "~^2017_.*"
+
+# Ignore the "ops" and "fault" databases in synchronization.
+# Ignore the databases starting with "www" in synchronization.
+# Because replicate-do-db has a higher priority than replicate-ignore-db, ignoring the "ops" database here does not take effect in synchronization. 
+replicate-ignore-db = ["ops","fault","~^www"]
+
+# The "fault" database includes multiple tables such as faults, user_feedback and ticket.
+# Ignore the user_feedback table in synchronization.
+# Because replicate-ignore-db has a higher priority than replicate-ignore-table, setting only the user_feedback table to be ignored does not take effect here. In fact, the whole "fault" database is ignored.
+[[replicate-ignore-table]]
+db-name = "fault"
+tbl-name = "user_feedback"
+
+# The "order" database includes multiple tables such as 2017_01 2017_02 ... 2017_12 and 2016_01 2016_02 ... 2016_12. You need to ignore the tables of 2016.
+[[replicate-ignore-table]]
+db-name ="order"
+tbl-name = "~^2016_.*"
+```
+
+### Support for synchronizing data from sharded tables
+
+You can use Syncer to import data from sharded tables into one table within one database according to the `route-rules`. But before synchronizing, you need to check:
+
+- Whether the sharding rules can be represented using the `route-rules` syntax.
+- Whether the sharded tables contain unique increasing primary keys, or whether conflicts exist in the unique indexes or the primary keys after the combination.
+
+Currently, the support for DDL is still in progress.
+
+![syncer sharding](../media/syncer_sharding.png)
+
+#### Usage of synchronizing data from sharded tables
+
+1. Start Syncer for all MySQL instances and configure the route-rules.
+2. In scenarios that use replicate-do-db & replicate-ignore-db together with route-rules, you also need to specify the target-schema & target-table content in route-rules.
+
+```toml
+# The scenarios are as follows:
+# MySQL instance A includes multiple databases such as order_2016 and history_2016.
+# MySQL instance B includes multiple databases such as order_2017 and history_2017.
+# Specify to synchronize order_2016 in instance A; the data tables are 2016_01 2016_02 ... 2016_12.
+# Specify to synchronize order_2017 in instance B; the data tables are 2017_01 2017_02 ... 
2017_12
+# Use order_id as the primary key in the table, and the primary keys among data do not conflict.
+# Ignore the history_2016 and history_2017 databases in synchronization.
+# The target database is "order" and the target data tables are order_2017 and order_2016.
+
+# After Syncer gets the upstream data, if it finds that route-rules are enabled, it first combines databases and tables, and then determines do-db & do-table.
+# You need to configure the databases to be synchronized, which is required when you determine the target-schema & target-table.
+[[replicate-do-table]]
+db-name ="order"
+tbl-name = "order_2016"
+
+[[replicate-do-table]]
+db-name ="order"
+tbl-name = "order_2017"
+
+[[route-rules]]
+pattern-schema = "order_2016"
+pattern-table = "2016_??"
+target-schema = "order"
+target-table = "order_2016"
+
+[[route-rules]]
+pattern-schema = "order_2017"
+pattern-table = "2017_??"
+target-schema = "order"
+target-table = "order_2017"
+```
+
+### Check before importing data using Syncer
+
+1. Check the `server-id` of the source database.
+
+    - Check the `server-id` using the following command:
+
+        ```
+        mysql> show global variables like 'server_id';
+        +---------------+-------+
+        | Variable_name | Value |
+        +---------------+-------+
+        | server_id     | 1     |
+        +---------------+-------+
+        1 row in set (0.01 sec)
+        ```
+
+    - If the result is null or 0, Syncer cannot synchronize data.
+    - The Syncer `server-id` must be different from the MySQL `server-id`, and must be unique in the MySQL cluster.
+
+2. Check the related binlog parameters.
+
+    - Check whether the binlog is enabled in MySQL using the following command:
+
+        ```
+        mysql> show global variables like 'log_bin';
+        +---------------+-------+
+        | Variable_name | Value |
+        +---------------+-------+
+        | log_bin       | ON    |
+        +---------------+-------+
+        1 row in set (0.00 sec)
+        ```
+
+    - If the result is `log_bin = OFF`, you need to enable the binlog. 
See the [document about enabling the binlog](https://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterbaseconfig.html).
+
+3. Check whether the binlog format in MySQL is ROW.
+
+    - Check the binlog format using the following command:
+
+        ```
+        mysql> show global variables like 'binlog_format';
+        +---------------+-------+
+        | Variable_name | Value |
+        +---------------+-------+
+        | binlog_format | ROW   |
+        +---------------+-------+
+        1 row in set (0.00 sec)
+        ```
+
+    - If the binlog format is not ROW, set it to ROW using the following command:
+
+        ```
+        mysql> set global binlog_format=ROW;
+        mysql> flush logs;
+        Query OK, 0 rows affected (0.01 sec)
+        ```
+
+    - If there are existing connections to MySQL, it is recommended to restart MySQL or kill all the existing connections, because the new binlog format only applies to new connections.
+
+4. Check whether the MySQL `binlog_row_image` is FULL.
+
+    - Check `binlog_row_image` using the following command:
+
+        ```
+        mysql> show global variables like 'binlog_row_image';
+        +------------------+-------+
+        | Variable_name    | Value |
+        +------------------+-------+
+        | binlog_row_image | FULL  |
+        +------------------+-------+
+        1 row in set (0.01 sec)
+        ```
+
+    - If the result of `binlog_row_image` is not FULL, set it to FULL using the following command:
+
+        ```
+        mysql> set global binlog_row_image = FULL;
+        Query OK, 0 rows affected (0.01 sec)
+        ```
+
+5. Check the user privileges required by mydumper.
+
+    - To export data using mydumper, the user must have the `select` and `reload` privileges.
+    - You can add the `--no-locks` option when the operation object is RDS, to avoid requiring the `reload` privilege.
+
+6. Check the user privileges required for synchronizing the upstream and downstream data.
+
+    - The upstream MySQL synchronization account must be granted at least the following privileges:
+
+      `select, replication slave, replication client`
+
+    - For the downstream TiDB, you can temporarily use the root account with the same privileges.
+
+7. Check the GTID and POS related information. 
+
+    Check the binlog information using the following statement:
+
+    ```
+    show binlog events in 'mysql-bin.000023' from 136676560 limit 10;
+    ```
+
+## Syncer monitoring solution
+
+The `syncer` monitoring solution contains the following components:
+
+- Prometheus, an open source time series database, used to store the monitoring and performance metrics
+- Grafana, an open source project for analyzing and visualizing metrics, used to display the performance metrics
+- AlertManager, combined with Grafana to implement the alerting mechanism
+
+See the following diagram:
+
+![syncer_monitor_scheme](../media/syncer_monitor_scheme.png)
+
+### Configure Syncer monitor and alert
+
+Syncer exposes the metrics interface and requires Prometheus to actively pull the data. Take the following steps to configure the Syncer monitor and alert:
+
+1. To add the Syncer job information to Prometheus, add the following content to the configuration file of Prometheus. The monitor is enabled after you restart Prometheus.
+
+    ```yaml
+    - job_name: 'syncer_ops'  # name of the job, to distinguish the reported data
+      static_configs:
+        - targets: ['10.1.1.4:10086']  # Syncer monitoring address and port, which tells Prometheus where to pull the monitoring data of Syncer
+    ```
+
+2. To configure the Prometheus [alert](https://prometheus.io/docs/alerting/alertmanager/), add the following content to the `alert.rule` configuration file. The alert is enabled after you restart Prometheus.
+
+    ```
+    # syncer
+    ALERT syncer_status
+      IF syncer_binlog_file{node='master'} - ON(instance, job) syncer_binlog_file{node='syncer'} > 1
+      FOR 1m
+      LABELS {channels="alerts", env="test-cluster"}
+      ANNOTATIONS {
+      summary = "syncer status error",
+      description = "alert: syncer_binlog_file{node='master'} - ON(instance, job) syncer_binlog_file{node='syncer'} > 1 instance: {{ $labels.instance }} values: {{ $value }}",
+      }
+    ```
+
+#### Configure Grafana
+
+1. Log in to the Grafana Web interface. 
+
+    - The default address is: http://localhost:3000
+    - The default account name: admin
+    - The password for the default account: admin
+
+2. Import the configuration file of the Grafana dashboard.
+
+    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+
+### Description of Grafana Syncer metrics
+
+#### title: binlog events
+
+- metrics: `irate(syncer_binlog_events_total[1m])`
+- info: the statistics of the master binlog events that Syncer has synchronized, including the five major types of `query`, `rotate`, `update_rows`, `write_rows` and `delete_rows`
+
+#### title: syncer_binlog_file
+
+- metrics: `syncer_binlog_file`
+- info: the number of the master binlog file that Syncer has synchronized
+
+#### title: binlog pos
+
+- metrics: `syncer_binlog_pos`
+- info: the position in the current master binlog file that Syncer has synchronized to
+
+#### title: syncer_gtid
+
+- metrics: `syncer_gtid`
+- info: the binlog-gtid information of the current master binlog that Syncer has synchronized
+
+#### title: syncer_binlog_file
+
+- metrics: `syncer_binlog_file{node="master"} - ON(instance, job) syncer_binlog_file{node="syncer"}`
+- info: the number of binlog files by which the downstream lags behind the upstream during synchronization; the normal value is 0, which indicates real-time synchronization; a larger value indicates a larger binlog file discrepancy
+
+#### title: binlog skipped events
+
+- metrics: `irate(syncer_binlog_skipped_events_total[1m])`
+- info: the total number of SQL statements that Syncer skips when the upstream synchronizes binlog files with the downstream; you can configure the format of the SQL statements skipped by Syncer using the `skip-ddls` and `skip-dmls` parameters in the `syncer.toml` file. 
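The `syncer_binlog_file` difference panel above is essentially the gap between the numeric suffixes of the master's and Syncer's current binlog file names. A minimal Python sketch of that computation (illustrative helper names, not part of Syncer):

```python
def binlog_file_number(name):
    # "mysql-bin.000023" -> 23
    return int(name.rsplit(".", 1)[1])

def binlog_file_lag(master_file, syncer_file):
    # 0 means real-time synchronization; a larger value means a larger lag.
    return binlog_file_number(master_file) - binlog_file_number(syncer_file)

print(binlog_file_lag("mysql-bin.000023", "mysql-bin.000021"))  # 2
```

This is the same quantity the alert rule in the previous section fires on when it exceeds 1.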
+
+#### title: syncer_txn_costs_gauge_in_second
+
+- metrics: `syncer_txn_costs_gauge_in_second`
+- info: the time consumed by Syncer when it processes one batch (unit: second)
\ No newline at end of file
diff --git a/v1.0/tools/tidb-binlog-kafka.md b/v1.0/tools/tidb-binlog-kafka.md
new file mode 100755
index 0000000000000..d5717b139920f
--- /dev/null
+++ b/v1.0/tools/tidb-binlog-kafka.md
@@ -0,0 +1,414 @@
+---
+title: TiDB-Binlog user guide
+category: tool
+---
+
+# TiDB-Binlog User Guide
+
+This document describes how to deploy the Kafka version of TiDB-Binlog. If you need to deploy the local version of TiDB-Binlog, see the [TiDB-Binlog user guide for the local version](tidb-binlog.md).
+
+## About TiDB-Binlog
+
+TiDB-Binlog is a tool for enterprise users to collect binlog files for TiDB and provide real-time backup and synchronization.
+
+TiDB-Binlog supports the following scenarios:
+
+- **Data synchronization**: to synchronize TiDB cluster data to other databases
+- **Real-time backup and recovery**: to back up TiDB cluster data, and to recover in case of cluster outages
+
+## TiDB-Binlog architecture
+
+The TiDB-Binlog architecture is as follows:
+
+![TiDB-Binlog architecture](../media/tidb_binlog_kafka_architecture.png)
+
+The TiDB-Binlog cluster mainly consists of three components:
+
+### Pump
+
+Pump is a daemon that runs in the background on each TiDB host. Its main function is to record the binlog files generated by TiDB in real time and write them to disk sequentially.
+
+### Drainer
+
+Drainer collects binlog files from each Pump node, converts them into SQL statements compatible with the specified database in the commit order of the transactions in TiDB, and synchronizes them to the target database or writes them to files sequentially.
+
+### Kafka & ZooKeeper
+
+The Kafka cluster stores the binlog data written by Pump and provides the binlog data to Drainer for reading. 
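Conceptually, Drainer turns the per-Pump binlog streams into a single stream ordered by commit timestamp before generating SQL. The following Python sketch illustrates only that merge idea; it is not Drainer's actual code, and the sample data is made up:

```python
import heapq

# Each Pump delivers its binlog items already ordered by commit timestamp (commit-ts).
pump_a = [(1, "INSERT INTO t ..."), (4, "UPDATE t ...")]
pump_b = [(2, "INSERT INTO t ..."), (3, "DELETE FROM t ...")]

# Merge the streams so the downstream sees transactions in global commit order.
merged = list(heapq.merge(pump_a, pump_b, key=lambda item: item[0]))
print([ts for ts, _ in merged])  # [1, 2, 3, 4]
```

Because each input stream is already sorted, a heap-based merge preserves the global commit order without re-sorting all the data.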
+
+> **Note:** In the local version of TiDB-Binlog, the binlog is stored in files, while in the latest version, the binlog is stored using Kafka.
+
+## Install TiDB-Binlog
+
+### Download Binary for the CentOS 7.3+ platform
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-binlog-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-binlog-latest-linux-amd64.tar.gz
+cd tidb-binlog-latest-linux-amd64
+```
+
+## Deploy TiDB-Binlog
+
+### Note
+
+- You need to deploy a Pump for each TiDB server in the TiDB cluster. Currently, the TiDB server only supports writing the binlog through a UNIX socket.
+
+- When you deploy Pump manually, start the services in the order of Pump -> TiDB, and stop them in the order of TiDB -> Pump.
+
+  Set the TiDB startup parameter `binlog-socket` to the unix socket file path specified by the corresponding `socket` parameter of Pump. The final deployment architecture is as follows:
+
+  ![TiDB Pump deployment architecture](../media/tidb_pump_deployment.jpeg)
+
+- Drainer does not support the rename DDL operation on tables of the ignored schemas (schemas in the filter list).
+
+- To start Drainer in an existing TiDB cluster, usually you need to do a full backup, get the savepoint, import the full backup, and then start Drainer to synchronize from the savepoint.
+
+  To guarantee the integrity of data, perform the following operations 10 minutes after Pump is started:
+
+    - Use the `generate_binlog_position` tool of the [tidb-tools](https://github.com/pingcap/tidb-tools) project to generate the Drainer savepoint file. See the [README description](https://github.com/pingcap/tidb-tools/blob/master/generate_binlog_position/README.md) for how to compile and use this tool. 
You can also download this tool from [generate_binlog_position](https://download.pingcap.org/generate_binlog_position-latest-linux-amd64.tar.gz) and use `sha256sum` to verify the [sha256](https://download.pingcap.org/generate_binlog_position-latest-linux-amd64.sha256) file.
+    - Do a full backup. For example, back up TiDB using mydumper.
+    - Import the full backup to the target system.
+    - By default, the Kafka version of Drainer stores the savepoint in the `checkpoint` table of the downstream `tidb_binlog` database. If no valid data exists in the `checkpoint` table, configure `initial-commit-ts` to make Drainer work from a specified position when it is started:
+
+        ```
+        bin/drainer --config=conf/drainer.toml --data-dir=${drainer_savepoint_dir}
+        ```
+
+- If Drainer outputs `pb` files, you need to set the following parameters in the configuration file:
+
+  ```
+  [syncer]
+  db-type = "pb"
+  disable-dispatch = true
+
+  [syncer.to]
+  dir = "/path/pb-dir"
+  ```
+
+- Deploy the Kafka and ZooKeeper cluster before deploying TiDB-Binlog. Make sure that Kafka is version 0.9 or later.
+
+#### Recommended Kafka cluster configuration
+
+|Name|Number|Memory size|CPU|Hard disk|
+|:---:|:---:|:---:|:---:|:---:|
+|Kafka|3+|16G|8+|2+ 1TB|
+|ZooKeeper|3+|8G|4+|2+ 300G|
+
+#### Recommended Kafka parameter configuration
+
+- `auto.create.topics.enable = true`: if no topic exists, Kafka automatically creates a topic on the broker.
+- `broker.id`: a required parameter to identify the Kafka cluster. Keep the parameter value unique. For example, `broker.id = 1`.
+- `fs.file-max = 1000000`: Kafka uses a lot of files and network sockets. It is recommended to change this operating system parameter to 1000000. Change the value using `vi /etc/sysctl.conf`.
+
+### Deploy Pump using TiDB-Ansible
+
+- If you have not deployed the Kafka cluster, use the [Kafka-Ansible](https://github.com/pingcap/thirdparty-ops/tree/master/kafka-ansible) to deploy. 
+- When you deploy the TiDB cluster using [TiDB-Ansible](https://github.com/pingcap/tidb-ansible), edit the `tidb-ansible/inventory.ini` file, set `enable_binlog = True`, and configure the `zookeeper_addrs` variable as the ZooKeeper address of the Kafka cluster. In this way, Pump is deployed while you deploy the TiDB cluster. + +Configuration example: + +``` +# binlog trigger +enable_binlog = True +# zookeeper address of kafka cluster, example: +# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181" +zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181" +``` + +### Deploy Pump using Binary + +A usage example: + +Assume that we have three PDs, three ZooKeepers, and one TiDB. The information of each node is as follows: + +``` +TiDB="192.168.0.10" +PD1="192.168.0.16" +PD2="192.168.0.15" +PD3="192.168.0.14" +ZK1="192.168.0.13" +ZK2="192.168.0.12" +ZK3="192.168.0.11" +``` + +Deploy Drainer/Pump on the machine with the IP address "192.168.0.10". + +The IP address of the corresponding PD cluster is "192.168.0.16,192.168.0.15,192.168.0.14". + +The ZooKeeper IP address of the corresponding Kafka cluster is "192.168.0.13,192.168.0.12,192.168.0.11". + +This example describes how to use Pump/Drainer. + +1. 
Description of Pump command line options
+
+    ```
+    Usage of Pump:
+      -L string
+            log level: debug, info, warn, error, fatal (default "info")
+      -V
+            to print the Pump version info
+      -addr string
+            the RPC address that Pump provides service (-addr="192.168.0.10:8250")
+      -advertise-addr string
+            the RPC address that Pump provides external service (-advertise-addr="192.168.0.10:8250")
+      -config string
+            the file path of the Pump configuration; if you specify the configuration file, Pump reads it first; if the same configuration item also exists in the command line arguments, Pump uses the command line value to override the one in the configuration file
+      -data-dir string
+            the path of storing Pump data
+      -enable-tolerant
+            when tolerant is enabled, Pump does not return an error if it fails to write the binlog (default true)
+      -zookeeper-addrs string (-zookeeper-addrs="192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181")
+            the ZooKeeper address; this option gets the Kafka address from ZooKeeper
+      -gc int
+            the maximum number of days that the binlog is retained (default 7); 0 means retaining the binlog permanently
+      -heartbeat-interval int
+            the interval between heartbeats that Pump sends to PD (unit: second)
+      -log-file string
+            the path of the log file
+      -log-rotate string
+            the log file rotating frequency (hour/day)
+      -metrics-addr string
+            the Prometheus Pushgateway address; leaving it empty disables Prometheus push
+      -metrics-interval int
+            the frequency of reporting monitoring information (default 15, unit: second)
+      -pd-urls string
+            the node address of the PD cluster (-pd-urls="http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379")
+      -socket string
+            the monitoring address of the unix socket service (default "unix:///tmp/pump.sock")
+    ```
+
+2. Pump configuration file
+
+    ```toml
+    # Pump configuration. 
+
+    # the RPC address that Pump provides service (default "192.168.0.10:8250")
+    addr = "192.168.0.10:8250"
+
+    # the RPC address that Pump provides external service (default "192.168.0.10:8250")
+    advertise-addr = ""
+
+    # an integer value to control the expiry of the binlog data, indicating how long (in days) the binlog data is stored
+    # (0 means the binlog data is never removed)
+    gc = 7
+
+    # the path of storing Pump data
+    data-dir = "data.pump"
+
+    # the ZooKeeper address; you can set this option to get the Kafka address from ZooKeeper
+    zookeeper-addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
+
+    # the interval between heartbeats that Pump sends to PD (unit: second)
+    heartbeat-interval = 3
+
+    # the node address of the PD cluster
+    pd-urls = "http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379"
+
+    # the monitoring address of the unix socket service (default "unix:///tmp/pump.sock")
+    socket = "unix:///tmp/pump.sock"
+    ```
+
+3. Startup example
+
+    ```bash
+    ./bin/pump -config pump.toml
+    ```
+
+### Deploy Drainer using Binary
+
+1. 
Description of Drainer command line arguments
+
+    ```
+    Usage of Drainer:
+      -L string
+            log level: debug, info, warn, error, fatal (default "info")
+      -V
+            to print the Drainer version info
+      -addr string
+            the address that Drainer provides service (default "192.168.0.10:8249")
+      -c int
+            the downstream concurrency number for synchronization; a bigger value means better throughput performance (default 1)
+      -config string
+            the file path of the Drainer configuration; if you specify the configuration file, Drainer reads it first; if the same configuration item also exists in the command line arguments, Drainer uses the command line value to override the one in the configuration file
+      -data-dir string
+            the path of storing Drainer data (default "data.drainer")
+      -zookeeper-addrs string (-zookeeper-addrs="192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181")
+            the ZooKeeper address; you can set this option to get the Kafka address from ZooKeeper
+      -dest-db-type string
+            the downstream service type of Drainer (default "mysql")
+      -detect-interval int
+            the interval of detecting Pump's status from PD (default 10, unit: second)
+      -disable-dispatch
+            whether to disable dispatching the SQL statements in a single binlog; if you set the value to true, each binlog is restored into a single transaction and synchronized in the binlog order (if the downstream service type is "mysql", set the value to false)
+      -ignore-schemas string
+            the DB filtering list (default "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql,test"); the rename DDL operation is not supported on the tables of the ignored schemas
+      -initial-commit-ts (default 0)
+            if Drainer does not have the related breakpoint information, you can use this option to configure the breakpoint information
+      -log-file string
+            the path of the log file
+      -log-rotate string
+            the log file rotating frequency (hour/day)
+      -metrics-addr string
+            the Prometheus Pushgateway address; leaving it empty disables Prometheus push
+      -metrics-interval int
+            
the frequency of reporting monitoring information (default 15, unit: second) + -pd-urls string + the node address of the PD cluster (-pd-urls="http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379") + -txn-batch int + the number of SQL statements in a single transaction that is output to the downstream database (default 1) + ``` + +2. Drainer configuration file + + ```toml + # Drainer configuration + + # the address that Drainer provides service ("192.168.0.10:8249") + addr = "192.168.0.10:8249" + + # the interval of detecting Pump's status from PD (default 10, unit: second) + detect-interval = 10 + + # the path of storing Drainer data (default "data.drainer") + data-dir = "data.drainer" + + # the ZooKeeper address; you can use this option to get the Kafka address from ZooKeeper + zookeeper-addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181" + + # the node address of the PD cluster + pd-urls = "http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379" + + # the path of the log file + log-file = "drainer.log" + + # Syncer configuration. 
+
+    [syncer]
+
+    # the DB filtering list (default "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql,test")
+    # the rename DDL operation is not supported on the tables of the ignored schemas
+    ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"
+
+    # the number of SQL statements in a single transaction that is output to the downstream database (default 1)
+    txn-batch = 1
+
+    # the downstream concurrency number for synchronization; a bigger value means better throughput performance (default 1)
+    worker-count = 1
+
+    # whether to disable dispatching the SQL statements in a single binlog;
+    # if you set the value to true, each binlog is restored into a single transaction and synchronized in the binlog order (if the downstream service type is "mysql", set the value to false)
+    disable-dispatch = false
+
+    # the downstream service type of Drainer (default "mysql")
+    # valid values: "mysql", "pb"
+    db-type = "mysql"
+
+    # replicate-do-db has a higher priority than replicate-do-table if they have the same db name.
+    # Regular expressions are supported; a regular expression starts with '~'.
+
+    # replicate-do-db = ["~^b.*","s1"]
+
+    # [[syncer.replicate-do-table]]
+    # db-name ="test"
+    # tbl-name = "log"
+
+    # [[syncer.replicate-do-table]]
+    # db-name ="test"
+    # tbl-name = "~^a.*"
+
+    # server parameters of the downstream database when the db-type is set to "mysql"
+    [syncer.to]
+    host = "192.168.0.10"
+    user = "root"
+    password = ""
+    port = 3306
+
+    # the directory of the binlog file when the db-type is set to "pb"
+    # [syncer.to]
+    # dir = "data.drainer"
+    ```
+
+3. Startup example
+
+    ```bash
+    ./bin/drainer -config drainer.toml
+    ```
+
+## Download PbReader (Linux)
+
+PbReader parses the `pb` files generated by Drainer and translates them into SQL statements.
+
+CentOS 7+
+
+```bash
+# Download the PbReader package.
+wget http://download.pingcap.org/pb_reader-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/pb_reader-latest-linux-amd64.sha256
+
+# Check the file integrity. 
If the result is OK, the file is correct.
+
+# Extract the package.
+tar -xzf pb_reader-latest-linux-amd64.tar.gz
+cd pb_reader-latest-linux-amd64
+```
+
+A PbReader usage example:
+
+```bash
+./bin/pbReader -binlog-file=binlog-0000000000000000
+```
+
+## Monitor TiDB-Binlog
+
+This section introduces how to monitor TiDB-Binlog's status and performance, and display the metrics using Prometheus and Grafana.
+
+### Configure Pump/Drainer
+
+For the Pump service deployed using Ansible, the metrics are already set in the startup parameters.
+
+When you start Drainer, set the two parameters of `--metrics-addr` and `--metrics-interval`. Set `--metrics-addr` as the address of Push Gateway. Set `--metrics-interval` as the frequency of push (default 15 seconds).
+
+### Configure Grafana
+
+#### Create a Prometheus data source
+
+1. Log in to the Grafana Web interface.
+
+    - The default address is: [http://localhost:3000](http://localhost:3000)
+
+    - The default account name: admin
+
+    - The password for the default account: admin
+
+2. Click the Grafana logo to open the sidebar menu.
+
+3. Click "Data Sources" in the sidebar.
+
+4. Click "Add data source".
+
+5. Specify the data source information:
+
+    - Specify the name for the data source.
+    - For Type, select Prometheus.
+    - For Url, specify the Prometheus address.
+    - Specify other fields as needed.
+
+6. Click "Add" to save the new data source.
+
+#### Create a Grafana dashboard
+
+1. Click the Grafana logo to open the sidebar menu.
+
+2. On the sidebar menu, click "Dashboards" -> "Import" to open the "Import Dashboard" window.
+
+3. Click "Upload .json File" to upload a JSON file (Download [TiDB Grafana Config](https://grafana.com/tidb)).
+
+4. Click "Save & Open". A Prometheus dashboard is created. 
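Before importing, it can help to sanity-check that the downloaded dashboard file is valid JSON, since a truncated download fails only at import time. A minimal hypothetical helper (not part of Grafana or TiDB tooling):

```python
import json

def check_dashboard_json(text):
    # A Grafana dashboard export is a JSON object; fail early on a corrupt download.
    dash = json.loads(text)
    if not isinstance(dash, dict):
        raise ValueError("dashboard file must contain a JSON object")
    return dash.get("title", "untitled")

sample = '{"title": "TiDB-Binlog", "rows": []}'
print(check_dashboard_json(sample))  # TiDB-Binlog
```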
\ No newline at end of file
diff --git a/v1.0/tools/tidb-binlog.md b/v1.0/tools/tidb-binlog.md
new file mode 100755
index 0000000000000..fccf4b3db281a
--- /dev/null
+++ b/v1.0/tools/tidb-binlog.md
@@ -0,0 +1,345 @@
+---
+title: TiDB-Binlog user guide
+category: tool
+---
+
+# TiDB-Binlog User Guide
+
+## About TiDB-Binlog
+
+TiDB-Binlog is a tool for enterprise users to collect binlog files for TiDB and provide real-time backup and synchronization.
+
+TiDB-Binlog supports the following scenarios:
+
+- **Data synchronization**: to synchronize TiDB cluster data to other databases
+- **Real-time backup and recovery**: to back up TiDB cluster data, and to recover in case of cluster outages
+
+## TiDB-Binlog architecture
+
+The TiDB-Binlog architecture is as follows:
+
+![TiDB-Binlog architecture](../media/architecture.jpeg)
+
+The TiDB-Binlog cluster mainly consists of two components:
+
+### Pump
+
+Pump is a daemon that runs in the background on each TiDB host. Its main function is to record the binlog files generated by TiDB in real time and write them to disk sequentially.
+
+### Drainer
+
+Drainer collects binlog files from each Pump node, converts them into SQL statements compatible with the specified database in the commit order of the transactions in TiDB, and synchronizes them to the target database or writes them to files sequentially.
+
+## Install TiDB-Binlog
+
+### Download Binary for the CentOS 7.3+ platform
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-binlog-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-binlog-latest-linux-amd64.tar.gz
+cd tidb-binlog-latest-linux-amd64
+```
+
+### Deploy TiDB-Binlog
+
+- It is recommended to deploy Pump using Ansible. 
- Build a new TiDB cluster with a startup order of pd-server -> tikv-server -> pump -> tidb-server -> drainer.
    - Edit the `tidb-ansible inventory.ini` file:

      ```ini
      enable_binlog = True
      ```

    - Run `ansible-playbook deploy.yml`
    - Run `ansible-playbook start.yml`

- Deploy Binlog for an existing TiDB cluster.
    - Edit the `tidb-ansible inventory.ini` file:

      ```ini
      enable_binlog = True
      ```

    - Run `ansible-playbook rolling_update.yml`

### Note

- You need to deploy a Pump for each TiDB server in a TiDB cluster. Currently, the TiDB server only supports writing the binlog via a UNIX socket.

    Set the startup parameter `binlog-socket` of TiDB to the UNIX socket file path specified by the corresponding `socket` parameter of Pump. The final deployment architecture is as follows:

    ![TiDB pump deployment architecture](../media/tidb_pump_deployment.jpeg)

- Currently, you need to deploy Drainer manually.

- Drainer does not support renaming DDL on the tables of the ignored schemas (schemas in the filter list).

- To start Drainer in an existing TiDB cluster, usually you need to do a full backup, get the savepoint, import the full backup, and start Drainer to synchronize from the savepoint.

- To guarantee the integrity of data, perform the following operations 10 minutes after Pump is started:

    - Run Drainer in the `gen-savepoint` mode and generate the Drainer savepoint file:

        ```
        bin/drainer -gen-savepoint --data-dir=${drainer_savepoint_dir} --pd-urls=${pd_urls}
        ```

    - Do a full backup. For example, back up TiDB using mydumper.
    - Import the full backup to the target system.
    - Set the file path of the savepoint and start Drainer:

        ```
        bin/drainer --config=conf/drainer.toml --data-dir=${drainer_savepoint_dir}
        ```

- If Drainer outputs `pb`, you need to set the following parameters in the configuration file.

    ```
    [syncer]
    db-type = "pb"
    disable-dispatch = true

    [syncer.to]
    dir = "/path/pb-dir"
    ```

### Examples and parameters explanation

#### Pump

Example

```bash
./bin/pump -config pump.toml
```

Parameters Explanation

```
Usage of Pump:
-L string
    log level: debug, info, warn, error, fatal (default "info")
-V
    print Pump version info
-addr string
    addr (i.e. 'host:port') to listen on for client traffic (default "127.0.0.1:8250")
-advertise-addr string
    addr (i.e. 'host:port') to advertise to the public
-config string
    path to the Pump configuration file
-data-dir string
    path to store binlog data
-gc int
    recycle binlog files older than gc days, zero means never recycle (default 7)
-heartbeat-interval int
    number of seconds between heartbeat ticks (default 2)
-log-file string
    log file path
-log-rotate string
    log file rotate type, hour/day
-metrics-addr string
    Prometheus pushgateway address; leaving it empty disables Prometheus push
-metrics-interval int
    Prometheus client push interval in seconds, set "0" to disable Prometheus push (default 15)
-pd-urls string
    a comma separated list of the PD endpoints (default "http://127.0.0.1:2379")
-socket string
    unix socket addr to listen on for client traffic
```

Configuration file

```
# Pump Configuration.

# addr (i.e. 'host:port') to listen on for client traffic
addr = "127.0.0.1:8250"

# addr (i.e. 'host:port') to advertise to the public
advertise-addr = ""

# an integer value to control the expiry date of the binlog data, indicating for how long (in days) the binlog data is stored

# (0 means the binlog data is never removed; default: 7)
gc = 7

# path to the directory where Pump's data is stored
data-dir = "data.pump"

# number of seconds between heartbeat ticks (default: 2)
heartbeat-interval = 2

# a comma separated list of PD endpoints
pd-urls = "http://127.0.0.1:2379"

# unix socket addr to listen on for client traffic
socket = "unix:///tmp/pump.sock"
```

#### Drainer

Example

```bash
./bin/drainer -config drainer.toml
```

Parameters Explanation

```
Usage of Drainer:
-L string
    log level: debug, info, warn, error, fatal (default "info")
-V
    print version info
-addr string
    addr (i.e. 'host:port') to listen on for Drainer connections (default "127.0.0.1:8249")
-c int
    parallel worker count (default 1)
-config string
    path to the configuration file
-data-dir string
    Drainer data directory path (default "data.drainer")
-dest-db-type string
    target db type: mysql or pb; see the syncer section in conf/drainer.toml (default "mysql")
-detect-interval int
    the interval time (in seconds) of detecting Pumps' status (default 10)
-disable-dispatch
    disable dispatching the SQL statements in one binlog; if set to true, work-count and txn-batch are useless
-gen-savepoint
    generate the savepoint from the cluster
-ignore-schemas string
    disable synchronizing these schemas (default "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql")
-log-file string
    log file path
-log-rotate string
    log file rotate type, hour/day
-metrics-addr string
    Prometheus pushgateway address; leaving it empty disables Prometheus push
-metrics-interval int
    Prometheus client push interval in seconds, set "0" to disable Prometheus push (default 15)
-pd-urls string
    a comma separated list of PD endpoints (default "http://127.0.0.1:2379")
-txn-batch int
    number of binlog events in a transaction batch (default 1)
```

Configuration file

```
# Drainer Configuration

# addr (i.e.
'host:port') to listen on for Drainer connections
addr = "127.0.0.1:8249"

# the interval time (in seconds) of detecting Pumps' status
detect-interval = 10

# Drainer meta data directory path
data-dir = "data.drainer"

# a comma separated list of PD endpoints
pd-urls = "http://127.0.0.1:2379"

# the file path of the log
log-file = "drainer.log"

# syncer Configuration
[syncer]

# disable synchronizing these schemas
ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"

# number of binlog events in a transaction batch
txn-batch = 1

# worker count to execute binlogs
worker-count = 1

disable-dispatch = false

# downstream storage, equal to --dest-db-type
# valid values are "mysql", "pb"
db-type = "mysql"

# replicate-do-db takes priority over replicate-do-table if they have the same db name.
# Regular expressions are supported; a value starting with '~' declares a regular expression.
# replicate-do-db = ["~^b.*","s1"]
# [[syncer.replicate-do-table]]
# db-name ="test"
# tbl-name = "log"

# [[syncer.replicate-do-table]]
# db-name ="test"
# tbl-name = "~^a.*"

# the downstream mysql protocol database
[syncer.to]
host = "127.0.0.1"
user = "root"
password = ""
port = 3306

# uncomment this if you want to use pb as the db-type
# [syncer.to]
# dir = "data.drainer"
```

## Monitor TiDB-Binlog

This section introduces how to monitor TiDB-Binlog's status and performance, and display the metrics using Prometheus and Grafana.

### Configure Pump/Drainer

For the Pump service deployed using Ansible, the metrics are already set in its startup parameters.

When you start Drainer, set the `--metrics-addr` and `--metrics-interval` parameters. Set `--metrics-addr` to the address of Push Gateway. Set `--metrics-interval` to the push frequency (default: 15 seconds).

### Configure Grafana

#### Create a Prometheus data source

1. Log in to the Grafana Web interface.
+ + - The default address is: [http://localhost:3000](http://localhost:3000) + + - The default account name: admin + + - The password for the default account: admin + +2. Click the Grafana logo to open the sidebar menu. + +3. Click "Data Sources" in the sidebar. + +4. Click "Add data source". + +5. Specify the data source information: + + - Specify the name for the data source. + + - For Type, select Prometheus. + + - For Url, specify the Prometheus address. + + - Specify other fields as needed. + +6. Click "Add" to save the new data source. + +#### Create a Grafana dashboard + +1. Click the Grafana logo to open the sidebar menu. + +2. On the sidebar menu, click "Dashboards" -> "Import" to open the "Import Dashboard" window. + +3. Click "Upload .json File" to upload a JSON file (Download [TiDB Grafana Config](https://grafana.com/tidb)). + +4. Click "Save & Open". + +5. A Prometheus dashboard is created. \ No newline at end of file diff --git a/v1.0/trouble-shooting.md b/v1.0/trouble-shooting.md new file mode 100755 index 0000000000000..d928fb5bf2348 --- /dev/null +++ b/v1.0/trouble-shooting.md @@ -0,0 +1,106 @@ +--- +title: TiDB Cluster Troubleshooting Guide +category: advanced +--- + +# TiDB Cluster Troubleshooting Guide + +You can use this guide to help you diagnose and solve basic problems while using TiDB. If your problem is not resolved, please collect the following information and [create an issue](https://github.com/pingcap/tidb/issues/new): + +- The exact error message and the operations while the error occurs +- The state of all the components +- The `error` / `fatal` / `panic` information in the log of the component that reports the error +- The configuration and deployment topology +- The TiDB component related issue in `dmesg` + +For other information, see [Frequently Asked Questions (FAQ)](FAQ.md). + +## Cannot connect to the database + +1. Make sure all the services are started, including `tidb-server`, `pd-server`, and `tikv-server`. +2. 
Use the `ps` command to check if all the processes are running.

    - If a certain process is not running, see the following corresponding sections to diagnose and solve the issue.

    If all the processes are running, check the `tidb-server` log to see if the following messages are displayed:

    - InformationSchema is out of date: This message is displayed if the `tikv-server` cannot be connected. Check the state and log of `pd-server` and `tikv-server`.
    - panic: This message is displayed if there is an issue with the program. Please provide the detailed panic log and [create an issue](https://github.com/pingcap/tidb/issues/new).

3. If the data is cleared and the services are re-deployed, make sure that:

    - All the data in `tikv-server` and `pd-server` are cleared.

        The specific data is stored in `tikv-server` and the metadata is stored in `pd-server`. If only one of the two servers is cleared, the data will be inconsistent.

    - After the data in `pd-server` and `tikv-server` are cleared and the `pd-server` and `tikv-server` are restarted, the `tidb-server` must be restarted too.

        The cluster ID is randomly allocated when the `pd-server` is initialized. So when the cluster is re-deployed, the cluster ID changes and you need to restart the `tidb-server` to get the new cluster ID.

## Cannot start `tidb-server`

See the following for the situations when the `tidb-server` cannot be started:

- Error in the startup parameters.

    See the [TiDB configuration and options](op-guide/configuration.md#tidb).

- The port is occupied.

    Use the `lsof -i:port` command to show all the networking related to a given port and make sure the port to start the `tidb-server` is not occupied.

+ Cannot connect to `pd-server`.

    - Check if the network between TiDB and PD is running smoothly, including whether the network can be pinged or if there is any issue with the firewall configuration.
    - If there is no issue with the network, check the state and log of the `pd-server` process.

## Cannot start `tikv-server`

See the following for the situations when the `tikv-server` cannot be started:

- Error in the startup parameters: See the [TiKV configuration and options](op-guide/configuration.md#tikv).
- The port is occupied: Use the `lsof -i:port` command to show all the networking related to a given port and make sure the port to start the `tikv-server` is not occupied.
+ Cannot connect to `pd-server`.
    - Check if the network between TiDB and PD is running smoothly, including whether the network can be pinged or if there is any issue with the firewall configuration.
    - If there is no issue with the network, check the state and log of the `pd-server` process.
- The file is occupied.

    Do not start two TiKV instances on the same database file directory.

## Cannot start `pd-server`

See the following for the situations when the `pd-server` cannot be started:

- Error in the startup parameters.

    See the [PD configuration and options](op-guide/configuration.md#placement-driver-pd).

- The port is occupied.

    Use the `lsof -i:port` command to show all the networking related to a given port and make sure the port to start the `pd-server` is not occupied.

## The TiDB/TiKV/PD process aborts unexpectedly

- Is the process started in the foreground? The process might exit because the client aborts.

- Is `nohup+&` run in the command line? This might cause the process to abort because it receives the hup signal. It is recommended to write and run the startup command in a script.

## TiDB panic

Please provide the panic log and [create an issue](https://github.com/pingcap/tidb/issues/new).

## The connection is rejected

Make sure the network parameters of the operating system are correct, including but not limited to:

- The port in the connection string is consistent with the `tidb-server` starting port.
+- The firewall is configured correctly. + +## Open too many files + +Before starting the process, make sure the result of `ulimit -n` is large enough. It is recommended to set the value to `unlimited` or larger than `1000000`. + +## Database access times out and the system load is too high + +Provide the following information: + ++ The deployment topology + - How many `tidb-server`/`pd-server`/`tikv-server` instances are deployed? + - How are these instances distributed in the machines? ++ The hardware configuration of the machines where these instances are deployed: + - The number of CPU cores + - The size of the memory + - The type of the disk (SSD or Hard Drive Disk) + - Are they physical machines or virtual machines? +- Are there other services besides the TiDB cluster? +- Are the `pd-server`s and `tikv-server`s deployed separately? +- What is the current operation? +- Check the CPU thread name using the `top -H` command. +- Are there any exceptions in the network or IO monitoring data recently? diff --git a/v2.0/.gitignore b/v2.0/.gitignore new file mode 100755 index 0000000000000..069892e5fd170 --- /dev/null +++ b/v2.0/.gitignore @@ -0,0 +1,10 @@ +# Created by .ignore support plugin (hsz.mobi) +### Example user template template +### Example user template + +# IntelliJ project files +.idea/ +*.iml +out +gen +.DS_Store diff --git a/v2.0/FAQ.md b/v2.0/FAQ.md new file mode 100755 index 0000000000000..caf43da05f9ad --- /dev/null +++ b/v2.0/FAQ.md @@ -0,0 +1,1057 @@ +--- +title: TiDB FAQ +summary: Learn about the most frequently asked questions (FAQs) relating to TiDB. +category: faq +--- + +# TiDB FAQ + +This document lists the Most Frequently Asked Questions about TiDB. + +## About TiDB + +### TiDB introduction and architecture + +#### What is TiDB? + +TiDB is a distributed SQL database that features in horizontal scalability, high availability and consistent distributed transactions. 
It also enables you to use MySQL's SQL syntax and protocol to manage and retrieve data.

#### What is TiDB's architecture?

The TiDB cluster has three components: the TiDB server, the PD (Placement Driver) server, and the TiKV server. For more details, see [TiDB architecture](overview.md#tidb-architecture).

#### Is TiDB based on MySQL?

No. TiDB supports MySQL syntax and protocol, but it is a new open source database that is developed and maintained by PingCAP, Inc.

#### What is the respective responsibility of TiDB, TiKV and PD (Placement Driver)?

- TiDB works as the SQL computing layer, mainly responsible for parsing SQL, specifying the query plan, and generating the executor.
- TiKV works as a distributed Key-Value storage engine, used to store the real data. In short, TiKV is the storage engine of TiDB.
- PD works as the cluster manager of TiDB, which manages TiKV metadata, allocates timestamps, and makes decisions for data placement and load balancing.

#### Is it easy to use TiDB?

Yes, it is. When all the required services are started, you can use TiDB as easily as a MySQL server. You can replace MySQL with TiDB to power your applications without changing a single line of code in most cases. You can also manage TiDB using the popular MySQL management tools.

#### How is TiDB compatible with MySQL?

Currently, TiDB supports the majority of MySQL 5.7 syntax, but does not support triggers, stored procedures, user-defined functions, and foreign keys. For more details, see [Compatibility with MySQL](sql/mysql-compatibility.md).

#### How is TiDB highly available?

TiDB is self-healing. All of the three components, TiDB, TiKV and PD, can tolerate failures of some of their instances. With its strong consistency guarantee, whether it's data machine failures or even downtime of an entire data center, your data can be recovered automatically. For more information, see [High availability](overview.md#high-availability).
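The high-availability answer above says each component tolerates failures of "some" of its instances; concretely, majority-based replication survives any minority of replica failures. A toy illustration of the majority rule (not TiDB code, just the arithmetic behind it):

```python
def max_tolerable_failures(replicas: int) -> int:
    """With Raft-style majority replication, a group of `replicas`
    members can lose at most a minority and stay available."""
    return (replicas - 1) // 2

def is_available(replicas: int, failed: int) -> bool:
    """The group can still commit if the survivors form a majority."""
    return replicas - failed > replicas // 2

# The common 3-replica setup survives 1 failure but not 2.
print(max_tolerable_failures(3))   # -> 1
print(is_available(3, 1))          # -> True
print(is_available(3, 2))          # -> False
print(max_tolerable_failures(5))   # -> 2
```

This is why replica counts are odd in practice: going from 3 to 4 replicas adds cost without raising the number of tolerable failures.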

#### How is TiDB strongly consistent?

TiDB uses the [Raft consensus algorithm](https://raft.github.io/) to ensure consistency among multiple replicas. At the bottom layer, TiDB uses a model of replication log + State Machine to replicate data. For write requests, the data is written to a Leader and the Leader then replicates the command to its Followers in the form of logs. When the majority of nodes in the cluster receive this log, the log is committed and can be applied to the State Machine. TiDB has the latest data even if a minority of the replicas are lost.

#### Does TiDB support distributed transactions?

Yes. The transaction model in TiDB is inspired by Google's Percolator, a paper published in 2010. It's mainly a two-phase commit protocol with some practical optimizations. This model relies on a timestamp allocator to assign a monotonically increasing timestamp to each transaction, so that conflicts can be detected. PD works as the timestamp allocator in a TiDB cluster.

#### What programming language can I use to work with TiDB?

Any language supported by the MySQL client or driver.

#### Can I use other Key-Value storage engines with TiDB?

Yes. TiKV and TiDB support many popular standalone storage engines, such as GolevelDB and BoltDB. If the storage engine is a KV engine that supports transactions and it provides a client that meets the interface requirement of TiDB, then it can connect to TiDB.

#### What's the recommended solution for the deployment of three geo-distributed data centers?

The architecture of TiDB guarantees that it fully supports geo-distribution and multi-activeness. Your data and applications are always-on. All the outages are transparent to your applications and your data can recover automatically. The operation depends on the network latency and stability. It is recommended to keep the latency within 5ms. Currently, we already have similar use cases. For details, contact info@pingcap.com.
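The Percolator-style model described above gives every transaction a start timestamp and a commit timestamp from a single allocator (PD), and detects conflicts at commit time rather than locking up front. A highly simplified, hypothetical sketch of that idea (not TiDB's actual implementation):

```python
import itertools

class TSO:
    """Toy timestamp oracle: hands out monotonically increasing
    timestamps, as PD does for a TiDB cluster."""
    def __init__(self):
        self._counter = itertools.count(1)
    def next_ts(self) -> int:
        return next(self._counter)

class Store:
    """Keeps, per key, the commit timestamp of the last write."""
    def __init__(self, tso: TSO):
        self.tso = tso
        self.last_commit_ts = {}

    def commit(self, start_ts, write_keys):
        """Optimistic commit: fail if any key was committed by another
        transaction after this transaction's start_ts."""
        for k in write_keys:
            if self.last_commit_ts.get(k, 0) > start_ts:
                return None  # conflict detected: caller must retry
        commit_ts = self.tso.next_ts()
        for k in write_keys:
            self.last_commit_ts[k] = commit_ts
        return commit_ts

tso = TSO()
store = Store(tso)

t1_start = tso.next_ts()   # txn 1 begins
t2_start = tso.next_ts()   # txn 2 begins, overlapping with txn 1
assert store.commit(t1_start, ["a"]) is not None   # txn 1 commits "a"
assert store.commit(t2_start, ["a"]) is None       # txn 2 conflicts on "a"
assert store.commit(t2_start, ["b"]) is not None   # disjoint keys commit fine
```

Because every commit is ordered by the allocator's timestamps, a transaction only needs to compare timestamps to know whether someone else wrote its keys in the meantime, which is also why `select for update` in TiDB reports conflicts at commit time.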

#### Does TiDB provide any other knowledge resource besides the documentation?

Currently, [TiDB documentation](https://www.pingcap.com/docs/overview) is the most important and timely way to get knowledge of TiDB. In addition, we also have some technical communication groups. If you have any needs, contact info@pingcap.com.

#### What are the MySQL variables that TiDB is compatible with?

See [The System Variables](sql/variable.md).

#### Does TiDB support `select for update`?

Yes. But it differs from MySQL in behavior. As a distributed database, TiDB uses the optimistic lock. `select for update` does not lock data when the transaction is started, but checks conflicts when the transaction is committed. If the check reveals conflicts, the committing transaction rolls back.

#### Can the codec of TiDB guarantee that the UTF-8 string is memcomparable? Is there any coding suggestion if our key needs to support UTF-8?

The character sets of TiDB use UTF-8 by default and currently only support UTF-8. The strings of TiDB use the memcomparable format.

#### What is the length limit for the TiDB user name?

32 characters at most.

#### What is the maximum number of statements in a transaction?

5000 at most.

#### Does TiDB support XA?

No. The JDBC driver of TiDB is MySQL JDBC (Connector/J). When using Atomikos, set the data source to `type="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource"`. TiDB does not support the connection with MySQL JDBC XADataSource. MySQL JDBC XADataSource only works for MySQL (for example, using DML to modify the `redo` log).

After you configure the two data sources of Atomikos, set the JDBC drivers to XA. When Atomikos operates TM and RM (DB), Atomikos sends the command including XA to the JDBC layer. Taking MySQL as an example, when XA is enabled in the JDBC layer, JDBC sends a series of XA logic operations to InnoDB, including using DML to change the `redo` log. This is the operation of the two-phase commit.
The current TiDB version does not support the upper application layer JTA/XA and does not parse XA operations sent by Atomikos. + +As a standalone database, MySQL can only implement across-database transactions using XA; while TiDB supports distributed transactions using Google Percolator transaction model and its performance stability is higher than XA, so TiDB does not support XA and there is no need for TiDB to support XA. + +#### Does `show processlist` display the system process ID? + +The display content of TiDB `show processlist` is almost the same as that of MySQL `show processlist`. TiDB `show processlist` does not display the system process ID. The ID that it displays is the current session ID. The differences between TiDB `show processlist` and MySQL `show processlist` are as follows: + +- As TiDB is a distributed database, the `tidb-server` instance is a stateless engine for parsing and executing the SQL statements (for details, see [TiDB architecture](overview.md#tidb-architecture)). `show processlist` displays the session list executed in the `tidb-server` instance that the user logs in to from the MySQL client, not the list of all the sessions running in the cluster. But MySQL is a standalone database and its `show processlist` displays all the SQL statements executed in MySQL. +- TiDB `show processlist` displays the estimated memory usage (unit: Byte) of the current session, which is not displayed in MySQL `show processlist`. + +#### How to modify the user password and privilege? + +To modify the user password in TiDB, it is recommended to use `set password for 'root'@'%' = '0101001';` or `alter`, not `update mysql.user` which might lead to the condition that the password in other nodes is not refreshed timely. + +It is recommended to use the official standard statements when modifying the user password and privilege. For details, see [TiDB user account management](sql/user-account-management.md). 

#### Why is the auto-increment ID of the later inserted data smaller than that of the earlier inserted data in TiDB?

The auto-increment ID feature in TiDB is only guaranteed to be automatically incremental and unique but is not guaranteed to be allocated sequentially. Currently, TiDB allocates IDs in batches. If data is inserted into multiple TiDB servers simultaneously, the allocated IDs are not sequential. When multiple threads concurrently insert data into multiple `tidb-server` instances, the auto-increment ID of the later inserted data may be smaller. TiDB allows specifying `AUTO_INCREMENT` for the integer field, but allows only one `AUTO_INCREMENT` field in a single table. For details, see [DDL](sql/ddl.md).

#### How to modify the `sql_mode` in TiDB except using the `set` command?

The configuration method of TiDB `sql_mode` is different from that of MySQL `sql_mode`. TiDB does not support using the configuration file to configure `sql_mode` of the database; it only supports using the `set` command to configure `sql_mode` of the database. You can use `set @@global.sql_mode = 'STRICT_TRANS_TABLES';` to configure it.

#### What authentication protocols does TiDB support? What's the process?

- Like MySQL, TiDB supports the SASL protocol for user login authentication and password processing.

- When the client connects to TiDB, the challenge-response authentication mode starts. The process is as follows:

    1. The client connects to the server.
    2. The server sends a random string challenge to the client.
    3. The client sends the username and response to the server.
    4. The server verifies the response.

### TiDB techniques

#### TiKV for data storage

See [TiDB Internal (I) - Data Storage](https://www.pingcap.com/blog/2017-07-11-tidbinternal1/).

#### TiDB for data computing

See [TiDB Internal (II) - Computing](https://www.pingcap.com/blog/2017-07-11-tidbinternal2/).
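The batch allocation described in the auto-increment answer above can be illustrated with a toy model (hypothetical, not TiDB code): each `tidb-server` instance grabs a contiguous range of IDs in advance, so a row inserted later through one server can receive a smaller ID than a row inserted earlier through another.

```python
class IDAllocator:
    """Toy global allocator that hands out contiguous ID batches."""
    def __init__(self, batch_size=3):
        self.next_id = 1
        self.batch_size = batch_size
    def grab_batch(self):
        start = self.next_id
        self.next_id += self.batch_size
        return list(range(start, start + self.batch_size))

class TidbServer:
    """Each server draws IDs from its own cached batch."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.cache = []
    def insert_row(self):
        if not self.cache:
            self.cache = self.allocator.grab_batch()
        return self.cache.pop(0)

alloc = IDAllocator(batch_size=3)
s1, s2 = TidbServer(alloc), TidbServer(alloc)

first = s1.insert_row()    # server 1 grabs batch [1, 2, 3] -> 1
second = s2.insert_row()   # server 2 grabs batch [4, 5, 6] -> 4
third = s1.insert_row()    # inserted later, yet smaller ID -> 2
print(first, second, third)  # -> 1 4 2: unique and increasing per server, not globally
```

All IDs remain unique and increasing within each server's batch; only the global insertion order and ID order diverge, which is exactly the behavior the answer describes.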
+ +#### PD for scheduling + +See [TiDB Internal (III) - Scheduling](https://www.pingcap.com/blog/2017-07-20-tidbinternal3/). + +## Install, deploy and upgrade + +### Prepare + +#### Operating system version requirements + +| Linux OS Platform | Version | +| :-----------------------:| :----------: | +| Red Hat Enterprise Linux | 7.3 or later | +| CentOS | 7.3 or later | +| Oracle Enterprise Linux | 7.3 or later | + +##### Why it is recommended to deploy the TiDB cluster on CentOS 7? + +As an open source distributed NewSQL database with high performance, TiDB can be deployed in the Intel architecture server and major virtualization environments and runs well. TiDB supports most of the major hardware networks and Linux operating systems. For details, see [Software and Hardware Requirements](op-guide/recommendation.md) for deploying TiDB. + +#### Server requirements + +You can deploy and run TiDB on the 64-bit generic hardware server platform in the Intel x86-64 architecture. The requirements and recommendations about server hardware configuration for development, testing and production environments are as follows: + +##### Development and testing environments + +| Component | CPU | Memory | Local Storage | Network | Instance Number (Minimum Requirement) | +| :------: | :-----: | :-----: | :----------: | :------: | :----------------: | +| TiDB | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with PD) | +| PD | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB) | +| TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3 | +| | | | | Total Server Number | 4 | + +##### Production environment + +| Component | CPU | Memory | Hard Disk Type | Network | Instance Number (Minimum Requirement) | +| :-----: | :------: | :------: | :------: | :------: | :-----: | +| TiDB | 16 core+ | 48 GB+ | SAS | 10 Gigabit network card (2 preferred) | 2 | +| PD | 8 core+ | 16 GB+ 
| SSD | 10 Gigabit network card (2 preferred) | 3 | +| TiKV | 16 core+ | 48 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 | +| Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1 | +| | | | | Total Server Number | 9 | + +##### What's the purposes of 2 network cards of 10 gigabit? + +As a distributed cluster, TiDB has a high demand on time, especially for PD, because PD needs to distribute unique timestamps. If the time in the PD servers is not consistent, it takes longer waiting time when switching the PD server. The bond of two network cards guarantees the stability of data transmission, and 10 gigabit guarantees the transmission speed. Gigabit network cards are prone to meet bottlenecks, therefore it is strongly recommended to use 10 gigabit network cards. + +##### Is it feasible if we don't use RAID for SSD? + +If the resources are adequate, it is recommended to use RAID 10 for SSD. If the resources are inadequate, it is acceptable not to use RAID for SSD. + +##### What's the recommended configuration of TiDB components? + +- TiDB has a high requirement on CPU and memory. If you need to open Binlog, the local disk space should be increased based on the service volume estimation and the time requirement for the GC operation. But the SSD disk is not a must. +- PD stores the cluster metadata and has frequent Read and Write requests. It demands a high I/O disk. A disk of low performance will affect the performance of the whole cluster. It is recommended to use SSD disks. In addition, a larger number of Regions has a higher requirement on CPU and memory. +- TiKV has a high requirement on CPU, memory and disk. It is required to use SSD. + +For details, see [TiDB software and hardware requirements](op-guide/recommendation.md). + +### Install and deploy + +#### Deploy TiDB using Ansible (recommended) + +See [Ansible Deployment](op-guide/ansible-deployment.md). + +##### Why the modified `toml` configuration for TiKV/PD does not take effect? 
+ +You need to set the `--config` parameter in TiKV/PD to make the `toml` configuration effective. TiKV/PD does not read the configuration by default. Currently, this issue only occurs when deploying using Binary. For TiKV, edit the configuration and restart the service. For PD, the configuration file is only read when PD is started for the first time, after which you can modify the configuration using pd-ctl. For details, see [PD Control User Guide](tools/pd-control.md). + +##### Should I deploy the TiDB monitoring framework (Prometheus + Grafana) on a standalone machine or on multiple machines? What is the recommended CPU and memory? + +The monitoring machine is recommended to use standalone deployment. It is recommended to use an 8 core CPU with 16 GB+ memory and a 500 GB+ hard disk. + +##### Why the monitor cannot display all metrics? + +Check the time difference between the machine time of the monitor and the time within the cluster. If it is large, you can correct the time and the monitor will display all the metrics. + +##### What is the function of supervise/svc/svstat service? 
+ +- supervise: the daemon process, to manage the processes +- svc: to start and stop the service +- svstat: to check the process status + +##### Description of inventory.ini variables + +| Variable | Description | +| ---- | ------- | +| cluster_name | the name of a cluster, adjustable | +| tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches | +| deployment_method | the method of deployment, binary by default, Docker optional | +| process_supervision | the supervision way of processes, systemd by default, supervise optional | +| timezone | the timezone of the managed node, adjustable, `Asia/Shanghai` by default, used with the `set_timezone` variable | +| set_timezone | to edit the timezone of the managed node, True by default; False means closing | +| enable_elk | currently not supported | +| enable_firewalld | to enable the firewall, closed by default | +| enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it | +| machine_benchmark | to monitor the disk IOPS of the managed node, True by default; do not close it | +| set_hostname | to edit the hostname of the managed node based on the IP, False by default | +| enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable | +| zookeeper_addrs | the ZooKeeper address of the binlog Kafka cluster | +| enable_slow_query_log | to record the slow query log of TiDB into a single file: ({{ deploy_dir }}/log/tidb_slow_query.log). False by default, to record it into the TiDB log | +| deploy_without_tidb | the Key-Value mode, deploy only PD, TiKV and the monitoring service, not TiDB; set the IP of the tidb_servers host group to null in the `inventory.ini` file | + +#### Deploy TiDB offline using Ansible + +It is not recommended to deploy TiDB offline using Ansible. If the Control Machine has no access to external network, you can deploy TiDB offline using Ansible. 
For details, see [Offline Deployment Using Ansible](op-guide/offline-ansible-deployment.md).

#### How to deploy TiDB quickly using Docker Compose on a single machine?

You can use Docker Compose to build a TiDB cluster locally, including the cluster monitoring components. You can also customize the version and number of instances for each component. The configuration file can also be customized. You can only use this deployment method for testing and development environments. For details, see [Building the Cluster Using Docker Compose](op-guide/docker-compose.md).

#### How to separately record the slow query log in TiDB? How to locate the slow query SQL statement?

1. The slow query definition for TiDB is in the `conf/tidb.yml` configuration file of `tidb-ansible`. The `slow-threshold: 300` parameter is used to configure the threshold value of the slow query (unit: millisecond).

    The slow query log is recorded in `tidb.log` by default. If you want to generate a slow query log file separately, set `enable_slow_query_log` in the `inventory.ini` configuration file to `True`.

    Then run `ansible-playbook rolling_update.yml --tags=tidb` to perform a rolling update on the `tidb-server` instance. After the update is finished, the `tidb-server` instance will record the slow query log in `tidb_slow_query.log`.

2. If a slow query occurs, you can locate the `tidb-server` instance where the slow query occurred and its time point using Grafana, and then find the SQL statement information recorded in the log on the corresponding node.

#### How to add the `label` configuration if `label` of TiKV was not configured when I deployed the TiDB cluster for the first time?

The configuration of TiDB `label` is related to the cluster deployment architecture. It is important because it is the basis on which PD performs global management and scheduling. 
If you did not configure `label` when deploying the cluster previously, adjust the deployment structure by manually adding the `location-labels` information using the PD management tool `pd-ctl`, for example, `config set location-labels "zone, rack, host"` (configure it based on your actual `label` hierarchy).

For the usage of `pd-ctl`, see [PD Control Instruction](tools/pd-control.md).

#### Why does the `dd` command for the disk test use the `oflag=direct` option?

The Direct mode wraps the write request into an I/O command and sends this command to the disk, bypassing the file system cache, so the command directly tests the real read/write performance of the disk.

#### How to use the `fio` command to test the disk performance of the TiKV instance?

- Random Read test:

    ```
    ./fio -ioengine=libaio -bs=32k -direct=1 -thread -rw=randread -size=10G -filename=fio_randread_test.txt -name='PingCAP' -iodepth=4 -runtime=60
    ```

- The mixed test of sequential Write and random Read:

    ```
    ./fio -ioengine=libaio -bs=32k -direct=1 -thread -rw=randrw -percentage_random=100,0 -size=10G -filename=fio_randr_write_test.txt -name='PingCAP' -iodepth=4 -runtime=60
    ```

#### Error `UNREACHABLE! "msg": "Failed to connect to the host via ssh: " ` when deploying TiDB using TiDB-Ansible

Two possible reasons and solutions:

- The SSH mutual trust is not configured as required. It’s recommended to follow [the steps described in the official document](op-guide/ansible-deployment.md/#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine) and check whether it is successfully configured using `ansible -i inventory.ini all -m shell -a 'whoami' -b`.
- If it involves the scenario where a single server is assigned multiple roles, for example, the mixed deployment of multiple components or multiple TiKV instances are deployed on a single server, this error might be caused by the SSH reuse mechanism. 
You can add the `-f 1` option (as in `ansible … -f 1`) to avoid this error.

### Upgrade

#### How to perform rolling updates using Ansible?

- Apply rolling updates to the TiKV node (only update the TiKV service).

    ```
    ansible-playbook rolling_update.yml --tags=tikv
    ```

- Apply rolling updates to all services.

    ```
    ansible-playbook rolling_update.yml
    ```

#### How are the rolling updates done?

When you apply rolling updates to the TiDB services, the running application is not affected. You need to configure the minimum cluster topology (TiDB * 2, PD * 3, TiKV * 3). If the Pump/Drainer service is involved in the cluster, it is recommended to stop Drainer before applying rolling updates. When you update TiDB, Pump is also updated.

#### How to upgrade when I deploy TiDB using Binary?

It is not recommended to deploy TiDB using Binary. The support for upgrading a Binary deployment is not as friendly as using Ansible. It is recommended to deploy TiDB using Ansible.

#### Should I upgrade TiKV or all components generally?

Generally you should upgrade all components, because the whole version is tested together. Upgrade a single component only when an urgent issue occurs and you need to upgrade this component.

#### What causes "Timeout when waiting for search string 200 OK" when starting or upgrading a cluster? How to deal with it?

Possible reasons:

- The process did not start normally.
- The port is occupied.
- The process did not stop normally.
- You use `rolling_update.yml` to upgrade the cluster when the cluster is stopped (operation error).

Solution:

- Log in to the node to check the status of the process or port.
- Correct the incorrect operation procedure.

## Manage the cluster

### Daily management

#### What are the common operations? 

| Job | Playbook |
|:----------------------------------|:-----------------------------------------|
| Start the cluster | `ansible-playbook start.yml` |
| Stop the cluster | `ansible-playbook stop.yml` |
| Destroy the cluster | `ansible-playbook unsafe_cleanup.yml` (If the deployment directory is a mount point, an error will be reported, but the implementation results remain unaffected) |
| Clean data (for test) | `ansible-playbook unsafe_cleanup_data.yml` |
| Apply rolling updates | `ansible-playbook rolling_update.yml` |
| Apply rolling updates to TiKV | `ansible-playbook rolling_update.yml --tags=tikv` |
| Apply rolling updates to components except PD | `ansible-playbook rolling_update.yml --skip-tags=pd` |
| Apply rolling updates to the monitoring components | `ansible-playbook rolling_update_monitor.yml` |

#### How to log into TiDB?

You can log into TiDB just like logging into MySQL. For example:

```
mysql -h 127.0.0.1 -uroot -P4000
```

#### How to modify the system variables in TiDB?

Similar to MySQL, TiDB includes static parameters and system variables. You can directly modify a system variable using `set global xxx = n`, but the new value is only effective within the life cycle of this instance; it is not persisted across restarts.

#### Where and what are the data directories in TiDB (TiKV)?

TiDB (TiKV) data directories are in `${[data-dir](https://pingcap.com/docs-cn/op-guide/configuration/#data-dir-1)}/data/` by default, which include four directories: backup, db, raft, and snap, used to store backup data, data, Raft data, and snapshot data respectively.

#### What are the system tables in TiDB?

Similar to MySQL, TiDB includes system tables as well, used to store the information required by the server when it runs.

#### Where are the TiDB/PD/TiKV logs?

By default, TiDB/PD/TiKV outputs logs to standard error. If a log file is specified by `--log-file` during startup, the log is output to the specified file and rotated daily. 
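For example, when starting the components manually rather than through Ansible, the log file can be specified with the `--log-file` flag; a minimal sketch (the binary locations and log paths here are hypothetical examples, to be adjusted to your deployment layout):

```shell
# Hypothetical paths; each component rotates its own log file daily.
./bin/pd-server   --log-file=/data/deploy/log/pd.log &
./bin/tikv-server --log-file=/data/deploy/log/tikv.log &
./bin/tidb-server --log-file=/data/deploy/log/tidb.log &
```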

#### How to safely stop TiDB?

If the cluster is deployed using Ansible, you can use the `ansible-playbook stop.yml` command to stop the TiDB cluster. If the cluster is not deployed using Ansible, you can `kill` all the services directly; the TiDB components perform a graceful shutdown.

#### Can `kill` be executed in TiDB?

- You can `kill` DML statements. First use `show processlist` to find the ID of the corresponding session, and then run `kill tidb [session id]`.
- You can `kill` DDL statements. First use `admin show ddl jobs` to find the ID of the DDL job you need to kill, and then run `admin cancel ddl jobs 'job_id' [, 'job_id'] ...`. For more details, see the [`ADMIN` statement](sql/admin.md#admin-statement).

#### Does TiDB support session timeout?

Currently, TiDB does not support session timeout at the database level. If you want to implement session timeout in the absence of LB (Load Balancing), record the session ID on the application side and implement the timeout at the application layer. After the timeout, kill the SQL statement using `kill tidb [session id]` on the node that started the query. It is currently recommended to implement session timeout in the application: when the timeout is reached, the application layer reports an exception and continues to execute the subsequent program segments.

#### What is the TiDB version management strategy for the production environment? How to avoid frequent upgrades?

Currently, TiDB has a standard management of various versions. Each release contains a detailed change log and [release notes](https://github.com/pingcap/TiDB/releases). Whether it is necessary to upgrade in the production environment depends on the application system. It is recommended to learn the details about the functional differences between the previous and later versions before upgrading.

Take `Release Version: v1.0.3-1-ga80e796` as an example of the version number description:

- `v1.0.3` indicates the standard GA version. 

- `-1` indicates that the current version has one more commit than the GA version.
- `ga80e796` indicates the version `git-hash`: the prefix `g` plus the commit hash `a80e796`.

#### What's the difference between various TiDB master versions? How to avoid using the wrong TiDB-Ansible version?

The TiDB community is highly active. After the 1.0 GA release, the engineers have kept optimizing the product and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see [TiDB Weekly update](https://pingcap.com/weekly/).

It is recommended to deploy the TiDB cluster using the latest version of TiDB-Ansible, which is also updated along with the TiDB version. TiDB has a unified management of the version number after the 1.0 GA release. You can view the version number using the following two methods:

- `select tidb_version()`
- `tidb-server -V`

#### Is there a graphical deployment tool for TiDB?

Currently no.

#### How to scale TiDB horizontally?

As your business grows, your database might face the following three bottlenecks:

- Lack of storage resources, which means that the disk space is not enough.

- Lack of computing resources, such as high CPU occupancy.

- Not enough write and read capacity.

You can scale TiDB as your business grows.

- If the disk space is not enough, you can increase the capacity simply by adding more TiKV nodes. When the new node is started, PD migrates the data from other nodes to the new node automatically.

- If the computing resources are not enough, check the CPU consumption situation first before adding more TiDB nodes or TiKV nodes. When a TiDB node is added, you can configure it in the Load Balancer.

- If the write and read capacity is not enough, you can add both TiDB nodes and TiKV nodes.

#### Why does TiDB use gRPC instead of Thrift? Is it because Google uses it?

Not really. We need some good features of gRPC, such as flow control, encryption and streaming. 
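The two version-checking methods mentioned above (`select tidb_version()` and `tidb-server -V`) can be run as follows; a sketch assuming a local deployment on the default port 4000:

```shell
# Check the version from the SQL interface (default port and root user assumed)
mysql -h 127.0.0.1 -P 4000 -u root -e 'select tidb_version()\G'
# Check the binary version directly
./bin/tidb-server -V
```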

#### What does the 92 indicate in `like(bindo.customers.name, jason%, 92)`?

The 92 indicates the escape character, which is ASCII 92 (the backslash `\`) by default.

### Manage the PD server

#### The `TiKV cluster is not bootstrapped` message is displayed when I access PD.

Most of the APIs of PD are available only after the TiKV cluster is initialized. When a new cluster is deployed, this message is displayed if you access PD after PD is started but before TiKV is started. If this message is displayed, start the TiKV cluster. Once TiKV is initialized, PD is accessible.

#### The `etcd cluster ID mismatch` message is displayed when starting PD.

This is because the `--initial-cluster` option in the PD startup parameters contains a member that doesn't belong to this cluster. To solve this problem, check the corresponding cluster of each member, remove the wrong member, and then restart PD.

#### What's the maximum tolerance for time synchronization error of PD?

Theoretically, the smaller the error, the better. During leader changes, if the clock goes back, the process won't proceed until it catches up with the previous leader. PD can tolerate any synchronization error, but a larger error value means a longer period of service stop during the leader change.

#### How does the client connection find PD?

The client connection can only access the cluster through TiDB. TiDB connects to PD and TiKV; PD and TiKV are transparent to the client. When TiDB connects to any PD, the PD tells TiDB which PD is the current leader. If this PD is not the leader, TiDB reconnects to the leader PD.

#### What is the difference between the `leader-schedule-limit` and `region-schedule-limit` scheduling parameters in PD?

- The `leader-schedule-limit` scheduling parameter is used to balance the Leader number of different TiKV servers, affecting the load of query processing. 
- The `region-schedule-limit` scheduling parameter is used to balance the replica number of different TiKV servers, affecting the data amount on different nodes.

#### Is the number of replicas in each Region configurable? If yes, how to configure it?

Yes. Currently, you can only update the global number of replicas. When started for the first time, PD reads the configuration file (conf/pd.yml) and uses the `max-replicas` configuration in it. If you want to update the number later, use the pd-ctl configuration command `config set max-replicas $num` and view the enabled configuration using `config show all`. The update does not affect the applications and is performed in the background.

Make sure that the total number of TiKV instances is always greater than or equal to the number of replicas you set. For example, 3 replicas need at least 3 TiKV instances. Additional storage requirements need to be estimated before increasing the number of replicas. For more information about pd-ctl, see [PD Control User Guide](tools/pd-control.md).

#### How to check the health status of the whole cluster when lacking command line cluster management tools?

You can determine the general status of the cluster using the pd-ctl tool. For the detailed cluster status, you need to use the monitoring system.

#### How to delete the monitoring data of a cluster node that is offline?

The offline node usually indicates the TiKV node. You can determine whether the offline process is finished using pd-ctl or the monitoring system. After the node goes offline, perform the following steps:

1. Manually stop the relevant services on the offline node.
2. Delete the `node_exporter` data of the corresponding node from the Prometheus configuration file.
3. Delete the data of the corresponding node from Ansible `inventory.ini`.

#### Why couldn't I connect to the PD server using `127.0.0.1` when I was using the PD Control? 

If your TiDB cluster is deployed using TiDB-Ansible, the PD external service port is not bound to `127.0.0.1`, so PD Control does not recognize `127.0.0.1` and you can only connect to it using the local IP address.

### Manage the TiDB server

#### How to set the `lease` parameter in TiDB?

The lease parameter (`--lease=60`) is set from the command line when starting a TiDB server. The value of the lease parameter impacts the Database Schema Changes (DDL) speed of the current session. In testing environments, you can set the value to 1s to speed up the testing cycle. But in production environments, it is recommended to set the value to minutes (for example, 60) to ensure DDL safety.

#### What is the processing time of a DDL operation?

The processing time is different for different scenarios. Generally, you can consider the following three scenarios:

1. The `Add Index` operation with a relatively small number of rows in the corresponding data table: about 3s
2. The `Add Index` operation with a relatively large number of rows in the corresponding data table: the processing time depends on the specific number of rows and the QPS at that time (the `Add Index` operation has a lower priority than ordinary SQL operations)
3. Other DDL operations: about 1s

If the TiDB server instance that receives the DDL request is the same TiDB server instance where the DDL owner is, the first and third scenarios above may cost only dozens to hundreds of milliseconds.

#### Why is it sometimes very slow to run DDL statements?

Possible reasons:

- If you run multiple DDL statements together, the last few DDL statements might run slowly. This is because the DDL statements are executed serially in the TiDB cluster.
- After you start the cluster successfully, the first DDL operation may take a longer time to run, usually around 30s. This is because the TiDB cluster is electing the leader that processes DDL statements. 
- The processing time of DDL statements in the first ten minutes after starting TiDB is much longer than normal if the following conditions are met: 1) TiDB could not communicate with PD as usual when it was being stopped (including the case of power failure); 2) TiDB failed to clean up the registration data from PD in time because it was stopped by the `kill -9` command. If you run DDL statements during this period, for the state change of each DDL, you need to wait for 2 * lease (lease = 45s).
- If a communication issue occurs between a TiDB server and a PD server in the cluster, the TiDB server cannot get or update the version information from the PD server in time. In this case, you need to wait for 2 * lease for the state processing of each DDL.

#### Can I use S3 as the backend storage engine in TiDB?

No. Currently, TiDB only supports the distributed storage engine and the Goleveldb/RocksDB/BoltDB engine.

#### Can `Information_schema` support more real information?

The tables in `Information_schema` exist mainly for compatibility with MySQL, and some third-party software queries information from these tables. Currently, most of these tables are empty. More information will be included in these tables as TiDB is updated.

For the `Information_schema` that TiDB currently supports, see [The TiDB System Database](sql/system-database.md).

#### What's the explanation of the TiDB Backoff type scenario?

In the communication process between the TiDB server and the TiKV server, the `Server is busy` or `backoff.maxsleep 20000ms` log message is displayed when a large volume of data is being processed. This is because the system is busy while the TiKV server processes data. At this time, usually you can observe that the TiKV host resource usage rate is high. If this occurs, you can increase the server capacity according to the resource usage.

#### What's the maximum number of concurrent connections that TiDB supports? 

The current TiDB version has no limit on the maximum number of concurrent connections. If an overly high concurrency leads to an increase in response time, you can increase the capacity by adding TiDB nodes.

#### How to view the creation time of a table?

The `create_time` of tables in `information_schema` is the creation time.

#### What is the meaning of `EXPENSIVE_QUERY` in the TiDB log?

When TiDB is executing a SQL statement, the query is marked as `EXPENSIVE_QUERY` if any operator is estimated to process over 10000 rows of data. You can modify the `tidb-server` configuration parameter to adjust the threshold and then restart the `tidb-server`.

#### How to control or change the execution priority of SQL statements?

TiDB has the following high priority and low priority syntax:

- HIGH_PRIORITY: this statement has a high priority, that is, TiDB gives priority to this statement and executes it first.

- LOW_PRIORITY: this statement has a low priority, that is, TiDB reduces the priority of this statement during the execution period.

You can combine the above two parameters with the DML of TiDB. For usage details, see [TiDB DML](sql/dml.md). For example:

1. Adjust the priority by writing SQL statements in the database:

    ```
    select HIGH_PRIORITY | LOW_PRIORITY count(*) from table_name;
    insert HIGH_PRIORITY | LOW_PRIORITY into table_name insert_values;
    delete HIGH_PRIORITY | LOW_PRIORITY from table_name;
    update HIGH_PRIORITY | LOW_PRIORITY table_reference set assignment_list where where_condition;
    replace HIGH_PRIORITY | LOW_PRIORITY into table_name;
    ```

2. The full table scan statement automatically adjusts itself to a low priority. `analyze` has a low priority by default.

#### What's the trigger strategy for `auto analyze` in TiDB?

Trigger strategy: `auto analyze` is automatically triggered when the number of rows in a new table reaches 1000 and this table has no write operation within one minute. 

When the ratio of the number of modified rows to the total number of rows is larger than `tidb_auto_analyze_ratio`, the `analyze` statement is automatically triggered. The default value of `tidb_auto_analyze_ratio` is 0, indicating that this feature is disabled. To ensure safety, its minimum value is 0.3 when the feature is enabled, and it must be smaller than `pseudo-estimate-ratio` whose default value is 0.7, otherwise pseudo statistics will be used for a period of time. It is recommended to set `tidb_auto_analyze_ratio` to 0.5.

#### How to use a specific index with hint in a SQL statement?

Its usage is similar to MySQL:

```
select column_name from table_name use index(index_name) where where_condition;
```

### Manage the TiKV server

#### What is the recommended number of replicas in the TiKV cluster? Is it better to keep the minimum number for high availability?

Use 3 replicas for testing. If you increase the number of replicas, the performance declines but the data is more secure. Whether to configure more replicas depends on the specific business needs.

#### The `cluster ID mismatch` message is displayed when starting TiKV.

This is because the cluster ID stored locally in TiKV is different from the cluster ID specified by PD. When a new PD cluster is deployed, PD generates a random cluster ID. TiKV gets the cluster ID from PD and stores it locally when it is initialized. The next time TiKV is started, it checks the local cluster ID against the cluster ID in PD. If the cluster IDs don't match, the `cluster ID mismatch` message is displayed and TiKV exits.

If you previously deployed a PD cluster, but then removed the PD data and deployed a new PD cluster, this error occurs because TiKV uses the old data to connect to the new PD cluster.

#### The `duplicated store address` message is displayed when starting TiKV.

This is because the address in the startup parameter has been registered in the PD cluster by other TiKVs. 
This error occurs when there is no data folder under the directory that TiKV `--store` specifies, but you use the previous parameters to restart TiKV.

To solve this problem, use the [store delete](https://github.com/pingcap/pd/tree/master/pdctl#store-delete-) function to delete the previous store and then restart TiKV.

#### TiKV master and slave use the same compression algorithm; why are the results different?

Currently, some files of TiKV master have a higher compression rate, which depends on the underlying data distribution and the RocksDB implementation. It is normal that the data size fluctuates occasionally. The underlying storage engine adjusts data as needed.

#### What are the features of the TiKV block cache?

TiKV uses the Column Family (CF) feature of RocksDB. By default, the KV data is eventually stored in the 3 CFs (default, write and lock) within RocksDB.

- The default CF stores real data and the corresponding parameter is in `[rocksdb.defaultcf]`. The write CF stores the data version information (MVCC) and index-related data, and the corresponding parameter is in `[rocksdb.writecf]`. The lock CF stores the lock information and the system uses the default parameter.
- The Raft RocksDB instance stores Raft logs. The default CF mainly stores Raft logs and the corresponding parameter is in `[raftdb.defaultcf]`.
- Each CF has an individual block-cache to cache data blocks and improve RocksDB read speed. The size of the block-cache is controlled by the `block-cache-size` parameter. A larger value of the parameter means more hot data can be cached and is more favorable to read operations, but it also consumes more system memory.
- Each CF has an individual write-buffer and the size is controlled by the `write-buffer-size` parameter.

#### Why does "TiKV channel full" occur?

- The Raftstore thread is too slow. You can check the CPU usage of Raftstore.
- TiKV is too busy (read, write, disk I/O, etc.) 
and cannot manage to handle it.

#### Why does TiKV frequently switch the Region leader?

- A network problem leads to communication failures between nodes. You can check the monitoring information of Report failures.
- The original leader node fails and cannot send information to the followers in time.
- The Raftstore thread fails.

#### If the leader node is down, will the service be affected? For how long?

TiDB uses Raft to synchronize data among multiple replicas and guarantee the strong consistency of data. If one replica goes wrong, the other replicas can guarantee data security. The default number of replicas in each Region is 3. Based on the Raft protocol, a leader is elected in each Region, and if a single Region leader fails, a new Region leader is soon elected after a maximum of 2 * lease time (the lease time is 10 seconds).

#### What are the TiKV scenarios that take up high I/O, memory and CPU, and exceed the parameter configuration?

Writing or reading a large volume of data in TiKV takes up high I/O, memory and CPU. Executing very complex queries costs a lot of memory and CPU resources, such as scenarios that generate large intermediate result sets.

#### Does TiKV support SAS/SATA disks or mixed deployment of SSD/SAS disks?

No. For OLTP scenarios, TiDB requires high I/O disks for data access and operation. As a distributed database with strong consistency, TiDB has some write amplification, such as replica replication and bottom layer storage compaction. Therefore, it is recommended to use NVMe SSDs as the storage disks in TiDB best practices. Mixed deployment of TiKV and PD is not supported.

#### Is the Range of the Key data table divided before data access?

No. It differs from the table splitting rules of MySQL. In TiKV, the table Range is dynamically split based on the size of the Region.

#### How does a Region split?

A Region is not divided in advance, but follows a Region split mechanism. 
When the Region size exceeds the value of the `region_split_size` parameter, a split is triggered. After the split, the information is reported to PD.

#### Does TiKV have a parameter like MySQL's `innodb_flush_log_at_trx_commit` to guarantee the security of data?

Yes. Currently, the standalone storage engine uses two RocksDB instances. One instance is used to store the raft-log. When the `sync-log` parameter in TiKV is set to true, each commit is mandatorily flushed to the raft-log. If a crash occurs, you can restore the KV data using the raft-log.

#### What is the recommended server configuration for WAL storage, such as SSD, RAID level, cache strategy of the RAID card, NUMA configuration, file system, and I/O scheduling strategy of the operating system?

WAL belongs to ordered writing, and currently, we do not apply a unique configuration to it. The recommended configuration is as follows:

- SSD
- RAID 10 preferred
- Cache strategy of the RAID card and I/O scheduling strategy of the operating system: currently no specific best practices; you can use the default configuration in Linux 7 or later
- NUMA: no specific suggestion; for the memory allocation strategy, you can use `interleave = all`
- File system: ext4

#### How is the write performance in the most strict data-available mode of `sync-log = true`?

Generally, enabling `sync-log` reduces the performance by about 30%. For the test results with `sync-log = false`, see [Performance test result for TiDB using Sysbench](benchmark/sysbench.md).

#### Can the Raft + multiple replicas in the upper layer implement complete data security? Is it required to apply the most strict mode to standalone storage?

Raft uses strong consistency: the application receives an ACK only after the data has been written to more than 50% of the nodes (two out of three nodes). In this case, data consistency is guaranteed. However, theoretically, two nodes might crash. 
Therefore, for scenarios that have a strict requirement on data security, such as scenarios in the financial industry, you need to enable `sync-log`.

#### In data writing using the Raft protocol, multiple network roundtrips occur. What is the actual write delay?

Theoretically, TiDB has 4 more network roundtrips than standalone databases.

#### Does TiDB have an InnoDB memcached plugin like MySQL, which can directly use the KV interface and does not need the independent cache?

TiKV supports calling the interface separately. Theoretically, you can take an instance as the cache. Because TiDB is a distributed relational database, we do not support using TiKV separately.

#### What is the Coprocessor component used for?

- Reduce the data transmission between TiDB and TiKV
- Make full use of the distributed computing resources of TiKV to execute computing pushdown

#### The error message `IO error: No space left on device While appending to file` is displayed.

This is because the disk space is not enough. You need to add nodes or enlarge the disk space.

### TiDB test

#### What is the performance test result for TiDB using Sysbench?

At the beginning, many users tend to do a benchmark test or a comparison test between TiDB and MySQL. We have also done a similar official test and found the test results largely consistent, although the test data has some bias. Because the architecture of TiDB differs greatly from MySQL, it is hard to find a benchmark point. The suggestions are as follows:

- Do not spend too much time on the benchmark test. Pay more attention to the difference of scenarios using TiDB.
- See the official test. For the Sysbench test and the comparison test between TiDB and MySQL, see [Performance test result for TiDB using Sysbench](benchmark/sysbench.md).

#### What's the relationship between the TiDB cluster capacity (QPS) and the number of nodes? How does TiDB compare to MySQL? 

- Within 10 nodes, the relationship between TiDB write capacity (Insert TPS) and the number of nodes is roughly a 40% linear increase. Because MySQL uses single-node writes, its write capacity cannot be scaled.
- In MySQL, the read capacity can be increased by adding slaves, but the write capacity cannot be increased except by using sharding, which has many problems.
- In TiDB, both the read and write capacity can be easily increased by adding more nodes.

#### The performance test of MySQL and TiDB by our DBA shows that the performance of a standalone TiDB is not as good as MySQL.

TiDB is designed for scenarios where sharding is used because the capacity of a standalone MySQL is limited, and where strong consistency and complete distributed transactions are required. One of the advantages of TiDB is pushing down computing to the storage nodes to execute concurrent computing.

TiDB is not suitable for tables of small size (such as below the ten-million-row level), because its strength in concurrency cannot be shown with a small amount of data and limited Regions. A typical example is the counter table, in which a few rows are updated very frequently. In TiDB, these rows become several Key-Value pairs in the storage engine, and then settle into a Region located on a single node. The overhead of the background replication to guarantee strong consistency and the operations from TiDB to TiKV lead to a poorer performance than a standalone MySQL.

### Backup and restore

#### How to back up data in TiDB?

Currently, the major way of backing up data in TiDB is using `mydumper`. For details, see the [mydumper repository](https://github.com/maxbube/mydumper). Although the official MySQL tool `mysqldump` is also supported in TiDB to back up and restore data, its performance is poorer than `mydumper`/`loader` and it needs much more time to back up and restore large volumes of data. Therefore, it is not recommended to use `mysqldump`. 
+ +Keep the size of the data file exported from `mydumper` as small as possible. It is recommended to keep the size within 64M. You can set the value of the `-F` parameter to 64. + +You can adjust the `-t` parameter of `loader` based on the number of TiKV instances and the load status. For example, with three TiKV instances, you can set its value to `3 * (1 ~ n)`. When the TiKV load is very high and `backoffer.maxSleep 15000ms is exceeded` appears frequently in `loader` and TiDB logs, you can adjust the parameter to a smaller value. When the TiKV load is not very high, you can adjust the parameter to a larger value accordingly. + +## Migrate the data and traffic + +### Full data export and import + +#### Mydumper + +See the [mydumper repository](https://github.com/maxbube/mydumper). + +#### Loader + +See [Loader Instructions](tools/loader.md). + +#### How to migrate an application running on MySQL to TiDB? + +Because TiDB supports most MySQL syntax, generally you can migrate your applications to TiDB without changing a single line of code in most cases. You can use [checker](https://github.com/pingcap/tidb-tools/tree/master/checker) to check whether the schema in MySQL is compatible with TiDB. + +#### If I accidentally import the MySQL user table into TiDB, or forget the password and cannot log in, how to deal with it? + +Restart the TiDB service, and add the `-skip-grant-table=true` parameter in the configuration file.
Log into the cluster without a password and recreate the user, or recreate the `mysql.user` table using the following statement: + +```sql +DROP TABLE IF EXISTS mysql.user; + +CREATE TABLE IF NOT EXISTS mysql.user ( + Host CHAR(64), + User CHAR(16), + Password CHAR(41), + Select_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Insert_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Update_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Delete_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Drop_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Process_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Grant_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + References_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Alter_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Show_db_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Super_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_tmp_table_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Lock_tables_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Execute_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Show_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Alter_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Index_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Create_user_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Event_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + Trigger_priv ENUM('N','Y') NOT NULL DEFAULT 'N', + PRIMARY KEY (Host, User)); + +INSERT INTO mysql.user VALUES ("%", "root", "", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y"); +``` + +#### How to export the data in TiDB? + +Currently, TiDB does not support `select into outfile`.
You can use the following methods to export the data in TiDB: + +- See [MySQL uses mysqldump to export part of the table data](http://blog.csdn.net/xin_yu_xin/article/details/7574662) in Chinese and export data using mysqldump and the WHERE condition. +- Use the MySQL client to export the results of `select` to a file. + +#### How to migrate from DB2 or Oracle to TiDB? + +To migrate all the data or migrate incrementally from DB2 or Oracle to TiDB, see the following solutions: + +- Use the official migration tools of Oracle, such as OGG, Gateway, and CDC (Change Data Capture). +- Develop a program for importing and exporting data. +- Export data to a text file using Spool, and import it using `LOAD DATA INFILE`. +- Use a third-party data migration tool. + +Currently, it is recommended to use OGG. + +#### Error: `java.sql.BatchUpdateException: statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches + +In Sqoop, `--batch` means committing 100 `statement`s in each batch, but by default each `statement` contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction. + +Two solutions: + +- Add the `-Dsqoop.export.records.per.statement=10` option as follows: + + ``` + sqoop export \ + -Dsqoop.export.records.per.statement=10 \ + --connect jdbc:mysql://mysql.example.com/sqoop \ + --username ${user} \ + --password ${passwd} \ + --table ${tab_name} \ + --export-dir ${dir} \ + --batch + ``` + +- You can also increase the limit on the number of statements in a single TiDB transaction, but this will consume more memory. + +### Migrate the data incrementally + +#### Syncer + +##### Syncer user guide + +See [Syncer User Guide](tools/syncer.md). + +##### How to configure to monitor Syncer status? + +Download and import [Syncer Json](https://github.com/pingcap/docs/blob/master/etc/Syncer.json) to Grafana.
Edit the Prometheus configuration file and add the following content: + +``` +- job_name: 'syncer_ops' # task name + static_configs: + - targets: ['10.10.1.1:10096'] # Syncer monitoring address and port, informing Prometheus to pull the data of Syncer +``` + +Restart Prometheus. + +##### Is there a current solution to synchronizing data from TiDB to other databases like HBase and Elasticsearch? + +No. Currently, the data synchronization depends on the application itself. + +##### Does Syncer support synchronizing only some of the tables when Syncer is synchronizing data? + +Yes. For details, see [Syncer User Guide](tools/syncer.md). + +##### Do frequent DDL operations affect the synchronization speed of Syncer? + +Frequent DDL operations may affect the synchronization speed. For Syncer, DDL operations are executed serially. When DDL operations are executed during data synchronization, data will be synchronized serially and thus the synchronization speed will be slowed down. + +#### Wormhole + +Wormhole is a data synchronization service, which enables the user to easily synchronize all the data or synchronize incrementally using the Web console. It supports multiple types of data migration, such as from MySQL to TiDB, and from MongoDB to TiDB. + +#### If the machine where Syncer is running breaks down and the directory of the `syncer.meta` file is lost, what should I do? + +When you synchronize data using Syncer GTID, the `syncer.meta` file is constantly updated during the synchronization process. The current version of Syncer does not include a high availability design. The `syncer.meta` configuration file of Syncer is directly stored on the hard disks, which is similar to other tools in the MySQL ecosystem, such as mydumper. + +Two solutions: + +- Put the `syncer.meta` file in a relatively secure disk. For example, use disks with RAID 1.
+ +- Restore the location information of history synchronization according to the monitoring data that Syncer reports to Prometheus regularly. But the location information might be inaccurate due to the delay when a large amount of data is synchronized. + +### Migrate the traffic + +#### How to migrate the traffic quickly? + +It is recommended to build a multi-source MySQL/MongoDB -> TiDB real-time synchronization environment using Syncer or Wormhole. You can migrate the read and write traffic in batches by editing the network configuration as needed. Deploy a stable network LB (HAProxy, LVS, F5, DNS, etc.) on the upper layer, in order to implement seamless migration by directly editing the network configuration. + +#### Is there a limit for the total write and read capacity in TiDB? + +The total read capacity has no limit. You can increase the read capacity by adding more TiDB servers. Generally the write capacity has no limit as well. You can increase the write capacity by adding more TiKV nodes. + +#### The error message `transaction too large` is displayed. + +As distributed transactions need to conduct a two-phase commit and the bottom layer performs Raft replication, if a transaction is very large, the commit process would be quite slow and the subsequent Raft replication flow would thus be stuck. To avoid this problem, we limit the transaction size: + +- Each Key-Value entry is no more than 6MB +- The total number of Key-Value entries is no more than 300,000 rows +- The total size of Key-Value entries is no more than 100MB + +There are [similar limits](https://cloud.google.com/spanner/docs/limits) on Google Cloud Spanner. + +#### How to import data in batches? + +1. When you import data, insert in batches and keep the number of rows within 10,000 for each batch. + +2. As for `insert` and `select`, you can enable the hidden parameter `set @@session.tidb_batch_insert=1;`, and `insert` will execute large transactions in batches.
In this way, you can avoid the timeout caused by large transactions, but this may lead to the loss of atomicity. An error in the process of execution leads to a partially inserted transaction. Therefore, use this parameter only when necessary, and use it in the session scope to avoid affecting other statements. When the transaction is finished, use `set @@session.tidb_batch_insert=0` to close it. + +3. As for `delete` and `update`, you can use `limit` with a loop to operate. + +#### Does TiDB release space immediately after deleting data? + +None of the `DELETE`, `TRUNCATE` and `DROP` operations release space immediately. For the `TRUNCATE` and `DROP` operations, after the TiDB GC (Garbage Collection) time (10 minutes by default), the data is deleted and the space is released. For the `DELETE` operation, the data is deleted according to TiDB GC but the space is not released; the space is reused when subsequent data is written into RocksDB and compaction is performed. + +#### Can I execute DDL operations on the target table when loading data? + +No. None of the DDL operations can be executed on the target table when you load data, otherwise the data fails to be loaded. + +#### Does TiDB support the `replace into` syntax? + +Yes. But `load data` does not support the `replace into` syntax. + +#### Why does the query speed get slower after deleting data? + +Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the Region Merge feature is in development, which is expected to solve this problem. For details, see the [deleting data section in TiDB Best Practices](https://pingcap.com/blog/2017-07-24-tidbbestpractice/#write). + +#### What is the most efficient way of deleting data? + +When deleting a large amount of data, it is recommended to use `Delete * from t where xx limit 5000;`. It deletes data in a loop and uses `Affected Rows == 0` as the condition to end the loop, so as not to exceed the limit of the transaction size.
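The batched delete loop described above can be sketched in application code as follows (a minimal illustration, not TiDB-specific; `execute` stands in for any DB-API style call that runs one statement and returns the number of affected rows):

```python
def delete_in_batches(execute, table, predicate, batch_size=5000):
    """Run DELETE ... LIMIT batch_size until Affected Rows == 0.

    `execute` is any callable that runs one SQL statement and returns
    the number of affected rows (e.g. cursor.execute + cursor.rowcount).
    Returns the total number of rows deleted.
    """
    total = 0
    while True:
        affected = execute(
            "DELETE FROM {} WHERE {} LIMIT {}".format(table, predicate, batch_size)
        )
        if affected == 0:  # no rows left to delete: end the loop
            return total
        total += affected
```

Each iteration commits at most `batch_size` rows in its own transaction, so the transaction size limits above are never exceeded.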
Provided that the business filtering logic is met, it is recommended to add a strongly filtering index column or directly use the primary key to select the range, such as `id >= 5000*n+m and id < 5000*(n+1)+m`. + +If the amount of data that needs to be deleted at a time is very large, this loop method will get slower and slower because each deletion traverses backward. After deleting the previous data, lots of delete marks remain for a short period (all of which are later processed by Garbage Collection) and influence the subsequent `DELETE` statements. If possible, it is recommended to refine the `WHERE` condition. See [details in TiDB Best Practices](https://pingcap.com/blog/2017-07-24-tidbbestpractice/#write). + +#### How to improve the data loading speed in TiDB? + +- Currently Lightning is in development for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restore (truncate the original table and then import data). +- Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics. + +#### What should I do if it is slow to reclaim storage space after deleting data? + +You can configure concurrent GC to increase the speed of reclaiming storage space.
The default concurrency is 1, and you can modify it to at most 50% of the number of TiKV instances using the following command: + +``` +update mysql.tidb set VARIABLE_VALUE="3" where VARIABLE_NAME="tikv_gc_concurrency"; +``` + +## SQL optimization + +### TiDB execution plan description + +See [Understand the Query Execution Plan](sql/understanding-the-query-execution-plan.md). + +### Statistics collection + +See [Introduction to Statistics](sql/statistics.md). + +#### How to optimize `select count(1)`? + +The `count(1)` statement counts the total number of rows in a table. Improving the degree of concurrency can significantly improve the speed. To modify the concurrency, refer to the [document](sql/tidb-specific.md#tidb_distsql_scan_concurrency). But it also depends on the CPU and I/O resources. TiDB accesses TiKV in every query. When the amount of data is small, all the MySQL data is in memory, while TiDB still needs to perform a network access to TiKV. + +Recommendations: + +1. Improve the hardware configuration. See [Software and Hardware Requirements](op-guide/recommendation.md). +2. Improve the concurrency. The default value is 10. You can improve it to 50 and have a try. But usually the speed improvement is 2-4 times that of the default value. +3. Test the `count` with a large amount of data. +4. Optimize the TiKV configuration. See [Performance Tuning for TiKV](op-guide/tune-TiKV.md). + +#### How to view the progress of adding an index? + +Use `admin show ddl` to view the current job of adding an index. + +#### How to view the DDL job?
+ +- `admin show ddl`: to view the running DDL job +- `admin show ddl jobs`: to view all the results in the current DDL job queue (including tasks that are running and waiting to run) and the last ten results in the completed DDL job queue +- `admin show ddl job queries 'job_id' [, 'job_id'] ...`: to view the original SQL statement of the DDL task corresponding to the `job_id`; the `job_id` only searches the running DDL job and the last ten results in the DDL history job queue + +#### Does TiDB support CBO (Cost-Based Optimization)? If yes, to what extent? + +Yes. TiDB uses the cost-based optimizer. The cost model and statistics are constantly optimized. TiDB also supports join algorithms like hash join and sort merge join. + +#### How to determine whether I need to execute `analyze` on a table? + +View the `Healthy` field using `show stats_healthy` and generally you need to execute `analyze` on a table when the field value is smaller than 60. + +#### What is the ID rule when a query plan is presented as a tree? What is the execution order for this tree? + +No rule exists for these IDs but the IDs are unique. IDs are generated by a counter that increments by one each time a plan is generated. The execution order has nothing to do with the ID. The whole query plan is a tree and the execution process starts from the root node and the data is returned to the upper level continuously. For details about the query plan, see [Understanding the TiDB Query Execution Plan](sql/understanding-the-query-execution-plan.md). + +#### In the TiDB query plan, `cop` tasks are in the same root. Are they executed concurrently? + +Currently the computing tasks of TiDB belong to two different types of tasks: `cop task` and `root task`. + +`cop task` is the computing task which is pushed down to the KV end for distributed execution; `root task` is the computing task for single point execution on the TiDB end.
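You can see which type each operator belongs to from the `task` column of the `EXPLAIN` output (a sketch; the exact operator names and plan shape vary with the TiDB version, the table schema, and the statistics):

```sql
EXPLAIN SELECT COUNT(*) FROM person;
```

In the result, operators whose `task` column shows `cop` are executed on TiKV, and operators whose `task` column shows `root` are executed on TiDB.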
+ +Generally the input data of `root task` comes from `cop task`; when `root task` processes data, `cop task` of TiKV can process data at the same time and wait for the pull from the `root task` of TiDB. Therefore, `cop` tasks can be considered as executed concurrently, but their data has an upstream and downstream relationship. During the execution process, they run concurrently during part of the time. For example, the first `cop task` is processing the data in [100, 200] while the second `cop task` is processing the data in [1, 100]. For details, see [Understanding the TiDB Query Plan](sql/understanding-the-query-execution-plan.md). + +## Database optimization + +### TiDB + +#### Edit TiDB options + +See [The TiDB Command Options](sql/server-command-option.md). + +#### How to scatter the hotspots? + +In TiDB, data is divided into Regions for management. Generally, the TiDB hotspot means the Read/Write hotspot in a Region. In TiDB, for the table whose primary key (PK) is not an integer or which has no PK, you can properly break Regions by configuring `SHARD_ROW_ID_BITS` to scatter the Region hotspots. For details, see the introduction of `SHARD_ROW_ID_BITS` in [TiDB Specific System Variables and Syntax](sql/tidb-specific.md). + +### TiKV + +#### Tune TiKV performance + +See [Tune TiKV Performance](op-guide/tune-tikv.md). + +## Monitor + +### Prometheus monitoring framework + +See [Overview of the Monitoring Framework](op-guide/monitor-overview.md). + +### Key metrics of monitoring + +See [Key Metrics](op-guide/dashboard-overview-info.md). + +#### Is there a better way of monitoring the key metrics? + +The monitoring system of TiDB consists of Prometheus and Grafana. From the dashboard in Grafana, you can monitor various running metrics of TiDB which include the monitoring metrics of system resources, of client connection and SQL operation, of internal communication and Region scheduling.
With these metrics, the database administrator can better understand the system running status, running bottlenecks and so on. In the practice of monitoring these metrics, we list the key metrics of each TiDB component. Generally you only need to pay attention to these common metrics. For details, see [Key Metrics](op-guide/dashboard-overview-info.md). + +#### The Prometheus monitoring data is deleted each month by default. Could I set it to two months or delete the monitoring data manually? + +Yes. Find the startup script on the machine where Prometheus is started, edit the startup parameter and restart Prometheus. + +#### Region Health monitor + +In TiDB 2.0, Region health is monitored in the PD metric monitoring page, in which the `Region Health` monitoring item shows the statistics of all the Region replica status. `miss` means shortage of replicas and `extra` means the extra replica exists. In addition, `Region Health` also shows the isolation level by `label`. `level-1` means the Region replicas are isolated physically in the first `label` level. All the Regions are in `level-0` when `location label` is not configured. + +#### What is the meaning of `selectsimplefull` in Statement Count monitor? + +It means full table scan but the table might be a small system table. + +#### What is the difference between `QPS` and `Statement OPS` in the monitor? + +The `QPS` statistics cover all the SQL statements, including `use database`, `load data`, `begin`, `commit`, `set`, `show`, `insert` and `select`. + +The `Statement OPS` statistics cover only application-related SQL statements, including `select`, `update` and `insert`, therefore the `Statement OPS` statistics match the applications better. + +## Troubleshoot + +### TiDB custom error messages + +#### ERROR 9001 (HY000): PD Server Timeout + +A PD request timeout. Check the status, monitoring data and log of the PD server, and the network between the TiDB server and the PD server.
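When diagnosing this error, besides the monitoring data, you can also probe PD's HTTP API directly (the address below is an assumed PD client endpoint; replace it with one of your PD servers):

```shell
# List the PD cluster members and check the cluster health
# (2379 is PD's default client port; adjust if yours differs).
curl http://127.0.0.1:2379/pd/api/v1/members
curl http://127.0.0.1:2379/pd/api/v1/health
```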
+ +#### ERROR 9002 (HY000): TiKV Server Timeout + +A TiKV request timeout. Check the status, monitoring data and log of the TiKV server, and the network between the TiDB server and the TiKV server. + +#### ERROR 9003 (HY000): TiKV Server is Busy + +The TiKV server is busy. This usually occurs when the database load is very high. Check the status, monitoring data and log of the TiKV server. + +#### ERROR 9004 (HY000): Resolve Lock Timeout + +A lock resolving timeout. This usually occurs when a large number of transaction conflicts exist. Check the application code to see whether lock contention exists in the database. + +#### ERROR 9005 (HY000): Region is unavailable + +The accessed Region is not available. A Raft Group is not available, with possible reasons like an inadequate number of replicas. This usually occurs when the TiKV server is busy or the TiKV node is shut down. Check the status, monitoring data and log of the TiKV server. + +#### ERROR 9006 (HY000): GC life time is shorter than transaction duration + +The interval of `GC Life Time` is too short. The data that should have been read by long transactions might be deleted. You can increase `GC Life Time` using the following command: + +``` +update mysql.tidb set variable_value='30m' where variable_name='tikv_gc_life_time'; +``` + +> **Note:** "30m" means only cleaning up the data generated more than 30 minutes ago, which might consume some extra storage space. + +### MySQL native error messages + +#### ERROR 2013 (HY000): Lost connection to MySQL server during query + +- Check whether panic is in the log. +- Check whether OOM exists in dmesg using `dmesg -T | grep -i oom`. +- A long time of no access might also lead to this error. It is usually caused by TCP timeout. If TCP is not used for a long time, the operating system kills it. + +#### ERROR 1105 (HY000): other error: unknown error Wire Error(InvalidEnumValue(4004)) + +This error usually occurs when the version of TiDB does not match the version of TiKV.
To avoid version mismatch, upgrade all components when you upgrade the version. + +#### ERROR 1148 (42000): the used command is not allowed with this TiDB version + +When you execute the `LOAD DATA LOCAL` statement but the MySQL client does not allow executing this statement (the value of the `local_infile` option is 0), this error occurs. + +The solution is to use the `--local-infile=1` option when you start the MySQL client. For example, use a command like `mysql --local-infile=1 -u root -h 127.0.0.1 -P 4000`. The default value of `local-infile` is different in different versions of the MySQL client, therefore you need to configure it in some MySQL clients and do not need to configure it in some others. + +#### ERROR 9001 (HY000): PD server timeout start timestamp may fall behind safe point + +This error occurs when TiDB fails to access PD. A worker in the TiDB background continuously queries the safepoint from PD and this error occurs if it fails to query within 100s. Generally it is caused by a PD failure or a network failure between TiDB and PD. For the details of common errors, see [Error Number and Fault Diagnosis](sql/error.md). diff --git a/v2.0/LICENSE b/v2.0/LICENSE new file mode 100755 index 0000000000000..8dada3edaf50d --- /dev/null +++ b/v2.0/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity.
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/v2.0/QUICKSTART.md b/v2.0/QUICKSTART.md new file mode 100755 index 0000000000000..afa334a416073 --- /dev/null +++ b/v2.0/QUICKSTART.md @@ -0,0 +1,255 @@ +--- +title: TiDB Quick Start Guide +summary: Learn how to deploy a TiDB cluster and try it quickly. +category: quick start +--- + +# TiDB Quick Start Guide + +## About TiDB + +TiDB (The pronunciation is: /'taɪdiːbi:/ tai-D-B, etymology: titanium) is an open-source distributed scalable Hybrid Transactional and Analytical Processing (HTAP) database. It features infinite horizontal scalability, strong consistency, and high availability. TiDB is MySQL compatible and serves as a one-stop data warehouse for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads. + +## About this guide + +This guide outlines how to perform a quick deployment of a TiDB cluster using TiDB-Ansible and walks you through the basic TiDB operations and administrations. + +## Deploy the TiDB cluster + +This section describes how to deploy a TiDB cluster. A TiDB cluster consists of different components: TiDB servers, TiKV servers, and Placement Driver (PD) servers. + +The architecture is as follows: + +![TiDB Architecture](media/tidb-architecture.png) + +To quickly deploy a TiDB testing cluster, see [Deploy TiDB Using Docker Compose](op-guide/docker-compose.md). 
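+ +Once the cluster is deployed, you can connect to TiDB with any MySQL client. The following sketch assumes a TiDB server listening on the default port 4000 at 127.0.0.1; replace the host with the address of your own deployment or load balancer: + + ``` + mysql -h 127.0.0.1 -P 4000 -u root + ``` + +Because TiDB speaks the MySQL protocol, existing MySQL clients, drivers, and management tools work without modification.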
+ +## Try TiDB + +This section describes some basic CRUD operations in TiDB. + +### Create, show, and drop a database + +- To create a database, use the `CREATE DATABASE` statement. The syntax is as follows: + + ```sql + CREATE DATABASE db_name [options]; + ``` + + For example, the following statement creates a database with the name `samp_db`: + + ```sql + CREATE DATABASE IF NOT EXISTS samp_db; + ``` + +- To show the databases, use the `SHOW DATABASES` statement: + + ```sql + SHOW DATABASES; + ``` + +- To delete a database, use the `DROP DATABASE` statement. For example: + + ```sql + DROP DATABASE samp_db; + ``` + +### Create, show, and drop a table + +- To create a table, use the `CREATE TABLE` statement. The syntax is as follows: + + ```sql + CREATE TABLE table_name (column_name data_type constraint); + ``` + + For example: + + ```sql + CREATE TABLE person ( + number INT(11), + name VARCHAR(255), + birthday DATE + ); + ``` + + Add `IF NOT EXISTS` to prevent an error if the table exists: + + ```sql + CREATE TABLE IF NOT EXISTS person ( + number INT(11), + name VARCHAR(255), + birthday DATE + ); + ``` + +- To view the statement that creates the table, use the `SHOW CREATE` statement. For example: + + ```sql + SHOW CREATE TABLE person; + ``` + +- To show all the tables in a database, use the `SHOW TABLES` statement. For example: + + ```sql + SHOW TABLES FROM samp_db; + ``` + +- To show the information about all the columns in a table, use the `SHOW FULL COLUMNS` statement. For example: + + ```sql + SHOW FULL COLUMNS FROM person; + ``` + +- To delete a table, use the `DROP TABLE` statement. For example: + + ```sql + DROP TABLE person; + ``` + + or + + ```sql + DROP TABLE IF EXISTS person; + ``` + +### Create, show, and drop an index + +- To create an index for a column whose values are not unique, use the `CREATE INDEX` or `ALTER TABLE` statement.
For example: + + ```sql + CREATE INDEX person_num ON person (number); + ``` + + or + + ```sql + ALTER TABLE person ADD INDEX person_num (number); + ``` + +- To create a unique index for a column whose values are unique, use the `CREATE UNIQUE INDEX` or `ALTER TABLE` statement. For example: + + ```sql + CREATE UNIQUE INDEX person_num ON person (number); + ``` + + or + + ```sql + ALTER TABLE person ADD UNIQUE person_num (number); + ``` + +- To show all the indexes in a table, use the `SHOW INDEX` statement: + + ```sql + SHOW INDEX FROM person; + ``` + +- To delete an index, use the `DROP INDEX` or `ALTER TABLE` statement. For example: + + ```sql + DROP INDEX person_num ON person; + ALTER TABLE person DROP INDEX person_num; + ``` + +### Insert, select, update, and delete data + +- To insert data into a table, use the `INSERT` statement. For example: + + ```sql + INSERT INTO person VALUES(1, 'tom', '20170912'); + ``` + +- To view the data in a table, use the `SELECT` statement. For example: + + ```sql + SELECT * FROM person; + +--------+------+------------+ + | number | name | birthday | + +--------+------+------------+ + | 1 | tom | 2017-09-12 | + +--------+------+------------+ + ``` + +- To update the data in a table, use the `UPDATE` statement. For example: + + ```sql + UPDATE person SET birthday='20171010' WHERE name='tom'; + + SELECT * FROM person; + +--------+------+------------+ + | number | name | birthday | + +--------+------+------------+ + | 1 | tom | 2017-10-10 | + +--------+------+------------+ + ``` + +- To delete the data in a table, use the `DELETE` statement. For example: + + ```sql + DELETE FROM person WHERE number=1; + SELECT * FROM person; + Empty set (0.00 sec) + ``` + +### Create, authorize, and delete a user + +- To create a user, use the `CREATE USER` statement.
The following example creates a user named `tiuser` with the password `123456`: + + ```sql + CREATE USER 'tiuser'@'localhost' IDENTIFIED BY '123456'; + ``` + +- To grant `tiuser` the privilege to retrieve the tables in the `samp_db` database: + + ```sql + GRANT SELECT ON samp_db.* TO 'tiuser'@'localhost'; + ``` + +- To check the privileges of `tiuser`: + + ```sql + SHOW GRANTS FOR 'tiuser'@'localhost'; + ``` + +- To delete `tiuser`: + + ```sql + DROP USER 'tiuser'@'localhost'; + ``` + +## Monitor the TiDB cluster + +Open a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +The default account and password are: `admin`/`admin`. + +### About the key metrics + +Service | Panel Name | Description | Normal Range +---- | ---------------- | ---------------------------------- | -------------- +PD | Storage Capacity | the total storage capacity of the TiDB cluster | +PD | Current Storage Size | the occupied storage capacity of the TiDB cluster | +PD | Store Status -- up store | the number of TiKV nodes that are up | +PD | Store Status -- down store | the number of TiKV nodes that are down | `0`. If the number is bigger than `0`, it means some node(s) are down. +PD | Store Status -- offline store | the number of TiKV nodes that are manually offline | +PD | Store Status -- Tombstone store | the number of TiKV nodes that are Tombstone | +PD | Current storage usage | the storage occupancy rate of the TiKV cluster | If it exceeds 80%, you need to consider adding more TiKV nodes. +PD | 99% completed cmds duration seconds | the 99th percentile duration to complete a pd-server request | less than 5ms +PD | average completed cmds duration seconds | the average duration to complete a pd-server request | less than 50ms +PD | leader balance ratio | the difference in leader ratio between the node with the biggest leader ratio and the node with the smallest | It is less than 5% for a balanced situation. It becomes bigger when a node is restarting.
+PD | region balance ratio | the difference in region ratio between the node with the biggest region ratio and the node with the smallest | It is less than 5% for a balanced situation. It becomes bigger when adding or removing a node. +TiDB | handle requests duration seconds | the response time to get TSO from PD | less than 100ms +TiDB | tidb server QPS | the QPS of the cluster | application specific +TiDB | connection count | the number of connections from application servers to the database | Application specific. If the number of connections hops, you need to find out the reasons. If it drops to 0, you can check if the network is broken; if it surges, you need to check the application. +TiDB | statement count | the number of statements of each type within a given time | application specific +TiDB | Query Duration 99th percentile | the 99th percentile query time | +TiKV | 99% & 99.99% scheduler command duration | the 99th percentile and 99.99th percentile scheduler command duration | For 99%, it is less than 50ms; for 99.99%, it is less than 100ms. +TiKV | 95% & 99.99% storage async_request duration | the 95th percentile and 99.99th percentile Raft command duration | For 95%, it is less than 50ms; for 99.99%, it is less than 100ms. +TiKV | server report failure message | the number of failure messages reported by the server | If there is a large number of messages that contain `unreachable`, there might be an issue with the network. If a message contains `store not match`, the message does not come from this cluster. +TiKV | Vote | the frequency of the Raft vote | Usually, the value only changes when there is a split. If the value of Vote remains high for a long time, the system might have a severe issue and some nodes are not working. +TiKV | 95% and 99% coprocessor request duration | the 95th percentile and the 99th percentile coprocessor request duration | Application specific. Usually, the value does not remain high.
+TiKV | Pending task | the number of pending tasks | Except for PD worker, it is not normal if the value is too high. +TiKV | stall | RocksDB stall time | If the value is bigger than 0, it means that RocksDB is too busy, and you need to pay attention to IO and CPU usage. +TiKV | channel full | The channel is full and the threads are too busy. | If the value is bigger than 0, the threads are too busy. +TiKV | 95% send message duration seconds | the 95th percentile message sending time | less than 50ms +TiKV | leader/region | the number of leader/region per TiKV server| application specific diff --git a/v2.0/README.md b/v2.0/README.md new file mode 100755 index 0000000000000..2b93486af25cb --- /dev/null +++ b/v2.0/README.md @@ -0,0 +1,278 @@ +# TiDB Documentation + +## Documentation List + ++ About TiDB + - [TiDB Introduction](overview.md#tidb-introduction) + - [TiDB Architecture](overview.md#tidb-architecture) +- [TiDB Quick Start Guide](QUICKSTART.md) +- [TiDB Tutorial](https://www.pingcap.com/blog/how_to_spin_up_an_htap_database_in_5_minutes_with_tidb_tispark/) ++ TiDB User Guide + + TiDB Server Administration + - [The TiDB Server](sql/tidb-server.md) + - [The TiDB Command Options](sql/server-command-option.md) + - [The TiDB Data Directory](sql/tidb-server.md#tidb-data-directory) + - [The TiDB System Database](sql/system-database.md) + - [The TiDB System Variables](sql/variable.md) + - [The Proprietary System Variables and Syntax in TiDB](sql/tidb-specific.md) + - [The TiDB Server Logs](sql/tidb-server.md#tidb-server-logs) + - [The TiDB Access Privilege System](sql/privilege.md) + - [TiDB User Account Management](sql/user-account-management.md) + - [Use Encrypted Connections](sql/encrypted-connections.md) + + SQL Optimization + - [Understand the Query Execution Plan](sql/understanding-the-query-execution-plan.md) + - [Introduction to Statistics](sql/statistics.md) + + Language Structure + - [Literal Values](sql/literal-values.md) + - [Schema Object 
Names](sql/schema-object-names.md) + - [Keywords and Reserved Words](sql/keywords-and-reserved-words.md) + - [User-Defined Variables](sql/user-defined-variables.md) + - [Expression Syntax](sql/expression-syntax.md) + - [Comment Syntax](sql/comment-syntax.md) + + Globalization + - [Character Set Support](sql/character-set-support.md) + - [Character Set Configuration](sql/character-set-configuration.md) + - [Time Zone](sql/time-zone.md) + + Data Types + - [Numeric Types](sql/datatype.md#numeric-types) + - [Date and Time Types](sql/datatype.md#date-and-time-types) + - [String Types](sql/datatype.md#string-types) + - [JSON Types](sql/datatype.md#json-types) + - [The ENUM data type](sql/datatype.md#the-enum-data-type) + - [The SET Type](sql/datatype.md#the-set-type) + - [Data Type Default Values](sql/datatype.md#data-type-default-values) + + Functions and Operators + - [Function and Operator Reference](sql/functions-and-operators-reference.md) + - [Type Conversion in Expression Evaluation](sql/type-conversion-in-expression-evaluation.md) + - [Operators](sql/operators.md) + - [Control Flow Functions](sql/control-flow-functions.md) + - [String Functions](sql/string-functions.md) + - [Numeric Functions and Operators](sql/numeric-functions-and-operators.md) + - [Date and Time Functions](sql/date-and-time-functions.md) + - [Bit Functions and Operators](sql/bit-functions-and-operators.md) + - [Cast Functions and Operators](sql/cast-functions-and-operators.md) + - [Encryption and Compression Functions](sql/encryption-and-compression-functions.md) + - [Information Functions](sql/information-functions.md) + - [JSON Functions](sql/json-functions.md) + - [Aggregate (GROUP BY) Functions](sql/aggregate-group-by-functions.md) + - [Miscellaneous Functions](sql/miscellaneous-functions.md) + - [Precision Math](sql/precision-math.md) + + SQL Statement Syntax + - [Data Definition Statements](sql/ddl.md) + - [Data Manipulation Statements](sql/dml.md) + - [Transactions](sql/transaction.md) 
+ - [Database Administration Statements](sql/admin.md) + - [Prepared SQL Statement Syntax](sql/prepare.md) + - [Utility Statements](sql/util.md) + - [TiDB SQL Syntax Diagram](https://pingcap.github.io/sqlgram/) + - [JSON Functions and Generated Column](sql/json-functions-generated-column.md) + - [Connectors and APIs](sql/connection-and-APIs.md) + - [TiDB Transaction Isolation Levels](sql/transaction-isolation.md) + - [Error Codes and Troubleshooting](sql/error.md) + - [Compatibility with MySQL](sql/mysql-compatibility.md) + - [TiDB Memory Control](sql/tidb-memory-control.md) + - [Slow Query Log](sql/slow-query.md) + + Advanced Usage + - [Read Data From History Versions](op-guide/history-read.md) + - [Garbage Collection (GC)](op-guide/gc.md) ++ TiDB Operations Guide + - [Hardware and Software Requirements](op-guide/recommendation.md) + + Deploy + - [Ansible Deployment (Recommended)](op-guide/ansible-deployment.md) + - [Offline Deployment Using Ansible](op-guide/offline-ansible-deployment.md) + - [Docker Deployment](op-guide/docker-deployment.md) + - [Docker Compose Deployment](op-guide/docker-compose.md) + - [Cross-Region Deployment](op-guide/location-awareness.md) + + Configure + - [Configuration Flags](op-guide/configuration.md) + - [Configuration File Description](op-guide/tidb-config-file.md) + - [Modify Component Configuration Using Ansible](op-guide/ansible-deployment-rolling-update.md#modify-component-configuration) + - [Enable TLS Authentication](op-guide/security.md) + - [Generate Self-signed Certificates](op-guide/generate-self-signed-certificates.md) + + Monitor + - [Overview of the Monitoring Framework](op-guide/monitor-overview.md) + - [Key Metrics](op-guide/dashboard-overview-info.md) + - [Monitor a TiDB Cluster](op-guide/monitor.md) + + Scale + - [Scale a TiDB Cluster](op-guide/horizontal-scale.md) + - [Scale Using Ansible](op-guide/ansible-deployment-scale.md) + + Upgrade + - [Upgrade the Component 
Version](op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version) + - [TiDB 2.0 Upgrade Guide](op-guide/tidb-v2-upgrade-guide.md) + - [Tune Performance](op-guide/tune-tikv.md) + + Backup and Migrate + - [Backup and Restore](op-guide/backup-restore.md) + + Migrate + - [Migration Overview](op-guide/migration-overview.md) + - [Migrate All the Data](op-guide/migration.md#use-the-mydumper--loader-tool-to-export-and-import-all-the-data) + - [Migrate the Data Incrementally](op-guide/migration.md#use-the-syncer-tool-to-import-data-incrementally-optional) + - [TiDB-Ansible Common Operations](op-guide/ansible-operation.md) + - [Troubleshoot](trouble-shooting.md) ++ TiDB Enterprise Tools + - [Syncer](tools/syncer.md) + - [Loader](tools/loader.md) + - [TiDB-Binlog](tools/tidb-binlog-kafka.md) + - [PD Control](tools/pd-control.md) + - [PD Recover](tools/pd-recover.md) + - [TiKV Control](tools/tikv-control.md) + - [TiDB Controller](tools/tidb-controller.md) ++ TiKV Documentation + - [Overview](tikv/tikv-overview.md) + + Install and Deploy TiKV + - [Prerequisites](op-guide/recommendation.md) + - [Install and Deploy TiKV Using Docker Compose](tikv/deploy-tikv-docker-compose.md) + - [Install and Deploy TiKV Using Ansible](tikv/deploy-tikv-using-ansible.md) + - [Install and Deploy TiKV Using Docker](tikv/deploy-tikv-using-docker.md) + + Client Drivers + - [Go](tikv/go-client-api.md) ++ TiSpark Documentation + - [Quick Start Guide](tispark/tispark-quick-start-guide.md) + - [User Guide](tispark/tispark-user-guide.md) +- [Frequently Asked Questions (FAQ)](FAQ.md) +- [TiDB Best Practices](https://pingcap.github.io/blog/2017/07/24/tidbbestpractice/) ++ [Releases](releases/rn.md) + - [2.1 RC1](releases/21rc1.md) + - [2.0.6](releases/206.md) + - [2.0.5](releases/205.md) + - [2.1 Beta](releases/21beta.md) + - [2.0.4](releases/204.md) + - [2.0.3](releases/203.md) + - [2.0.2](releases/202.md) + - [2.0.1](releases/201.md) + - [2.0](releases/2.0ga.md) + - [2.0 
RC5](releases/2rc5.md) + - [2.0 RC4](releases/2rc4.md) + - [2.0 RC3](releases/2rc3.md) + - [2.0 RC1](releases/2rc1.md) + - [1.1 Beta](releases/11beta.md) + - [1.0.8](releases/108.md) + - [1.0.7](releases/107.md) + - [1.1 Alpha](releases/11alpha.md) + - [1.0.6](releases/106.md) + - [1.0.5](releases/105.md) + - [1.0.4](releases/104.md) + - [1.0.3](releases/103.md) + - [1.0.2](releases/102.md) + - [1.0.1](releases/101.md) + - [1.0](releases/ga.md) + - [Pre-GA](releases/prega.md) + - [RC4](releases/rc4.md) + - [RC3](releases/rc3.md) + - [RC2](releases/rc2.md) + - [RC1](releases/rc1.md) +- [TiDB Adopters](adopters.md) +- [TiDB Roadmap](ROADMAP.md) +- [Connect with us](community.md) ++ More Resources + - [Frequently Used Tools](https://github.com/pingcap/tidb-tools) + - [PingCAP Blog](https://pingcap.com/blog/) + - [Weekly Update](https://pingcap.com/weekly/) + +## TiDB Introduction + +TiDB (The pronunciation is: /'taɪdiːbi:/ tai-D-B, etymology: titanium) is an open-source distributed scalable Hybrid Transactional and Analytical Processing (HTAP) database. It features infinite horizontal scalability, strong consistency, and high availability. TiDB is MySQL compatible and serves as a one-stop data warehouse for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads. + +- __Horizontal scalability__ + + TiDB provides horizontal scalability simply by adding new nodes. Never worry about infrastructure capacity ever again. + +- __MySQL compatibility__ + + Easily replace MySQL with TiDB to power your applications without changing a single line of code in most cases and still benefit from the MySQL ecosystem. + +- __Distributed transaction__ + + TiDB is your source of truth, guaranteeing ACID compliance, so your data is accurate and reliable anytime, anywhere. + +- __Cloud Native__ + + TiDB is designed to work in the cloud -- public, private, or hybrid -- making deployment, provisioning, and maintenance drop-dead simple. 
+ +- __No more ETL__ + + ETL (Extract, Transform and Load) is no longer necessary with TiDB's hybrid OLTP/OLAP architecture, enabling you to create new value for your users more easily and quickly. + +- __High availability__ + + With TiDB, your data and applications are always on and continuously available, so your users are never disappointed. + +TiDB is designed to support both OLTP and OLAP scenarios. For complex OLAP scenarios, use [TiSpark](tispark/tispark-user-guide.md). + +Read the following three articles to understand TiDB techniques: + +- [Data Storage](https://pingcap.github.io/blog/2017/07/11/tidbinternal1/) +- [Computing](https://pingcap.github.io/blog/2017/07/11/tidbinternal2/) +- [Scheduling](https://pingcap.github.io/blog/2017/07/20/tidbinternal3/) + +## Roadmap + +Read the [Roadmap](https://github.com/pingcap/docs/blob/master/ROADMAP.md). + +## Connect with us + +- **Twitter**: [@PingCAP](https://twitter.com/PingCAP) +- **Reddit**: https://www.reddit.com/r/TiDB/ +- **Stack Overflow**: https://stackoverflow.com/questions/tagged/tidb +- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user) + +## TiDB architecture + +To better understand TiDB’s features, you need to understand the TiDB architecture. + +![image alt text](media/tidb-architecture.png) + +The TiDB cluster has three components: the TiDB server, the PD server, and the TiKV server. + +### TiDB server + +The TiDB server is in charge of the following operations: + +1. Receiving the SQL requests + +2. Processing the SQL-related logic + +3. Locating the TiKV address for storing and computing data through Placement Driver (PD) + +4. Exchanging data with TiKV + +5. Returning the result + +The TiDB server is stateless. It does not store data and it is for computing only. TiDB is horizontally scalable and provides a unified interface to the outside through load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5.
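+ +As an illustration, a minimal HAProxy sketch that spreads client connections across two TiDB instances might look like the following (the IP addresses are hypothetical, and the TiDB servers are assumed to listen on the default port 4000): + + ``` + listen tidb-cluster + bind 0.0.0.0:3306 + mode tcp + balance leastconn + server tidb-1 172.16.10.1:4000 check + server tidb-2 172.16.10.2:4000 check + ``` + +Applications then connect to the proxy address as if it were a single MySQL server, while sessions are distributed across the stateless TiDB instances.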
+ +### Placement Driver server + +The Placement Driver (PD) server is the managing component of the entire cluster and is in charge of the following three operations: + +1. Storing the metadata of the cluster such as the Region location of a specific key. + +2. Scheduling and load balancing Regions in the TiKV cluster, including but not limited to data migration and Raft group leader transfer. + +3. Allocating the transaction ID that is globally unique and monotonically increasing. + +As a cluster, PD needs to be deployed to an odd number of nodes. Usually, it is recommended to deploy at least 3 online nodes. + +### TiKV server + +The TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range, which is a left-closed and right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes is scheduled by PD. Region is also the basic unit for scheduling the load balance. + +## Features + +### Horizontal Scalability + +Horizontal scalability is the most important feature of TiDB. The scalability includes two aspects: the computing capability and the storage capacity. The TiDB server processes the SQL requests. As the business grows, higher overall processing capability and throughput can be achieved by simply adding more TiDB server nodes. Data is stored in TiKV. As the size of the data grows, the storage can be scaled by adding more TiKV server nodes. PD schedules data in Regions among the TiKV nodes and migrates part of the data to a newly added node. So in the early stage, you can deploy only a few service instances.
For example, it is recommended to deploy at least 3 TiKV nodes, 3 PD nodes and 2 TiDB nodes. As business grows, more TiDB and TiKV instances can be added on-demand. + +### High availability + +High availability is another important feature of TiDB. All of the three components, TiDB, TiKV and PD, can tolerate the failure of some instances without impacting the availability of the entire cluster. For each component, see the following for details about its availability, the consequence of a single instance failure, and how to recover. + +#### TiDB + +TiDB is stateless and it is recommended to deploy at least two instances. The front-end provides services to the outside through load balancing components. If one of the instances is down, the sessions on the instance will be impacted. From the application’s point of view, it is a single request failure but the service can be regained by reconnecting to the TiDB server. If a single instance is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### PD + +PD is a cluster and the data consistency is ensured using the Raft protocol. If an instance is down but the instance is not the Raft Leader, there is no impact on the service at all. If the instance is the Raft Leader, a new Leader will be elected to recover the service. During the election, which takes approximately 3 seconds, PD cannot provide service. It is recommended to deploy three instances. If one of the instances is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### TiKV + +TiKV is a cluster and the data consistency is ensured using the Raft protocol. The number of replicas is configurable, and the default is 3 replicas. The load of TiKV servers is balanced through PD. If one of the nodes is down, all the Regions in the node will be impacted. If the failed node is the Leader of a Region, the service will be interrupted and a new election will be initiated.
If the failed node is a Follower of the Region, the service will not be impacted. If a TiKV node is down for a period of time (default 30 minutes), PD will move the data to another TiKV node. diff --git a/v2.0/ROADMAP.md b/v2.0/ROADMAP.md new file mode 100755 index 0000000000000..d618867f9af14 --- /dev/null +++ b/v2.0/ROADMAP.md @@ -0,0 +1,87 @@ +--- +title: TiDB Roadmap +summary: Learn about the roadmap of TiDB. +category: Roadmap +--- + +# TiDB Roadmap + +This document defines the roadmap for TiDB development. + +## TiDB: + ++ [ ] Optimizer + - [x] Refactor Ranger + - [ ] Optimize the cost model + - [ ] Join Reorder ++ [ ] Statistics + - [x] Update statistics dynamically according to the query feedback + - [x] Analyze table automatically + - [ ] Improve the accuracy of Row Count estimation ++ [ ] Executor + - [ ] Push down the Projection operator to the Coprocessor + - [ ] Improve the performance of the HashJoin operator + - [ ] Parallel Operators + - [x] Projection + - [ ] Aggregation + - [ ] Sort + - [x] Compact Row Format to reduce memory usage + - [ ] File Sort +- [ ] View +- [ ] Window Function +- [ ] Common Table Expression +- [ ] Table Partition +- [ ] Cluster Index +- [ ] Improve DDL + - [x] Speed up Add Index operation + - [ ] Parallel DDL +- [ ] Support `utf8_general_ci` collation + +## TiKV: + +- [ ] Raft + - [x] Region merge + - [ ] Local read thread + - [ ] Multi-thread raftstore + - [x] None voter + - [x] Pre-vote + - [ ] Multi-thread apply pool + - [ ] Split region in batch + - [ ] Raft Engine +- [x] RocksDB + - [x] DeleteRange + - [ ] BlobDB +- [x] Transaction + - [x] Optimize transaction conflicts + - [ ] Distributed GC +- [x] Coprocessor + - [x] Streaming +- [ ] Tool + - [x] Import distributed data + - [ ] Export distributed data + - [ ] Disaster Recovery +- [ ] Flow control and degradation + +## PD: + +- [x] Improve namespace + - [x] Different replication policies for different namespaces and tables +- [x] Decentralize scheduling table Regions 
+- [x] Scheduler supports prioritization to be more controllable +- [ ] Use machine learning to optimize scheduling +- [ ] Cluster Simulator + +## TiSpark: + +- [ ] Limit/Order push-down +- [x] Access through the DAG interface and deprecate the Select interface +- [ ] Index Join and parallel merge join +- [ ] Data Federation + +## SRE & tools: + +- [X] Kubernetes based integration for the on-premise version +- [ ] Dashboard UI for the on-premise version +- [ ] The cluster backup and recovery tool +- [ ] The data migration tool (Wormhole V2) +- [ ] Security and system diagnosis diff --git a/v2.0/adopters.md b/v2.0/adopters.md new file mode 100755 index 0000000000000..7ad9a87af08a1 --- /dev/null +++ b/v2.0/adopters.md @@ -0,0 +1,78 @@ +--- +title: TiDB Adopters +summary: Learn about the list of TiDB adopters in various industries. +category: adopters +--- + +# TiDB Adopters + +This is a list of TiDB adopters in various industries. + +| Company | Industry | Success Story | +| :--- | :--- | :--- | +|[Mobike](https://en.wikipedia.org/wiki/Mobike)|Ridesharing|[English](https://www.pingcap.com/blog/Use-Case-TiDB-in-Mobike/); [Chinese](https://www.pingcap.com/cases-cn/user-case-mobike/)| +|[Jinri Toutiao](https://en.wikipedia.org/wiki/Toutiao)|Mobile News Platform|[Chinese](https://www.pingcap.com/cases-cn/user-case-toutiao/)| +|[Yiguo.com](https://www.crunchbase.com/organization/shanghai-yiguo-electron-business)|E-commerce|[English](https://www.datanami.com/2018/02/22/hybrid-database-capturing-perishable-insights-yiguo/); [Chinese](https://www.pingcap.com/cases-cn/user-case-yiguo)| +|[Yuanfudao.com](https://www.crunchbase.com/organization/yuanfudao)|EdTech|[English](https://www.pingcap.com/blog/2017-08-08-tidbforyuanfudao/); [Chinese](https://www.pingcap.com/cases-cn/user-case-yuanfudao/)| +|[Ele.me](https://en.wikipedia.org/wiki/Ele.me)|Food Delivery|[English](https://www.pingcap.com/blog/use-case-tidb-in-eleme/); 
[Chinese](https://www.pingcap.com/cases-cn/user-case-eleme-1/)| +|[LY.com](https://www.crunchbase.com/organization/ly-com)|Travel|[Chinese](https://www.pingcap.com/cases-cn/user-case-tongcheng/)| +|[Qunar.com](https://www.crunchbase.com/organization/qunar-com)|Travel|[Chinese](https://www.pingcap.com/cases-cn/user-case-qunar/)| +|[Hulu](https://www.hulu.com)|Entertainment|| +|[VIPKID](https://en.wikipedia.org/wiki/VIPKID)|EdTech|| +|[Lenovo](https://en.wikipedia.org/wiki/Lenovo)|Enterprise Technology|| +|[Bank of Beijing](https://en.wikipedia.org/wiki/Bank_of_Beijing)|Banking|| +|[Industrial and Commercial Bank of China](https://en.wikipedia.org/wiki/Industrial_and_Commercial_Bank_of_China)|Banking|| +|[iQiyi](https://en.wikipedia.org/wiki/IQiyi)|Media and Entertainment|| +|[Yimian Data](https://www.crunchbase.com/organization/yimian-data)|Big Data|[Chinese](https://www.pingcap.com/cases-cn/user-case-yimian)| +|[Phoenix New Media](https://www.crunchbase.com/organization/phoenix-new-media)|Media|[Chinese](https://www.pingcap.com/cases-cn/user-case-ifeng/)| +|[Mobikok](http://www.mobikok.com/en/)|AdTech|[Chinese](https://pingcap.com/cases-cn/user-case-mobikok/)| +|[LinkDoc Technology](https://www.crunchbase.com/organization/linkdoc-technology)|HealthTech|[Chinese](https://www.pingcap.com/cases-cn/user-case-linkdoc/)| +|[G7 Networks](https://www.english.g7.com.cn/)| Logistics|[Chinese](https://www.pingcap.com/cases-cn/user-case-g7/)| +|[360 Finance](https://www.crunchbase.com/organization/360-finance)|FinTech|[Chinese](https://www.pingcap.com/cases-cn/user-case-360/)| +|[GAEA](http://www.gaea.com/en/)|Gaming|[English](https://www.pingcap.com/blog/2017-05-22-Comparison-between-MySQL-and-TiDB-with-tens-of-millions-of-data-per-day/); [Chinese](https://www.pingcap.com/cases-cn/user-case-gaea-ad/)| +|[YOOZOO Games](https://www.crunchbase.com/organization/yoozoo-games)|Gaming|[Chinese](https://pingcap.com/cases-cn/user-case-youzu/)| +|[Seasun 
Games](https://www.crunchbase.com/organization/seasun)|Gaming|[Chinese](https://pingcap.com/cases-cn/user-case-xishanju/)| +|[NetEase Games](https://game.163.com/en/)|Gaming|| +|[FUNYOURS JAPAN](http://company.funyours.co.jp/)|Gaming|[Chinese](https://pingcap.com/cases-cn/user-case-funyours-japan/)| +|[Zhaopin.com](https://www.crunchbase.com/organization/zhaopin)|Recruiting|| +|[Panda.tv](https://www.crunchbase.com/organization/panda-tv)|Live Streaming|| +|[Hoodinn](https://www.crunchbase.com/organization/hoodinn)|Gaming|| +|[Ping++](https://www.crunchbase.com/organization/ping-5)|Mobile Payment|[Chinese](https://pingcap.com/cases-cn/user-case-ping++/)| +|[Hainan eKing Technology](https://www.crunchbase.com/organization/hainan-eking-technology)|Enterprise Technology|[Chinese](https://pingcap.com/cases-cn/user-case-ekingtech/)| +|[LianLian Tech](http://www.10030.com.cn/web/)|Mobile Payment|| +|[Tongdun Technology](https://www.crunchbase.com/organization/tongdun-technology)|FinTech|| +|[Wacai](https://www.crunchbase.com/organization/wacai)|FinTech|| +|[Tree Finance](https://www.treefinance.com.cn/)|FinTech|| +|[2Dfire.com](http://www.2dfire.com/)|FoodTech|[Chinese](https://www.pingcap.com/cases-cn/user-case-erweihuo/)| +|[Happigo.com](https://www.crunchbase.com/organization/happigo-com)|E-commerce|| +|[Mashang Consumer Finance](https://www.crunchbase.com/organization/ms-finance)|FinTech|| +|[Tencent OMG](https://en.wikipedia.org/wiki/Tencent)|Media|| +|[Terren](http://webterren.com.zigstat.com/)|Media|| +|[LeCloud](https://www.crunchbase.com/organization/letv-2)|Media|| +|[Miaopai](https://en.wikipedia.org/wiki/Miaopai)|Media|| +|[Snowball Finance](https://www.crunchbase.com/organization/snowball-finance)|FinTech|| +|[Yimutian](http://www.ymt.com/)|E-commerce|| +|[Gengmei](https://www.crunchbase.com/organization/gengmei)|Plastic Surgery|| +|[Acewill](https://www.crunchbase.com/organization/acewill)|FoodTech|| 
+|[Keruyun](https://www.crunchbase.com/organization/keruyun-technology-beijing-co-ltd)|SaaS|[Chinese](https://pingcap.com/cases-cn/user-case-keruyun/)| +|[Youju Tech](https://www.ujuz.cn/)|E-Commerce|| +|[Maizuo](https://www.crunchbase.com/organization/maizhuo)|E-Commerce|| +|[Mogujie](https://www.crunchbase.com/organization/mogujie)|E-Commerce|| +|[Zhuan Zhuan](https://www.crunchbase.com/organization/zhuan-zhuan)|Online Marketplace|[Chinese](https://pingcap.com/cases-cn/user-case-zhuanzhuan/)| +|[Shuangchuang Huipu](http://scphjt.com/)|FinTech|| +|[Meizu](https://en.wikipedia.org/wiki/Meizu)|Media|| +|[SEA group](https://sea-group.org/?lang=en)|Gaming|| +|[Sogou](https://en.wikipedia.org/wiki/Sogou)|MediaTech|| +|[Chunyu Yisheng](https://www.crunchbase.com/organization/chunyu)|HealthTech|| +|[Meituan](https://en.wikipedia.org/wiki/Meituan-Dianping)|Food Delivery|| +|[Qutoutiao](https://www.crunchbase.com/organization/qutoutiao)|Social Network|| +|[QuantGroup](https://www.crunchbase.com/organization/quantgroup)|FinTech|| +|[FINUP](https://www.crunchbase.com/organization/finup)|FinTech|| +[Meili Finance](https://www.crunchbase.com/organization/meili-jinrong)|FinTech|| +|[Guolian Securities](https://www.crunchbase.com/organization/guolian-securities)|Financial Services|| +|[Founder Securities](https://www.linkedin.com/company/founder-securities-co-ltd-/)|Financial Services|| +|[China Telecom Shanghai](http://sh.189.cn/en/index.html)|Telecom|| +|[State Administration of Taxation](https://en.wikipedia.org/wiki/State_Administration_of_Taxation)|Finance|| +|[Wuhan Antian Information Technology](https://www.avlsec.com/)|Enterprise Technology|| +|[Ausnutria Dairy](https://www.crunchbase.com/organization/ausnutria-dairy)|FoodTech|| +|[Qingdao Telaidian](https://www.teld.cn/)|Electric Car Charger|| \ No newline at end of file diff --git a/v2.0/benchmark/sysbench-v2.md b/v2.0/benchmark/sysbench-v2.md new file mode 100755 index 0000000000000..8bb15fd03c3c9 --- /dev/null +++ 
b/v2.0/benchmark/sysbench-v2.md @@ -0,0 +1,133 @@ +--- +title: TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0 +category: benchmark +--- + +# TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0 + +## Test purpose + +This test aims to compare the performances of TiDB 1.0 and TiDB 2.0. + +## Test version, time, and place + +TiDB version: v1.0.8 vs. v2.0.0-rc6 + +Time: April 2018 + +Place: Beijing, China + +## Test environment + +IDC machine + +| Type | Name | +| -------- | --------- | +| OS | linux (CentOS 7.3.1611) | +| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | +| RAM | 128GB | +| DISK | Optane 500GB SSD * 1 | + +Sysbench test script: +https://github.com/pingcap/tidb-bench/tree/master/sysbench + + +## Test plan + +### TiDB version information + +### v1.0.8 + +| Component | GitHash | +| -------- | --------- | +| TiDB | 571f0bbd28a0b8155a5ee831992c986b90d21ab7 | +| TiKV | 4ef5889947019e3cb55cc744f487aa63b42540e7 | +| PD | 776bcd940b71d295a2c7ed762582bc3aff7d3c0e | + +### v2.0.0-rc6 + +| Component | GitHash | +| :--------: | :---------: | +| TiDB | 82d35f1b7f9047c478f4e1e82aa0002abc8107e7 | +| TiKV | 7ed4f6a91f92cad5cd5323aaebe7d9f04b77cc79 | +| PD | 2c8e7d7e33b38e457169ce5dfb2f461fced82d65 | + +### TiKV parameter configuration + +- v1.0.8 + + ``` + sync-log = false + grpc-concurrency = 8 + grpc-raft-conn-num = 24 + ``` + +- v2.0.0-rc6 + + ``` + sync-log = false + grpc-concurrency = 8 + grpc-raft-conn-num = 24 + use-delete-range: false + ``` + +### Cluster topology + +| Machine IP | Deployment instance | +|--------------|------------| +| 172.16.21.1 | 1*tidb 1*pd 1*sysbench | +| 172.16.21.2 | 1*tidb 1*pd 1*sysbench | +| 172.16.21.3 | 1*tidb 1*pd 1*sysbench | +| 172.16.11.4 | 1*tikv | +| 172.16.11.5 | 1*tikv | +| 172.16.11.6 | 1*tikv | +| 172.16.11.7 | 1*tikv | +| 172.16.11.8 | 1*tikv | +| 172.16.11.9 | 1*tikv | + +## Test result + +### Standard `Select` test + +| Version | Table count | Table size | Sysbench threads |QPS | Latency 
(avg/.95) | +| :---: | :---: | :---: | :---: | :---: | :---: | +| v2.0.0-rc6 | 32 | 10 million | 128 * 3 | 201936 | 1.9033 ms/5.67667 ms | +| v2.0.0-rc6 | 32 | 10 million | 256 * 3 | 208130 | 3.69333 ms/8.90333 ms | +| v2.0.0-rc6 | 32 | 10 million | 512 * 3 | 211788 | 7.23333 ms/15.59 ms | +| v2.0.0-rc6 | 32 | 10 million | 1024 * 3 | 212868 | 14.5933 ms/43.2133 ms | +| v1.0.8 | 32 | 10 million | 128 * 3 | 188686 | 2.03667 ms/5.99 ms | +| v1.0.8 | 32 | 10 million | 256 * 3 | 195090 | 3.94 ms/9.12 ms | +| v1.0.8 | 32 | 10 million | 512 * 3 | 203012 | 7.57333 ms/15.3733 ms | +| v1.0.8 | 32 | 10 million | 1024 * 3 | 205932 | 14.9267 ms/40.7633 ms | + +According to the statistics above, the `Select` query performance of TiDB 2.0 GA is up to about 10% higher than that of TiDB 1.0 GA. + +### Standard OLTP test + +| Version | Table count | Table size | Sysbench threads | TPS | QPS | Latency (avg/.95) | +| :---: | :---: | :---: | :---: | :---: | :---: | :---: | +| v2.0.0-rc6 | 32 | 10 million | 128 * 3 | 5404.22 | 108084.4 | 87.2033 ms/110 ms | +| v2.0.0-rc6 | 32 | 10 million | 256 * 3 | 5578.165 | 111563.3 | 167.673 ms/275.623 ms | +| v2.0.0-rc6 | 32 | 10 million | 512 * 3 | 5874.045 | 117480.9 | 315.083 ms/674.017 ms | +| v2.0.0-rc6 | 32 | 10 million | 1024 * 3 | 6290.7 | 125814 | 529.183 ms/857.007 ms | +| v1.0.8 | 32 | 10 million | 128 * 3 | 5523.91 | 110478 | 69.53 ms/88.6333 ms | +| v1.0.8 | 32 | 10 million | 256 * 3 | 5969.43 | 119389 | 128.63 ms/162.58 ms | +| v1.0.8 | 32 | 10 million | 512 * 3 | 6308.93 | 126179 | 243.543 ms/310.913 ms | +| v1.0.8 | 32 | 10 million | 1024 * 3 | 6444.25 | 128885 | 476.787 ms/635.143 ms | + +According to the statistics above, the OLTP performance of TiDB 2.0 GA and TiDB 1.0 GA is almost the same.
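As a quick cross-check, the relative QPS change between the two versions can be derived directly from the `Select` table above. The snippet below is an illustrative sketch, not part of the original report; the QPS values are copied from the table (threads shown as total clients, i.e. `128 * 3 = 384`, and so on).

```python
# QPS values copied from the standard `Select` table above.
# Keys are total client threads; values are (v2.0.0-rc6, v1.0.8) QPS.
select_qps = {
    384: (201936, 188686),   # 128 * 3
    768: (208130, 195090),   # 256 * 3
    1536: (211788, 203012),  # 512 * 3
    3072: (212868, 205932),  # 1024 * 3
}

def relative_gain(pairs):
    """Percent QPS change of the newer version over the older one."""
    return {t: round((new / old - 1) * 100, 1) for t, (new, old) in pairs.items()}

gains = relative_gain(select_qps)
print(gains)  # {384: 7.0, 768: 6.7, 1536: 4.3, 3072: 3.4}
```

The same helper works for the OLTP and `Insert` tables by substituting their QPS columns.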
+ +### Standard `Insert` test + +| Version | Table count | Table size | Sysbench threads | QPS | Latency (avg/.95) | +| :---: | :---: | :---: | :---: | :---: | :---: | +| v2.0.0-rc6 | 32 | 10 million | 128 * 3 | 31707.5 | 12.11 ms/21.1167 ms | +| v2.0.0-rc6 | 32 | 10 million | 256 * 3 | 38741.2 | 19.8233 ms/39.65 ms | +| v2.0.0-rc6 | 32 | 10 million | 512 * 3 | 45136.8 | 34.0267 ms/66.84 ms | +| v2.0.0-rc6 | 32 | 10 million | 1024 * 3 | 48667 | 63.1167 ms/121.08 ms | +| v1.0.8 | 32 | 10 million | 128 * 3 | 31125.7 | 12.3367 ms/19.89 ms | +| v1.0.8 | 32 | 10 million | 256 * 3 | 36800 | 20.8667 ms/35.3767 ms | +| v1.0.8 | 32 | 10 million | 512 * 3 | 44123 | 34.8067 ms/63.32 ms | +| v1.0.8 | 32 | 10 million | 1024 * 3 | 48496 | 63.3333 ms/118.92 ms | + +According to the statistics above, the `Insert` query performance of TiDB 2.0 GA is slightly higher than that of TiDB 1.0 GA. diff --git a/v2.0/benchmark/sysbench.md b/v2.0/benchmark/sysbench.md new file mode 100755 index 0000000000000..a47d2e96dab1d --- /dev/null +++ b/v2.0/benchmark/sysbench.md @@ -0,0 +1,210 @@ +--- +title: Performance test result for TiDB using Sysbench +category: benchmark +draft: true +--- + +# Performance test result for TiDB using Sysbench + +## Test purpose + +The purpose of this test is to evaluate the performance and horizontal scalability of TiDB in OLTP scenarios. + +> **Note**: The results of the test might vary depending on the environment. + +## Test version, date, and place + +TiDB version: v1.0.0 + +Date: October 20, 2017 + +Place: Beijing + +## Test environment + +- IDC machines: + + | Category | Detail | + | :--------| :---------| + | OS | Linux (CentOS 7.3.1611) | + | CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | + | RAM | 128GB | + | DISK | 1.5T SSD * 2 + Optane SSD * 1 | + +- Sysbench version: 1.0.6 + +- Test script: https://github.com/pingcap/tidb-bench/tree/cwen/not_prepared_statement/sysbench
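A note on reading the result tables below: the `Sysbench threads` column uses `N * M` notation, meaning N client threads per sysbench instance across M concurrent instances (the TiDB topology in this report runs 4 sysbench instances, while the MySQL rows use a single one). A trivial sketch of the arithmetic, for illustration only:

```python
def total_threads(per_instance: int, instances: int) -> int:
    """Total concurrent client threads across all sysbench instances,
    i.e. the 'N * M' notation used in the result tables."""
    return per_instance * instances

# "256 * 4" in the TiDB rows means 1024 concurrent clients in total.
print(total_threads(256, 4))  # 1024
```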
+ +## Test scenarios + +### Scenario one: RW performance test using Sysbench + +The structure of the table used for the test: + +``` sql +CREATE TABLE `sbtest` ( + `id` int(10) unsigned NOT NULL AUTO_INCREMENT, + `k` int(10) unsigned NOT NULL DEFAULT '0', + `c` char(120) NOT NULL DEFAULT '', + `pad` char(60) NOT NULL DEFAULT '', + PRIMARY KEY (`id`), + KEY `k_1` (`k`) +) ENGINE=InnoDB +``` + +The deployment and configuration details: + +``` +// TiDB deployment +172.16.20.4 4*tikv 1*tidb 1*sysbench +172.16.20.6 4*tikv 1*tidb 1*sysbench +172.16.20.7 4*tikv 1*tidb 1*sysbench +172.16.10.8 1*tidb 1*pd 1*sysbench + +// Each physical node has three disks. +data3: 2 tikv (Optane SSD) +data2: 1 tikv +data1: 1 tikv + +// TiKV configuration +sync-log = false +grpc-concurrency = 8 +grpc-raft-conn-num = 24 +[defaultcf] +block-cache-size = "12GB" +[writecf] +block-cache-size = "5GB" +[raftdb.defaultcf] +block-cache-size = "2GB" + +// MySQL deployment +// Use the semi-synchronous replication and asynchronous replication to deploy two replicas respectively. 
+172.16.20.4 master +172.16.20.6 slave +172.16.20.7 slave +172.16.10.8 1*sysbench +Mysql version: 5.6.37 + +// MySQL configuration +thread_cache_size = 64 +innodb_buffer_pool_size = 64G +innodb_file_per_table = 1 +innodb_flush_log_at_trx_commit = 0 +datadir = /data3/mysql +max_connections = 2000 +``` + +- OLTP RW test + + | - | Table count | Table size | Sysbench threads | TPS | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | :---: | + | TiDB | 32 | 1 million | 64 * 4 | 3834 | 76692 | 67.04 ms / 110.88 ms | + | TiDB | 32 | 1 million | 128 * 4 | 4172 | 83459 | 124.00 ms / 194.21 ms | + | TiDB | 32 | 1 million | 256 * 4 | 4577 | 91547 | 228.36 ms / 334.02 ms | + | TiDB | 32 | 5 million | 256 * 4 | 4032 | 80657 | 256.62 ms / 443.88 ms | + | TiDB | 32 | 10 million | 256 * 4 | 3811 | 76233 | 269.46 ms / 505.20 ms | + | Mysql | 32 | 1 million | 64 | 2392 | 47845 | 26.75 ms / 73.13 ms | + | Mysql | 32 | 1 million | 128 | 2493 | 49874 | 51.32 ms / 173.58 ms | + | Mysql | 32 | 1 million | 256 | 2561 | 51221 | 99.95 ms / 287.38 ms | + | Mysql | 32 | 5 million | 256 | 1902 | 38045 | 134.56 ms / 363.18 ms | + | Mysql | 32 | 10 million | 256 | 1770 | 35416 | 144.55 ms / 383.33 ms | + +![](../media/sysbench-01.png) + +![](../media/sysbench-02.png) + +- `Select` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | TiDB | 32 | 1 million | 64 * 4 | 160299 | 1.61ms / 50.06 ms | + | TiDB | 32 | 1 million | 128 * 4 | 183347 | 2.85 ms / 8.66 ms | + | TiDB | 32 | 1 million | 256 * 4 | 196515 | 5.42 ms / 14.43 ms | + | TiDB | 32 | 5 million | 256 * 4 | 187628 | 5.66 ms / 15.04 ms | + | TiDB | 32 | 10 million | 256 * 4 | 187440 | 5.65 ms / 15.37 ms | + | Mysql | 32 | 1 million | 64 | 359572 | 0.18 ms / 0.45 ms | + | Mysql | 32 | 1 million | 128 | 410426 |0.31 ms / 0.74 ms | + | Mysql | 32 | 1 million | 256 | 396867 | 0.64 ms / 1.58 ms | + | Mysql | 32 | 5 million | 
256 | 386866 | 0.66 ms / 1.64 ms | + | Mysql | 32 | 10 million | 256 | 388273 | 0.66 ms / 1.64 ms | + +![](../media/sysbench-03.png) + +![](../media/sysbench-04.png) + +- `Insert` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | TiDB | 32 | 1 million | 64 * 4 | 25308 | 10.12 ms / 25.40 ms | + | TiDB | 32 | 1 million | 128 * 4 | 28773 | 17.80 ms / 44.58 ms | + | TiDB | 32 | 1 million | 256 * 4 | 32641 | 31.38 ms / 73.47 ms | + | TiDB | 32 | 5 million | 256 * 4 | 30430 | 33.65 ms / 79.32 ms | + | TiDB | 32 | 10 million | 256 * 4 | 28925 | 35.41 ms / 78.96 ms | + | Mysql | 32 | 1 million | 64 | 14806 | 4.32 ms / 9.39 ms | + | Mysql | 32 | 1 million | 128 | 14884 | 8.58 ms / 21.11 ms | + | Mysql | 32 | 1 million | 256 | 14508 | 17.64 ms / 44.98 ms | + | Mysql | 32 | 5 million | 256 | 10593 | 24.16 ms / 82.96 ms | + | Mysql | 32 | 10 million | 256 | 9813 | 26.08 ms / 94.10 ms | + +![](../media/sysbench-05.png) + +![](../media/sysbench-06.png) + +### Scenario two: TiDB horizontal scalability test + +The deployment and configuration details: + +``` +// TiDB deployment +172.16.20.3 4*tikv +172.16.10.2 1*tidb 1*pd 1*sysbench + +// Each physical node has three disks. 
+data3: 2 tikv (Optane SSD) +data2: 1 tikv +data1: 1 tikv + +// TiKV configuration +sync-log = false +grpc-concurrency = 8 +grpc-raft-conn-num = 24 +[defaultcf] +block-cache-size = "12GB" +[writecf] +block-cache-size = "5GB" +[raftdb.defaultcf] +block-cache-size = "2GB" +``` + +- OLTP RW test + + | - | Table count | Table size | Sysbench threads | TPS | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | :---: | + | 1 TiDB physical node | 32 | 1 million | 256 * 1 | 2495 | 49902 | 102.42 ms / 125.52 ms | + | 2 TiDB physical nodes | 32 | 1 million | 256 * 2 | 5007 | 100153 | 102.23 ms / 125.52 ms | + | 4 TiDB physical nodes | 32 | 1 million | 256 * 4 | 8984 | 179692 | 114.96 ms / 176.73 ms | + | 6 TiDB physical nodes | 32 | 5 million | 256 * 6 | 12953 | 259072 | 117.80 ms / 200.47 ms | + +![](../media/sysbench-07.png) + +- `Select` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | 1 TiDB physical node | 32 | 1 million | 256 * 1 | 71841 | 3.56 ms / 8.74 ms | + | 2 TiDB physical nodes | 32 | 1 million | 256 * 2 | 146615 | 3.49 ms / 8.74 ms | + | 4 TiDB physical nodes | 32 | 1 million | 256 * 4 | 289933 | 3.53 ms / 8.74 ms | + | 6 TiDB physical nodes | 32 | 5 million | 256 * 6 | 435313 | 3.55 ms / 9.17 ms | + +![](../media/sysbench-08.png) + +- `Insert` RW test + + | - | Table count | Table size | Sysbench threads | QPS | Latency(avg / .95) | + | :---: | :---: | :---: | :---: | :---: | :---: | + | 3 TiKV physical nodes | 32 | 1 million | 256 * 3 | 40547 | 18.93 ms / 38.25 ms | + | 5 TiKV physical nodes | 32 | 1 million | 256 * 3 | 60689 | 37.96 ms / 29.9 ms | + | 7 TiKV physical nodes | 32 | 1 million | 256 * 3 | 80087 | 9.62 ms / 21.37 ms | + +![](../media/sysbench-09.png) diff --git a/v2.0/benchmark/tpch.md b/v2.0/benchmark/tpch.md new file mode 100755 index 0000000000000..322fc387a0a07 --- /dev/null +++ b/v2.0/benchmark/tpch.md @@ -0,0 +1,106 @@
+--- +title: TiDB TPC-H 50G Performance Test Report V2.0 +category: benchmark +--- + +# TiDB TPC-H 50G Performance Test Report + +## Test purpose + +This test aims to compare the performance of TiDB 1.0 and TiDB 2.0 in the OLAP scenario. + +> **Note**: Different test environments might lead to different test results. + +## Test environment + +### Machine information + +System information: + +| Machine IP | Operating system | Kernel version | File system type | +|--------------|------------------------|------------------------------|--------------| +| 172.16.31.2 | Ubuntu 17.10 64bit | 4.13.0-16-generic | ext4 | +| 172.16.31.3 | Ubuntu 17.10 64bit | 4.13.0-16-generic | ext4 | +| 172.16.31.4 | Ubuntu 17.10 64bit | 4.13.0-16-generic | ext4 | +| 172.16.31.6 | CentOS 7.4.1708 64bit | 3.10.0-693.11.6.el7.x86\_64 | ext4 | +| 172.16.31.8 | CentOS 7.4.1708 64bit | 3.10.0-693.11.6.el7.x86\_64 | ext4 | +| 172.16.31.10 | CentOS 7.4.1708 64bit | 3.10.0-693.11.6.el7.x86\_64 | ext4 | + +Hardware information: + +| Type | Name | +|------------|------------------------------------------------------| +| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | +| RAM | 128GB, 16GB RDIMM * 8, 2400MT/s, dual channel, x8 bitwidth | +| DISK | Intel P4500 4T SSD * 2 | +| Network Card | 10 Gigabit Ethernet | + +### TPC-H + +[tidb-bench/tpch](https://github.com/pingcap/tidb-bench/tree/master/tpch) + +### Cluster topology + +| Machine IP | Deployment Instance | +|--------------|---------------------| +| 172.16.31.2 | TiKV \* 2 | +| 172.16.31.3 | TiKV \* 2 | +| 172.16.31.6 | TiKV \* 2 | +| 172.16.31.8 | TiKV \* 2 | +| 172.16.31.10 | TiKV \* 2 | +| 172.16.31.10 | PD \* 1 | +| 172.16.31.4 | TiDB \* 1 | + +### Corresponding TiDB version information + +TiDB 1.0: + +| Component | Version | Commit Hash | +|--------|-------------|--------------------------------------------| +| TiDB | v1.0.9 | 4c7ee3580cd0a69319b2c0c08abdc59900df7344 | +| TiKV | v1.0.8 | 2bb923a4cd23dbf68f0d16169fd526dc5c1a9f4a |
+| PD | v1.0.8 | 137fa734472a76c509fbfd9cb9bc6d0dc804a3b7 | + +TiDB 2.0: + +| Component | Version | Commit Hash | +|--------|-------------|--------------------------------------------| +| TiDB | v2.0.0-rc.6 | 82d35f1b7f9047c478f4e1e82aa0002abc8107e7 | +| TiKV | v2.0.0-rc.6 | 8bd5c54966c6ef42578a27519bce4915c5b0c81f | +| PD | v2.0.0-rc.6 | 9b824d288126173a61ce7d51a71fc4cb12360201 | + +## Test result + +| Query ID | TiDB 2.0 | TiDB 1.0 | +|-----------|--------------------|------------------| +| 1 | 33.915s | 215.305s | +| 2 | 25.575s | NaN | +| 3 | 59.631s | 196.003s | +| 4 | 30.234s | 249.919s | +| 5 | 31.666s | OOM | +| 6 | 13.111s | 118.709s | +| 7 | 31.710s | OOM | +| 8 | 31.734s | 800.546s | +| 9 | 34.211s | 630.639s | +| 10 | 30.774s | 133.547s | +| 11 | 27.692s | 78.026s | +| 12 | 27.962s | 124.641s | +| 13 | 27.676s | 174.695s | +| 14 | 19.676s | 110.602s | +| 15 | NaN | NaN | +| 16 | 24.890s | 40.529s | +| 17 | 245.796s | NaN | +| 18 | 91.256s | OOM | +| 19 | 37.615s | NaN | +| 20 | 44.167s | 212.201s | +| 21 | 31.466s | OOM | +| 22 | 31.539s | 125.471s | + +![TPC-H Query Result](../media/tpch-query-result.png) + +It should be noted that: + +- In the diagram above, the orange bars represent the query results of Release 1.0 and the blue bars represent the query results of Release 2.0. The y-axis represents the processing time of queries in seconds; the shorter, the better. +- Query 15 is tagged with "NaN" because VIEW is currently not supported in either TiDB 1.0 or 2.0. We have plans to provide VIEW support in a future release. +- Queries 2, 17, and 19 in the TiDB 1.0 column are tagged with "NaN" because TiDB 1.0 did not return results for these queries. +- Queries 5, 7, 18, and 21 in the TiDB 1.0 column are tagged with "OOM" because the memory consumption was too high.
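The per-query speedup of TiDB 2.0 over TiDB 1.0 can be derived from the result table above. The snippet below is a minimal sketch (not part of the original report); the times are copied from the table, and queries without a valid TiDB 1.0 result (NaN or OOM) are skipped.

```python
# Query runtimes in seconds copied from the table above:
# query_id -> (tidb_2_0_seconds, tidb_1_0_seconds)
results = {
    1: (33.915, 215.305), 3: (59.631, 196.003), 4: (30.234, 249.919),
    6: (13.111, 118.709), 8: (31.734, 800.546), 9: (34.211, 630.639),
    10: (30.774, 133.547), 11: (27.692, 78.026), 12: (27.962, 124.641),
    13: (27.676, 174.695), 14: (19.676, 110.602), 16: (24.890, 40.529),
    20: (44.167, 212.201), 22: (31.539, 125.471),
}

# Speedup factor = old runtime / new runtime.
speedups = {q: round(t1 / t2, 1) for q, (t2, t1) in results.items()}
print(max(speedups.values()), min(speedups.values()))  # 25.2 1.6
```

Over the 14 comparable queries, the speedup ranges from roughly 1.6x (query 16) to about 25x (query 8).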
diff --git a/v2.0/circle.yml b/v2.0/circle.yml new file mode 100755 index 0000000000000..1060ea002f197 --- /dev/null +++ b/v2.0/circle.yml @@ -0,0 +1,43 @@ +version: 2 + +jobs: + build: + docker: + - image: andelf/doc-build:0.1.9 + working_directory: ~/pingcap/docs + branches: + only: + - master + - website-preview + steps: + - checkout + + - run: + name: "Special Check for Golang User - YOUR TAB SUCKS" + command: grep -RP '\t' * | tee | grep '.md' && exit 1; echo ok + + - run: + name: "Merge Markdown Files" + command: python3 scripts/merge_by_toc.py + + - run: + name: "Generate PDF" + command: scripts/generate_pdf.sh + + - deploy: + name: "Publish PDF" + command: | + sudo bash -c 'echo "119.188.128.5 uc.qbox.me" >> /etc/hosts'; + if [ "${CIRCLE_BRANCH}" == "master" ]; then + python3 scripts/upload.py output.pdf tidb-manual-en.pdf; + fi + if [ "${CIRCLE_BRANCH}" == "website-preview" ]; then + python3 scripts/upload.py output.pdf tidb-manual-en-preview.pdf; + fi + + - run: + name: "Copy Generated PDF" + command: mkdir /tmp/artifacts && cp output.pdf doc.md /tmp/artifacts + + - store_artifacts: + path: /tmp/artifacts diff --git a/v2.0/community.md b/v2.0/community.md new file mode 100755 index 0000000000000..6d1192048eaaf --- /dev/null +++ b/v2.0/community.md @@ -0,0 +1,12 @@ +--- +title: Connect with us +summary: Learn about how to connect with us.
+category: community +--- + +# Connect with us + +- **Twitter**: [@PingCAP](https://twitter.com/PingCAP) +- **Reddit**: https://www.reddit.com/r/TiDB/ +- **Stack Overflow**: https://stackoverflow.com/questions/tagged/tidb +- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user) diff --git a/v2.0/dev-guide/deployment.md b/v2.0/dev-guide/deployment.md new file mode 100755 index 0000000000000..8afbbe8313341 --- /dev/null +++ b/v2.0/dev-guide/deployment.md @@ -0,0 +1,13 @@ +# Build for deployment + +## Overview + +Note: **The easiest way to deploy TiDB is to use TiDB Ansible, see [Ansible Deployment](../op-guide/ansible-deployment.md).** + +Before you start, check the [supported platforms](./requirements.md#supported-platforms) and [prerequisites](./requirements.md#prerequisites) first. + +## Building and installing TiDB components + +You can use the [build script](../scripts/build.sh) to build and install TiDB components in the `bin` directory. + +You can use the [update script](../scripts/update.sh) to update all the TiDB components to the latest version. \ No newline at end of file diff --git a/v2.0/dev-guide/development.md b/v2.0/dev-guide/development.md new file mode 100755 index 0000000000000..bfcedbc31648e --- /dev/null +++ b/v2.0/dev-guide/development.md @@ -0,0 +1,68 @@ +# Build For Development + +## Overview + +If you want to develop the TiDB project, you can follow this guide. + +Before you begin, check the [supported platforms](./requirements.md#supported-platforms) and [prerequisites](./requirements.md#prerequisites) first. + +## Build TiKV + ++ Get TiKV source code from GitHub + + ```bash + git clone https://github.com/pingcap/tikv.git + cd tikv + ``` + ++ Run all unit tests: + + ```bash + make test + ``` + ++ Build in release mode: + + ```bash + make release + ``` + +## Build TiDB + ++ Make sure the GOPATH environment is set correctly. + ++ Get the TiDB source code. 
+ + ```bash + git clone https://github.com/pingcap/tidb.git $GOPATH/src/github.com/pingcap/tidb + ``` + ++ Enter `$GOPATH/src/github.com/pingcap/tidb` to build and install the binary in the `bin` directory. + + ```bash + make + ``` ++ Run unit tests. + + ```bash + make test + ``` + +## Build PD + ++ Get the PD source code. + + ```bash + git clone https://github.com/pingcap/pd.git $GOPATH/src/github.com/pingcap/pd + ``` + ++ Enter `$GOPATH/src/github.com/pingcap/pd` to build and install the binary in the `bin` directory. + + ```bash + make + ``` ++ Run unit tests. + + ```bash + make test + ``` diff --git a/v2.0/dev-guide/requirements.md b/v2.0/dev-guide/requirements.md new file mode 100755 index 0000000000000..d59e945b78e47 --- /dev/null +++ b/v2.0/dev-guide/requirements.md @@ -0,0 +1,29 @@ +# Build requirements + +## Supported platforms + +The following table lists TiDB support for common architectures and operating systems. + +|Architecture|Operating System|Status| +|------------|----------------|------| +|AMD64|Linux Ubuntu (14.04+)|Stable| +|AMD64|Linux CentOS (7+)|Stable| +|AMD64|Mac OSX|Experimental| + +## Prerequisites + ++ Go [1.9+](https://golang.org/doc/install) ++ Rust [nightly version](https://www.rust-lang.org/downloads.html) ++ GCC 4.8+ with static library ++ CMake 3.1+ + +The [check requirement script](../scripts/check_requirement.sh) can help you check prerequisites and +install the missing ones automatically. + + +TiKV is thoroughly tested against a specific Rust version, and the exact version can be found in the `RUST_VERSION` file in TiKV's root directory. We recommend that you use the same version.
To set Rust version, execute following command in your TiKV project directory: + +```bash +rustup override set nightly-2018-01-12 # For example if our current version is `nightly-2018-01-12` +cargo +nightly-2018-01-12 install rustfmt-nightly --version 0.3.4 +``` diff --git a/v2.0/etc/DiskPerformance.json b/v2.0/etc/DiskPerformance.json new file mode 100755 index 0000000000000..95921b5f4fe08 --- /dev/null +++ b/v2.0/etc/DiskPerformance.json @@ -0,0 +1,935 @@ +{ + "__inputs": [ + { + "name": "DS_USER-CREDITS", + "label": "user-credits", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "text", + "name": "Text", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 1, + "hideControls": true, + "id": null, + "links": [], + "refresh": false, + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "content": "You can click on an individual disk device on the legend to filter on it or multiple ones by holding Alt button.", + "datasource": "${DS_USER-CREDITS}", + "editable": true, + "error": false, + "height": "50px", + "id": 8, + "links": [], + "mode": "text", + "span": 12, + "style": {}, + "title": "", + "transparent": true, + "type": "text" + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows average latency for Reads and Writes IO Devices. Higher than typical latency for highly loaded storage indicates saturation (overload) and is frequent cause of performance problems. 
Higher than normal latency also can indicate internal storage problems.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(rate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[$interval]) / rate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (irate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(rate(node_disk_write_time_ms{device=~\"$device\", instance=\"$host\"}[$interval]) / rate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (irate(node_disk_write_time_ms{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Latency", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": 
"individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": "", + "logBase": 2, + "max": null, + "min": 0, + "show": true + }, + { + "format": "ms", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows amount of physical IOs (reads and writes) different devices are serving. Spikes in number of IOs served often corresponds to performance problems due to IO subsystem overload.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 15, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: 
{{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Operations", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "iops", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows volume of reads and writes the storage is handling. This can be better measure of IO capacity usage for network attached and SSD storage as it is often bandwidth limited. Amount of data being written to the disk can be used to estimate Flash storage life time.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 16, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_bytes_read{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_bytes_read{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + 
"target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_bytes_written{device=~\"$device\", instance=\"$host\"}[$interval]) or irate(node_disk_bytes_written{device=~\"$device\", instance=\"$host\"}[5m])", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Bandwidth", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows how heavily the disk was loaded for reads or writes, as the average number of outstanding requests over a period of time. High disk load is a good measure of actual storage utilization. Different storage types handle load differently: some show latency increases even at low load, while others can handle higher load with no problems.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 14, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[$interval])/1000 or irate(node_disk_read_time_ms{device=~\"$device\", instance=\"$host\"}[5m])/1000", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_write_time_ms{device=~\"$device\", instance=\"$host\"}[$interval])/1000 or irate(node_disk_write_time_ms{device=~\"$device\", instance=\"$host\"}[5m])/1000", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Load", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + 
"label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows disk utilization as the percentage of time during which there was at least one IO request in flight. It is designed to match the utilization reported by the iostat tool. It is not a very good measure of true IO capacity utilization; consider looking at the IO latency and Disk Load graphs instead.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 17, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sort": "avg", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_io_time_ms{device=~\"$device\", instance=\"$host\"}[$interval])/1000 or irate(node_disk_io_time_ms{device=~\"$device\", instance=\"$host\"}[5m])/1000", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk IO Utilization", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": 
null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows how effectively Operating System is able to merge logical IO requests into physical requests. This is a good measure of the IO locality which can be used for workload characterization.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 18, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(1 + rate(node_disk_reads_merged{device=~\"$device\", instance=\"$host\"}[$interval]) / rate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (1 + irate(node_disk_reads_merged{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_reads_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read Ratio: {{ device }}", + "metric": "", + "refId": "A", + "step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(1 + rate(node_disk_writes_merged{device=~\"$device\", instance=\"$host\"}[$interval]) / rate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[$interval])) or (1 + irate(node_disk_writes_merged{device=~\"$device\", instance=\"$host\"}[5m]) / irate(node_disk_writes_completed{device=~\"$device\", instance=\"$host\"}[5m]))", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": 
"Write Ratio: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk Operations Merge Ratio", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": { + "Read IO size: sdb": "#2F575E", + "Read: sdb": "#3F6833" + }, + "bars": false, + "datasource": "${DS_USER-CREDITS}", + "decimals": 2, + "description": "Shows average size of a single disk operation.", + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 20, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 1, + "points": true, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_sectors_read{instance=\"$host\", device=~\"$device\"}[$interval]) * 512 / rate(node_disk_reads_completed{instance=\"$host\", device=~\"$device\"}[$interval]) or irate(node_disk_sectors_read{instance=\"$host\", device=~\"$device\"}[5m]) * 512 / irate(node_disk_reads_completed{instance=\"$host\", device=~\"$device\"}[5m]) ", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Read size: {{ device }}", + "metric": "", + "refId": "A", + 
"step": 300, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_sectors_written{instance=\"$host\", device=~\"$device\"}[$interval]) * 512 / rate(node_disk_writes_completed{instance=\"$host\", device=~\"$device\"}[$interval]) or irate(node_disk_sectors_written{instance=\"$host\", device=~\"$device\"}[5m]) * 512 / irate(node_disk_writes_completed{instance=\"$host\", device=~\"$device\"}[5m]) ", + "interval": "$interval", + "intervalFactor": 1, + "legendFormat": "Write size: {{ device }}", + "metric": "", + "refId": "B", + "step": 300, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Disk IO Size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Disk Stats", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "allFormat": "glob", + "auto": true, + "auto_count": 200, + "auto_min": "1s", + "current": { + "text": "auto", + "value": "$__auto_interval" + }, + "datasource": "Prometheus", + "hide": 0, + "includeAll": false, + "label": "Interval", + "multi": false, + "multiFormat": "glob", + "name": "interval", + "options": [ + { + "selected": true, + "text": "auto", + "value": "$__auto_interval" + }, + { + "selected": false, + "text": "1s", + "value": "1s" + }, + { + "selected": false, + "text": "5s", + "value": "5s" + }, + { + "selected": false, + "text": "1m", + "value": "1m" + }, 
+ { + "selected": false, + "text": "5m", + "value": "5m" + }, + { + "selected": false, + "text": "1h", + "value": "1h" + }, + { + "selected": false, + "text": "6h", + "value": "6h" + }, + { + "selected": false, + "text": "1d", + "value": "1d" + } + ], + "query": "1s,5s,1m,5m,1h,6h,1d", + "refresh": 2, + "type": "interval" + }, + { + "allFormat": "glob", + "allValue": null, + "current": {}, + "datasource": "${DS_USER-CREDITS}", + "hide": 0, + "includeAll": false, + "label": "Host", + "multi": false, + "multiFormat": "regex values", + "name": "host", + "options": [], + "query": "label_values(node_disk_reads_completed, instance)", + "refresh": 1, + "refresh_on_load": false, + "regex": "", + "sort": 1, + "tagValuesQuery": "instance", + "tags": [], + "tagsQuery": "up", + "type": "query", + "useTags": false + }, + { + "allFormat": "glob", + "allValue": null, + "current": {}, + "datasource": "${DS_USER-CREDITS}", + "hide": 0, + "includeAll": true, + "label": "Device", + "multi": true, + "multiFormat": "regex values", + "name": "device", + "options": [], + "query": "label_values(node_disk_reads_completed{instance=\"$host\", device!~\"dm-.+\"}, device)", + "refresh": 1, + "refresh_on_load": false, + "regex": "", + "sort": 1, + "tagValuesQuery": "instance", + "tags": [], + "tagsQuery": "up", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-12h", + "to": "now" + }, + "timepicker": { + "collapse": false, + "enable": true, + "notice": false, + "now": true, + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "status": "Stable", + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ], + "type": "timepicker" + }, + "timezone": "browser", + "title": "Disk Performance", + "version": 1 +} \ No newline at end of file diff --git a/v2.0/etc/Drainer.json b/v2.0/etc/Drainer.json new file mode 100755 index 0000000000000..7185065829ceb --- /dev/null +++ 
b/v2.0/etc/Drainer.json @@ -0,0 +1,1070 @@ +{ + "__inputs": [ + { + "name": "DS_Drainer", + "label": "Drainer", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": "Singlestat", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [], + "refresh": "5s", + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(binlog_pump_rpc_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} : {{method}}", + "metric": "binlog_cistern_rpc_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "RPC QPS(pump)", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, rate(binlog_pump_rpc_duration_seconds_bucket[1m]))", + "intervalFactor": 2, + "legendFormat": "{{instance}} : {{method}}", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% RPC Latency(pump)", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + 
"format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 34, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_window{marker=\"upper\", }/(2^18*10^3)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_window", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave upper boundary", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 40, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": 
{ + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_window{marker=\"lower\", }/(2^18*10^3)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_window", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave lower boundary", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 37, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 2, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_position{}/((2^18)*1000)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_position", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave position", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + 
"colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 28, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 2, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_error_binlog_count{}", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_error_binlog_count", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "error binlogs", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 29, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + 
"postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 2, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "binlog_drainer_query_tikv_count{}", + "intervalFactor": 2, + "metric": "binlog_drainer_query_tikv_count", + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "slave tikv query", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "avg" + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 38, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "(binlog_drainer_window{marker=\"upper\", } - ignoring(marker)binlog_drainer_position{})/(2^18*10^3)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_position", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "synchronization delay", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": 
"none", + "label": "seconds", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 6, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(binlog_drainer_event{}[1m])", + "intervalFactor": 2, + "legendFormat": "", + "metric": "binlog_drainer_event", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Drainer Event", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 15, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": 
[], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, rate(binlog_drainer_txn_duration_time_bucket[1m]))", + "intervalFactor": 2, + "legendFormat": "{{instance}}:{{job}}", + "metric": "binlog_drainer_txn_duration_time_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% drainer txn latency", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 9, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "go_goroutines{job=\"binlog\"}", + "intervalFactor": 2, + "metric": "go_goroutines", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Goroutine", + "tooltip": { + 
"msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_Drainer}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 39, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "go_memstats_heap_inuse_bytes{job=\"binlog\"}", + "intervalFactor": 2, + "metric": "go_goroutines", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memory", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bits", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-5m", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + 
"10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Drainer", + "version": 1 +} \ No newline at end of file diff --git a/v2.0/etc/Syncer.json b/v2.0/etc/Syncer.json new file mode 100755 index 0000000000000..db8ef34108afe --- /dev/null +++ b/v2.0/etc/Syncer.json @@ -0,0 +1,791 @@ +{ + "__inputs": [ + { + "name": "DS_BIGDATA-CLUSTER", + "label": "bigdata-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [], + "refresh": "5s", + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 1, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(syncer_binlog_events_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}}", + "metric": "syncer_binlog_events_total", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": 
null, + "timeShift": null, + "title": "binlog events", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 2, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": true, + "sort": "current", + "sortDesc": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_binlog_pos{node=\"syncer\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "metric": "", + "refId": "A", + "step": 30 + }, + { + "expr": "syncer_binlog_pos{node=\"master\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "binlog pos", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 4, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_binlog_file{node=\"master\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "refId": "A", + "step": 30 + }, + { + "expr": "syncer_binlog_file{node=\"syncer\"}", + "intervalFactor": 2, + "legendFormat": "{{job}} {{node}}", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "syncer_binlog_file", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "decimals": null, + "fill": 1, + "id": 5, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + 
"links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_gtid", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "syncer_gtid", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 2 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": " syncer_binlog_file{node=\"master\"} - ON(instance, job) syncer_binlog_file{node=\"syncer\"} ", + "intervalFactor": 10, + "legendFormat": "{{job}}", + "refId": "A", + "step": 50 + }, + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "name": "syncer_binlog_file alert", + "noDataState": "no_data", + "notifications": [ + { + "id": 1 + } + ] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 6, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", 
+ "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": " syncer_binlog_file{node=\"master\"} - ON(instance, job) syncer_binlog_file{node=\"syncer\"} ", + "intervalFactor": 10, + "legendFormat": "{{job}}", + "refId": "A", + "step": 100 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 2 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "syncer_binlog_file", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Binlog file", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(syncer_binlog_skipped_events_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{job}} {{type}}", 
+ "metric": "syncer_binlog_skipped_events_total", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "binlog skipped events", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 20 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "name": "syncer_txn_costs_gauge_in_second alert", + "noDataState": "no_data", + "notifications": [ + { + "id": 1 + } + ] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_BIGDATA-CLUSTER}", + "fill": 1, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "syncer_txn_costs_gauge_in_second", + "intervalFactor": 2, + "legendFormat": 
"{{job}}", + "refId": "A", + "step": 20 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 20 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "syncer_txn_costs_gauge_in_second", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-3h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Syncer", + "version": 24 +} \ No newline at end of file diff --git a/v2.0/etc/node.json b/v2.0/etc/node.json new file mode 100755 index 0000000000000..5444feb20b4d1 --- /dev/null +++ b/v2.0/etc/node.json @@ -0,0 +1,2490 @@ +{ + "__inputs": [ + { + "name": "DS_TEST-CLUSTER", + "label": "test-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": 
"Singlestat", + "version": "" + } + ], + "annotations": { + "list": [ + { + "datasource": "${DS_TEST-CLUSTER}", + "enable": true, + "expr": "ALERTS{instance=\"$host\", alertstate=\"firing\"}", + "iconColor": "rgb(252, 5, 0)", + "name": "Alert", + "tagKeys": "severity", + "textFormat": "{{ instance }} : {{alertstate}}", + "titleFormat": "{{ alertname }}" + }, + { + "datasource": "${DS_TEST-CLUSTER}", + "enable": true, + "expr": "ALERTS{instance=\"$host\",alertstate=\"pending\"}", + "iconColor": "rgb(228, 242, 9)", + "name": "Warning", + "tagKeys": "severity", + "textFormat": "{{ instance }} : {{ alertstate }}", + "titleFormat": "{{ alertname }}" + } + ] + }, + "description": "Prometheus for system metrics. \r\nLoad, CPU, RAM, network, process ... ", + "editable": true, + "gnetId": 159, + "graphTooltip": 1, + "hideControls": false, + "id": null, + "links": [ + { + "asDropdown": false, + "icon": "external link", + "tags": [], + "type": "dashboards" + } + ], + "refresh": "30s", + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": true, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "format": "s", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "50px", + "id": 19, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "s", + "postfixFontSize": "80%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 
0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "calculatedInterval": "10m", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_time{instance=\"$host\"} - node_boot_time{instance=\"$host\"}", + "interval": "5m", + "intervalFactor": 1, + "legendFormat": "", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_time%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20node_boot_time%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A25%22%2C%22step_input%22%3A%22%22%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 300 + } + ], + "thresholds": "300,3600", + "title": "System Uptime", + "transparent": false, + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "format": "none", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "55px", + "id": 25, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "count(node_cpu{mode=\"user\", instance=\"$host\"})", + "interval": "5m", + "intervalFactor": 1, + 
"refId": "A", + "step": 300 + } + ], + "thresholds": "", + "title": "Virtual CPUs", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "format": "bytes", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "55px", + "id": 26, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "node_memory_MemAvailable{instance=\"$host\"}", + "interval": "", + "intervalFactor": 1, + "legendFormat": "", + "metric": "node_memory_MemAvailable", + "refId": "A", + "step": 30 + } + ], + "thresholds": "", + "title": "RAM available", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": true, + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 0, + "editable": true, + "error": false, + "format": "percent", + "gauge": 
{ + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "height": "50px", + "id": 9, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "80%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": true, + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "calculatedInterval": "10m", + "datasourceErrors": {}, + "errors": {}, + "expr": "(node_memory_MemAvailable{instance=\"$host\"} or (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"})) / node_memory_MemTotal{instance=\"$host\"} * 100", + "interval": "5m", + "intervalFactor": 1, + "legendFormat": "", + "metric": "node_mem", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%20%2F%20node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20*%20100%22%2C%22range_input%22%3A%2243201s%22%2C%22end_input%22%3A%222015-9-15%2013%3A54%22%2C%22step_input%22%3A%22%22%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 300 + } + ], + "thresholds": "90,95", + "title": "Memory Available", + "transparent": false, + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [], + "valueName": "current" + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "height": 
"260px", + "id": 2, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "sum(rate(node_cpu{instance=\"$host\"}[$interval])) by (mode) * 100 / count_scalar(node_cpu{mode=\"user\", instance=\"$host\"}) or sum(irate(node_cpu{instance=\"$host\"}[5m])) by (mode) * 100 / count_scalar(node_cpu{mode=\"user\", instance=\"$host\"})", + "intervalFactor": 1, + "legendFormat": "{{ mode }}", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22sum(rate(node_cpu%7Binstance%3D%5C%22%24host%5C%22%7D%5B%24interval%5D))%20by%20(mode)%20*%20100%22%2C%22range_input%22%3A%223600s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 1 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "CPU Usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percent", + "label": "", + "logBase": 1, + "max": 100, + "min": 0, + "show": true + }, + { + "format": "short", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 18, + "instanceColors": {}, + "legend": { + 
"alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#E24D42", + "instance": "Load 1m" + }, + { + "color": "#E0752D", + "instance": "Load 5m" + }, + { + "color": "#E5AC0E", + "instance": "Load 15m" + } + ], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "10s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_load1{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Load 1m", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_load1%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%223601s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Afalse%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 1, + "target": "" + }, + { + "calculatedInterval": "10s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_load5{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Load 5m", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_load5%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%223600s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Afalse%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 1, + "target": "" + }, + { + "calculatedInterval": "10s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_load15{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Load 15m", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_load15%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%223600s%22%2C%22end_input%22%3A%222015-10-22%2015%3A27%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Afalse%2C%22tab%22%3A0%7D%5D", + "refId": "C", + "step": 1, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Load Average", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "System Stats", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "300px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "height": "", + "id": 6, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#0A437C", + "instance": "Used" + }, + { + "color": "#5195CE", + "instance": "Available" + }, + { + "color": "#052B51", + "instance": "Total", + "legend": false, + "stack": false + } + ], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + 
"expr": "node_memory_MemTotal{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Total", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "C", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemTotal{instance=\"$host\"} - (node_memory_MemAvailable{instance=\"$host\"} or (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"}))", + "intervalFactor": 1, + "legendFormat": "Used", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemAvailable{instance=\"$host\"} or (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"})", + "intervalFactor": 1, + "legendFormat": "Available", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memory", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "height": "", + "id": 29, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemTotal{instance=\"$host\"} - (node_memory_MemFree{instance=\"$host\"} + node_memory_Buffers{instance=\"$host\"} + node_memory_Cached{instance=\"$host\"})", + "intervalFactor": 1, + "legendFormat": "Used", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_MemFree{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Free", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_Buffers{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Buffers", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "D", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_Cached{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": 
"Cached", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "E", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memory Distribution", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": true, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 24, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#EF843C", + "instance": "Forks" + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_forks{instance=\"$host\"}[$interval]) or irate(node_forks{instance=\"$host\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Forks", + "metric": "", + 
"prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Forks", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": true, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 20, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#E24D42", + "instance": "Processes blocked waiting for I/O to complete" + }, + { + "color": "#6ED0E0", + "instance": "Processes in runnable state" + } + ], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_procs_running{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Processes in runnable state", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_procs_blocked{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Processes blocked waiting for I/O to complete", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_blocked%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Processes", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 27, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + 
"seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_context_switches{instance=\"$host\"}[$interval]) or irate(node_context_switches{instance=\"$host\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Context Switches", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Context Switches", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 28, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#D683CE", + "instance": "Interrupts" + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2m", 
+ "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_intr{instance=\"$host\"}[$interval]) or irate(node_intr{instance=\"$host\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Interrupts", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_procs_running%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%2243200s%22%2C%22end_input%22%3A%222015-9-18%2013%3A46%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Interrupts", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "none", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "id": 21, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_network_receive_bytes{instance=\"$host\", device!=\"lo\"}[$interval]) or irate(node_network_receive_bytes{instance=\"$host\", device!=\"lo\"}[5m])", + 
"intervalFactor": 1, + "legendFormat": "Inbound: {{ device }}", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_network_transmit_bytes{instance=\"$host\", device!=\"lo\"}[$interval]) or irate(node_network_transmit_bytes{instance=\"$host\", device!=\"lo\"}[5m])", + "intervalFactor": 1, + "legendFormat": "Outbound: {{ device }}", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Network Traffic", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": true, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "id": 22, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "sort": "min", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": false, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "sum(increase(node_network_receive_bytes{instance=\"$host\", device!=\"lo\"}[1h]))", + "interval": "1h", + "intervalFactor": 1, + "legendFormat": "Received", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 3600, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "sum(increase(node_network_transmit_bytes{instance=\"$host\", device!=\"lo\"}[1h]))", + "interval": "1h", + "intervalFactor": 1, + "legendFormat": "Sent", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 3600, + "target": "" + } + ], + "thresholds": [], + "timeFrom": "24h", + "timeShift": null, + "title": "Network Utilization Hourly", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 6, + "grid": {}, + "id": 23, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "color": "#584477", + "instance": "Used" + }, + { + "color": "#AEA2E0", + "instance": "Free" + } + ], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_SwapTotal{instance=\"$host\"} - node_memory_SwapFree{instance=\"$host\"}", + 
"intervalFactor": 1, + "legendFormat": "Used", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "node_memory_SwapFree{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "Free", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Swap", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 30, + "instanceColors": {}, + "legend": { + 
"alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pswpin{instance=\"$host\"}[$interval]) * 4096 or irate(node_vmstat_pswpin{instance=\"$host\"}[5m]) * 4096", + "intervalFactor": 1, + "legendFormat": "Swap In", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pswpout{instance=\"$host\"}[$interval]) * 4096 or irate(node_vmstat_pswpout{instance=\"$host\"}[5m]) * 4096", + "intervalFactor": 1, + "legendFormat": "Swap Out", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 2, + "target": "" + } + 
], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Swap Activity", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "description": "Number of TCP sockets in state inuse.", + "fill": 1, + "id": 32, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "node_sockstat_TCP_inuse{instance=\"$host\"}", + "intervalFactor": 1, + "legendFormat": "TCP In Use", + "metric": "", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "TCP In Use", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "New row", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": 
false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 31, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "hideEmpty": false, + "max": true, + "min": true, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pgpgin{instance=\"$host\"}[$interval]) * 1024 or irate(node_vmstat_pgpgin{instance=\"$host\"}[5m]) * 1024", + "intervalFactor": 1, + "legendFormat": "Read", + "metric": "", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 1, + "target": "" + }, + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_vmstat_pgpgout{instance=\"$host\"}[$interval]) * 1024 or irate(node_vmstat_pgpgout{instance=\"$host\"}[5m]) * 1024", + "intervalFactor": 1, + "legendFormat": "Write", + "metric": "", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "B", + "step": 1, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Throughput", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 35, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_io_time_ms{instance=\"$host\"}[1m]) / 1000", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": "node_disk_io_time_ms", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Util", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 36, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_io_now{instance=\"$host\"}[1m])", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": "node_disk_io_time_ms", + "prometheusLink": 
"/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O in Progress", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 37, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_read_time_ms{instance=\"$host\"}[1m]) / rate(node_disk_reads_completed{instance=\"$host\"}[1m])", + "intervalFactor": 1, + "legendFormat": "{{ device }}", + "metric": 
"node_disk_io_time_ms", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Average Read Time", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 2, + "grid": {}, + "id": 38, + "instanceColors": {}, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "calculatedInterval": "2s", + "datasourceErrors": {}, + "errors": {}, + "expr": "rate(node_disk_write_time_ms{instance=\"$host\"}[1m]) / rate(node_disk_writes_completed{instance=\"$host\"}[1m])", + "intervalFactor": 1, + 
"legendFormat": "{{ device }}", + "metric": "node_disk_io_time_ms", + "prometheusLink": "/api/datasources/proxy/1/graph#%5B%7B%22expr%22%3A%22node_memory_MemTotal%7Binstance%3D%5C%22%24host%5C%22%7D%20-%20(node_memory_MemFree%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Buffers%7Binstance%3D%5C%22%24host%5C%22%7D%20%2B%20node_memory_Cached%7Binstance%3D%5C%22%24host%5C%22%7D)%22%2C%22range_input%22%3A%22900s%22%2C%22end_input%22%3A%222015-10-22%2015%3A25%22%2C%22step_input%22%3A%22%22%2C%22stacked%22%3Atrue%2C%22tab%22%3A0%7D%5D", + "refId": "A", + "step": 2, + "target": "" + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "I/O Average Write Time", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "", + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "bytes", + "logBase": 1, + "max": null, + "min": 0, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "I/O", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 33, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "node_filefd_allocated{instance=\"$host\"}", + "intervalFactor": 2, + "legendFormat": "Allocated File Descriptor", + "metric": "node_filefd_allocated", 
+ "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Allocated File Descriptor", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 34, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "node_filefd_maximum{instance=\"$host\"}", + "intervalFactor": 2, + "legendFormat": "Maximum File Descriptor", + "metric": "node_filefd_maximum", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Maximum File Descriptor", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Dashboard Row", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + 
"list": [ + { + "allFormat": "glob", + "auto": true, + "auto_count": 200, + "auto_min": "1s", + "current": { + "text": "5s", + "value": "5s" + }, + "datasource": "test-cluster", + "hide": 0, + "includeAll": false, + "label": "Interval", + "multi": false, + "multiFormat": "glob", + "name": "interval", + "options": [ + { + "selected": false, + "text": "auto", + "value": "$__auto_interval" + }, + { + "selected": false, + "text": "1s", + "value": "1s" + }, + { + "selected": true, + "text": "5s", + "value": "5s" + }, + { + "selected": false, + "text": "1m", + "value": "1m" + }, + { + "selected": false, + "text": "5m", + "value": "5m" + }, + { + "selected": false, + "text": "1h", + "value": "1h" + }, + { + "selected": false, + "text": "6h", + "value": "6h" + }, + { + "selected": false, + "text": "1d", + "value": "1d" + } + ], + "query": "1s,5s,1m,5m,1h,6h,1d", + "refresh": 2, + "type": "interval" + }, + { + "allFormat": "glob", + "allValue": null, + "current": {}, + "datasource": "${DS_TEST-CLUSTER}", + "hide": 0, + "includeAll": false, + "label": "Host", + "multi": false, + "multiFormat": "regex values", + "name": "host", + "options": [], + "query": "label_values(node_boot_time,instance)", + "refresh": 1, + "refresh_on_load": false, + "regex": "", + "sort": 3, + "tagValuesQuery": "instance", + "tags": [], + "tagsQuery": "up", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": { + "collapse": false, + "enable": true, + "notice": false, + "now": true, + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "status": "Stable", + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ], + "type": "timepicker" + }, + "timezone": "browser", + "title": "TiDB Cluster - node", + "version": 0 +} diff --git a/v2.0/etc/overview.json b/v2.0/etc/overview.json new file mode 100755 index 0000000000000..c00ce7401ee46 --- 
/dev/null +++ b/v2.0/etc/overview.json @@ -0,0 +1,2747 @@ +{ + "__inputs": [ + { + "name": "DS_TIDB-CLUSTER", + "label": "${DS_TIDB-CLUSTER}", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": "Singlestat", + "version": "" + }, + { + "type": "panel", + "id": "table", + "name": "Table", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [], + "refresh": "30s", + "rows": [ + { + "collapse": false, + "height": 250, + "panels": [ + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": null, + "editable": true, + "error": false, + "format": "bytes", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": false + }, + "id": 27, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 4, + "sparkline": { + "fillColor": "rgba(77, 135, 25, 0.18)", + "full": true, + "lineColor": "rgb(21, 179, 65)", + "show": true + }, + "targets": [ + { + "expr": 
"pd_cluster_status{instance=\"$instance\",type=\"storage_capacity\"}", + "intervalFactor": 2, + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "Storage Capacity", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "format": "bytes", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "hideTimeOverride": false, + "id": 28, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 4, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": true, + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"storage_size\"}", + "intervalFactor": 2, + "refId": "A", + "step": 4 + } + ], + "thresholds": "", + "title": "Current Storage Size", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "columns": [ + { + "text": "Current", + "value": "current" + } + ], + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fontSize": "120%", + "hideTimeOverride": false, + "id": 18, + "links": [], + "pageSize": null, + "repeat": 
null, + "scroll": false, + "showHeader": true, + "sort": { + "col": null, + "desc": false + }, + "span": 4, + "styles": [ + { + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "pattern": "Metric", + "sanitize": false, + "type": "string" + }, + { + "colorMode": "cell", + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "decimals": 0, + "pattern": "Current", + "thresholds": [ + "1", + "2" + ], + "type": "number", + "unit": "short" + } + ], + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_up_count\"}", + "interval": "", + "intervalFactor": 2, + "legendFormat": "Up Stores", + "metric": "pd_cluster_status", + "refId": "A", + "step": 2 + }, + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_down_count\"}", + "intervalFactor": 2, + "legendFormat": "Down Stores", + "refId": "B", + "step": 2 + }, + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_offline_count\"}", + "intervalFactor": 2, + "legendFormat": "Offline Stores", + "refId": "C", + "step": 2 + }, + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"store_tombstone_count\"}", + "intervalFactor": 2, + "legendFormat": "Tombstone Stores", + "refId": "D", + "step": 2 + } + ], + "title": "Store Status", + "transform": "timeseries_aggregations", + "transparent": false, + "type": "table" + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.8 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "avg(pd_cluster_status{type=\"storage_size\"}) / avg(pd_cluster_status{type=\"storage_capacity\"})", + "hide": false, + "intervalFactor": 4, + "legendFormat": "used ratio", + "refId": "B", + "step": 4 + }, + "params": [ + "B", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": 
"Storage used space is above 80%.", + "name": "Current Storage Usage alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 22, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "used ratio", + "yaxis": 2 + } + ], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"storage_size\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "strage size", + "refId": "A", + "step": 2 + }, + { + "expr": "avg(pd_cluster_status{type=\"storage_size\"}) / avg(pd_cluster_status{type=\"storage_capacity\"})", + "hide": false, + "intervalFactor": 4, + "legendFormat": "used ratio", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.8 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Current Storage Usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 23, + "legend": { + "alignAsTable": 
true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "intervalFactor": 2, + "legendFormat": "{{grpc_method}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% completed_cmds_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 24, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": true, + "targets": [ + { + "expr": 
"histogram_quantile(0.9999, sum(rate(grpc_server_handling_seconds_sum{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "hide": true, + "intervalFactor": 2, + "legendFormat": "{{grpc_method}}", + "refId": "A", + "step": 4 + }, + { + "expr": "rate(grpc_server_handling_seconds_sum{instance=\"$instance\"}[30s]) / rate(grpc_server_handling_seconds_count{instance=\"$instance\"}[30s])", + "intervalFactor": 2, + "legendFormat": "{{grpc_method}}", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "average completed_cmds_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.2 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "min(pd_cluster_status{type=\"region_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + }, + "params": [ + "B", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Store balance ratio is high", + "name": "Region Balance Ratio alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 4, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 26, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": false, 
+ "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"region_balance_ratio\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "\bstore balance ratio", + "refId": "A", + "step": 2 + }, + { + "expr": "min(pd_cluster_status{type=\"region_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.2 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Region Balance Ratio", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.2 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "min(pd_cluster_status{type=\"leader_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + }, + "params": [ + "B", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Region leader balance ratio is high.", + "name": "Leader Banlace Ratio alert", + "noDataState": "no_data", + 
"notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 4, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 25, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "pd_cluster_status{instance=\"$instance\",type=\"leader_balance_ratio\"}", + "intervalFactor": 2, + "legendFormat": "leader max diff ratio", + "refId": "A", + "step": 2 + }, + { + "expr": "min(pd_cluster_status{type=\"leader_balance_ratio\"})", + "hide": true, + "intervalFactor": 2, + "legendFormat": "ratio", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.2 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Leader Balance Ratio", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "PD", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sort": "current", + "sortDesc": false, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.98, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket[30s])) by (type, le))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{type}} 98th percentile", + "refId": "A", + "step": 2 + }, + { + "expr": "avg(rate(pd_client_request_handle_requests_duration_seconds_sum[30s])) by (type) / avg(rate(pd_client_request_handle_requests_duration_seconds_count[30s])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}} average", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "handle_requests_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 2, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + 
"values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_query_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}} {{status}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "QPS", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 4, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "fill": 0, + "lines": false + } + ], + "span": 12, + "stack": true, + "steppedLine": true, + "targets": [ + { + "expr": "tidb_server_connections", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 2 + }, + { + "expr": "sum(tidb_server_connections)", + "intervalFactor": 2, + "legendFormat": "total", + "refId": "B", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": 
null, + "timeShift": null, + "title": "Connection Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": null, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(irate(tidb_executor_statement_node_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Statement Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 10 + ], + "type": "gt" + 
}, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 2 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query duration for 99th percentile is high.", + "name": "Query Duration 99th percentile alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 5, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "B", + "step": 2 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 10 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 99th percentile", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + 
"name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 2 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Schema lease error.", + "name": "Schema Lease Error alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 6, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 2 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + 
"title": "Schema Lease Error Rate", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "TiDB", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 299, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_command_duration_seconds_bucket[1m])) by (le,type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_scheduler_command_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% scheduler command duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + 
"format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 8, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_scheduler_command_duration_seconds_bucket[1m])) by (le,type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% scheduler command duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 9, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": 
"flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_storage_engine_async_request_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% storage async request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 10, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% storage async request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + 
"type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_server_report_failure_msg_total[1m])) by (type,instance,job,store_id)", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{type}}-to-{{store_id}}", + "metric": "tikv_server_raft_store_msg_total", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "server report failure msg", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 12, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_sent_message_total{type=\"vote\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}-vote", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "vote", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 13, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req!=\"\"}[1m])) by (le,type,req))", + "intervalFactor": 2, + "legendFormat": 
"{{type}}-{{req}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% coprocessor request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 14, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req!=\"\"}[1m])) by (le,type,req))", + "intervalFactor": 2, + "legendFormat": "{{type}}-{{req}}", + "metric": "tikv_coprocessor_request_duration_seconds_bucket", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": 
null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 15, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 8, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_worker_pending_task_total[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Pending Task", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "fill": 1, + "id": 16, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_engine_stall_micro_seconds[30s])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "stall", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tikv_channel_full_total[1m])) by (type, job)", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{type}}", + "metric": "", + "refId": "A", + "step": 2 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV channel full", + "name": "TiKV channel full alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 3, + "grid": {}, + "id": 17, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 5, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_channel_full_total[1m])) by (type, job)", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{type}}", + "metric": "", + "refId": "A", + "step": 2 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "channel full", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 20, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 7, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"leader\"}) by (instance,job)", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 2 + }, + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"leader\"}) ", + "hide": true, + "intervalFactor": 2, + "legendFormat": "total", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "leader", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + 
"value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 19, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 5, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_pd_msg_send_duration_seconds_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% send_message_duration_seconds", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 21, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + 
"show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 7, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"region\"}) by (job,instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "TiKV", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "allValue": null, + "current": {}, + "datasource": "${DS_TIDB-CLUSTER}", + "hide": 0, + "includeAll": false, + "label": null, + "multi": false, + "name": "instance", + "options": [], + "query": "label_values(pd_cluster_status, instance)", + "refresh": 1, + "regex": "", + "sort": 0, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-5m", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "TiDB Cluster - Overview", + 
"version": 1 +} diff --git a/v2.0/etc/pd.json b/v2.0/etc/pd.json new file mode 100755 index 0000000000000..02da3d1317660 --- /dev/null +++ b/v2.0/etc/pd.json @@ -0,0 +1,3556 @@ +{ + "style": "dark", + "rows": [ + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "Cluster", + "height": "300px", + "repeatRowId": null, + "panels": [ + { + "id": 55, + "title": "PD Role", + "span": 2, + "type": "singlestat", + "targets": [ + { + "refId": "A", + "expr": "delta(pd_server_tso{type=\"save\",instance=\"$instance\"}[15s])", + "intervalFactor": 2, + "metric": "pd_server_tso", + "step": 60, + "legendFormat": "" + } + ], + "links": [], + "datasource": "${DS_TIDB-CLUSTER}", + "maxDataPoints": 100, + "interval": null, + "cacheTimeout": null, + "format": "none", + "prefix": "", + "postfix": "", + "nullText": null, + "valueMaps": [ + { + "value": "null", + "op": "=", + "text": "N/A" + } + ], + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "rangeMaps": [ + { + "from": "1", + "to": "100000", + "text": "Leader" + }, + { + "from": "0", + "to": "1", + "text": "Follower" + } + ], + "mappingType": 2, + "nullPointMode": "connected", + "valueName": "current", + "prefixFontSize": "50%", + "valueFontSize": "50%", + "postfixFontSize": "50%", + "thresholds": "", + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "sparkline": { + "show": false, + "full": false, + "lineColor": "rgb(31, 120, 193)", + "fillColor": "rgba(31, 118, 189, 0.18)" + }, + "gauge": { + "show": false, + "minValue": 0, + "maxValue": 100, + "thresholdMarkers": true, + "thresholdLabels": false + } + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + 
"thresholds": "", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": false, + "maxValue": 100, + "minValue": 0 + }, + "id": 10, + "maxDataPoints": 100, + "mappingType": 1, + "span": 2, + "colorBackground": false, + "title": "Storage Capacity", + "sparkline": { + "full": true, + "fillColor": "rgba(77, 135, 25, 0.18)", + "lineColor": "rgb(21, 179, 65)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\",namespace=~\"$namespace\",type=\"storage_capacity\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "decbytes", + "editable": true, + "cacheTimeout": null, + "postfix": "", + "decimals": null, + "interval": null, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": false + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": true, + "maxValue": 100, + "minValue": 0 + }, + "id": 38, + "maxDataPoints": 100, + "mappingType": 1, + "span": 2, + "colorBackground": false, + "title": "Current Storage Size", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": 
"sum(pd_cluster_status{instance=\"$instance\",namespace=~\"$namespace\",type=\"storage_size\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "decbytes", + "editable": true, + "hideTimeOverride": false, + "postfix": "", + "decimals": 1, + "interval": null, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "cacheTimeout": null, + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": false + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": false, + "maxValue": 100, + "minValue": 0 + }, + "id": 20, + "maxDataPoints": 100, + "mappingType": 1, + "span": 2, + "colorBackground": false, + "title": "Number of Regions", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\",namespace=~\"$namespace\",type=\"region_count\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "none", + "editable": true, + "cacheTimeout": null, + "postfix": "", + "interval": null, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": false 
+ }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "0.01,0.5", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": true, + "maxValue": 1, + "minValue": 0 + }, + "id": 37, + "maxDataPoints": 100, + "mappingType": 1, + "span": 1, + "colorBackground": false, + "title": "Leader Balance Ratio", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "1 - min(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"leader\"}) / max(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"leader\"})", + "step": 60, + "refId": "A" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "percentunit", + "editable": true, + "hideTimeOverride": false, + "postfix": "", + "interval": null, + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "cacheTimeout": null, + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": true + }, + { + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "links": [], + "valueMaps": [ + { + "text": "N/A", + "value": "null", + "op": "=" + } + ], + "thresholds": "0.05,0.5", + "rangeMaps": [ + { + "text": "N/A", + "from": "null", + "to": "null" + } + ], + "nullPointMode": "null", + "prefix": "", + "gauge": { + "thresholdLabels": false, + "show": false, + "thresholdMarkers": true, + 
"maxValue": 1, + "minValue": 0 + }, + "id": 36, + "maxDataPoints": 100, + "mappingType": 1, + "span": 1, + "colorBackground": false, + "title": "Region Balance Ratio", + "sparkline": { + "full": true, + "fillColor": "rgba(31, 118, 189, 0.18)", + "lineColor": "rgb(31, 120, 193)", + "show": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "1 - min(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"region\"}) / max(pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"region\"})", + "step": 60, + "refId": "A", + "legendFormat": "" + } + ], + "prefixFontSize": "50%", + "valueName": "current", + "type": "singlestat", + "valueFontSize": "80%", + "format": "percentunit", + "editable": true, + "cacheTimeout": null, + "postfix": "", + "decimals": null, + "interval": null, + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "nullText": null, + "postfixFontSize": "50%", + "colorValue": true + }, + { + "sort": { + "col": null, + "desc": false + }, + "styles": [ + { + "pattern": "Metric", + "type": "string", + "sanitize": false, + "dateFormat": "YYYY-MM-DD HH:mm:ss" + }, + { + "colorMode": "cell", + "thresholds": [ + "1", + "2" + ], + "colors": [ + "rgba(50, 172, 45, 0.97)", + "rgba(237, 129, 40, 0.89)", + "rgba(245, 54, 54, 0.9)" + ], + "type": "number", + "pattern": "Current", + "decimals": 0, + "unit": "short" + } + ], + "repeat": null, + "span": 2, + "pageSize": null, + "links": [], + "title": "Store Status", + "editable": true, + "transform": "timeseries_aggregations", + "showHeader": true, + "scroll": false, + "targets": [ + { + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\", type=\"store_up_count\"})", + "metric": "pd_cluster_status", + "interval": "", + "step": 20, + "legendFormat": "Up Stores", + "intervalFactor": 2, + "refId": "A" + }, + { + 
"intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_disconnected_count\"})", + "step": 20, + "refId": "B", + "legendFormat": "Disconnect Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_low_space_count\"})", + "step": 20, + "refId": "C", + "legendFormat": "LowSpace Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_down_count\"})", + "step": 20, + "refId": "D", + "legendFormat": "Down Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_offline_count\"})", + "step": 20, + "refId": "E", + "legendFormat": "Offline Stores" + }, + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\",type=\"store_tombstone_count\"})", + "step": 20, + "refId": "F", + "legendFormat": "Tombstone Stores" + } + ], + "transparent": false, + "hideTimeOverride": false, + "fontSize": "100%", + "datasource": "${DS_TIDB-CLUSTER}", + "error": false, + "type": "table", + "id": 39, + "columns": [ + { + "text": "Current", + "value": "current" + } + ] + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 0.8, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 1, + "steppedLine": false, + "id": 9, + "fill": 0, + "span": 4, + "title": "Current Storage Usage", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "targets": [ + { + "hide": false, + "expr": "pd_cluster_status{instance=\"$instance\", 
namespace=~\"$namespace\", type=\"storage_size\"}", + "step": 10, + "legendFormat": "storage size", + "intervalFactor": 2, + "refId": "A" + }, + { + "hide": false, + "expr": "avg(pd_cluster_status{type=\"storage_size\", namespace=~\"$namespace\"}) / avg(pd_cluster_status{type=\"storage_capacity\", namespace=~\"$namespace\"})", + "step": 20, + "legendFormat": "used ratio", + "intervalFactor": 4, + "refId": "B" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "decbytes", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "percentunit", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [ + { + "alias": "used ratio", + "yaxis": 2 + } + ], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "alert": { + "noDataState": "no_data", + "name": "Current Storage Usage alert", + "frequency": "60s", + "notifications": [], + "handler": 1, + "executionErrorState": "alerting", + "message": "Storage used space is above 80%.", + "conditions": [ + { + "operator": { + "type": "and" + }, + "query": { + "params": [ + "B", + "5m", + "now" + ], + "model": { + "hide": false, + "expr": "avg(pd_cluster_status{type=\"storage_size\"}) / avg(pd_cluster_status{type=\"storage_capacity\"})", + "step": 20, + "legendFormat": "used ratio", + "intervalFactor": 4, + "refId": "B" + }, + "datasourceId": 1 + }, + "evaluator": { + "type": "gt", + "params": [ + 0.8 + ] + }, + "reducer": { + "type": "avg", + "params": [] + }, + "type": "query" + } + ] + }, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 2 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 2, + "steppedLine": false, 
+ "id": 18, + "fill": 1, + "span": 4, + "title": "Current Regions Count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "total": false, + "show": false, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(pd_cluster_status{instance=\"$instance\", namespace=~\"$namespace\", type=\"region_count\"})", + "step": 10, + "refId": "A", + "legendFormat": "count" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "none", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "none", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": null + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 27, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(delta(pd_schedule_operators_count{instance=\"$instance\"}[1m])) by (type)", + "step": 10, + "refId": "A", + "legendFormat": "{{type}}" + } + ], + "fill": 1, + "span": 4, + "title": "Schedule operators count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": false, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "opm", + "min": 
"0", + "label": "operation/minute" + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 0.2, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 2, + "steppedLine": false, + "id": 40, + "fill": 1, + "span": 6, + "title": "Store leader score", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "alignAsTable": true, + "total": false, + "show": false, + "max": true, + "min": true, + "current": true, + "values": false, + "avg": false + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"leader\"}", + "step": 10, + "refId": "A", + "legendFormat": "tikv-{{store}}" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": false, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 4 + }, + { + "bars": false, + 
"timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 0.2, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "linewidth": 2, + "steppedLine": false, + "id": 41, + "fill": 1, + "span": 6, + "title": "Store region score", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "alignAsTable": true, + "total": false, + "show": false, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "targets": [ + { + "hide": false, + "expr": "pd_scheduler_balance_score{instance=\"$instance\", namespace=~\"$namespace\", type=\"region\"}", + "step": 10, + "legendFormat": "store balance ratio", + "intervalFactor": 2, + "refId": "A" + } + ], + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 4 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "Scheduler", + "height": 288, + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 45, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(delta(pd_schedule_operators_count{instance=\"$instance\"}[1m])) by (type,state)", + "step": 10, 
"refId": "A", + "legendFormat": "{{type}}-{{state}}" + } + ], + "fill": 1, + "span": 4, + "title": "Schedule operators count with state", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 47, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "pd_scheduler_status{type=\"limit\",instance=\"$instance\"}", + "metric": "pd_scheduler_status", + "step": 10, + "legendFormat": "{{kind}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 0, + "span": 4, + "title": "Scheduler limit", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false, + "sortDesc": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + 
"seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 46, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "pd_scheduler_status{type=\"allow\",instance=\"$instance\"}", + "metric": "pd_scheduler_status", + "step": 10, + "legendFormat": "{{kind}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 0, + "span": 4, + "title": "Scheduler allow", + "tooltip": { + "sort": 1, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "hideEmpty": true, + "values": false, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 1 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 50, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"hot_write_region_as_leader\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{store}}" + } + ], + "fill": 0, + "span": 6, + "title": "Hot region's leader distribution", + "tooltip": { + "sort": 
0, + "shared": true, + "value_type": "cumulative" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 51, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"hot_write_region_as_peer\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{store}}" + } + ], + "fill": 0, + "span": 6, + "title": "Hot region's peer distribution", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": 
"${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 48, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"total_written_bytes_as_leader\"}", + "metric": "pd_hotspot_status", + "step": 10, + "legendFormat": "{{store}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "Hot region's leader written bytes", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + "values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "bytes", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 49, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "pd_hotspot_status{instance=\"$instance\",type=\"total_written_bytes_as_peer\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{store}}" + } + ], + "fill": 1, + "span": 6, + "title": "Hot region's peer written bytes", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": false, + "show": true, + "current": false, + 
"values": false, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "decbytes", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TIDB-CLUSTER}", + "fill": 1, + "id": 52, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(delta(pd_scheduler_event_count{instance=\"$instance\", type=\"balance-leader-scheduler\"}[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "pd_scheduler_event_count", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Balance leader scheduler", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TIDB-CLUSTER}", + "fill": 1, + "id": 53, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(delta(pd_scheduler_event_count{instance=\"$instance\", type=\"balance-region-scheduler\"}[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "pd_scheduler_event_count", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Balance region scheduler", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "PD", + "height": "300px", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "linewidth": 1, + "steppedLine": false, + "id": 1, + "fill": 1, + "span": 6, + "title": "completed commands rate", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + 
"alignAsTable": true, + "avg": false, + "hideZero": true + }, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(grpc_server_handling_seconds_count{instance=\"$instance\"}[1m])) by (grpc_method)", + "step": 10, + "refId": "A", + "legendFormat": "{{grpc_method}}" + } + ], + "yaxes": [ + { + "logBase": 10, + "show": true, + "max": null, + "format": "ops", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": null + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 2, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "step": 10, + "refId": "A", + "legendFormat": "{{grpc_method}}" + } + ], + "fill": 0, + "span": 6, + "title": "99% completed_cmds_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "sortDesc": true, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } 
+ ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 23, + "linewidth": 1, + "steppedLine": true, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.9999, sum(rate(grpc_server_handling_seconds_sum{instance=\"$instance\"}[5m])) by (grpc_method, le))", + "step": 4, + "legendFormat": "{{grpc_method}}", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "rate(grpc_server_handling_seconds_sum{instance=\"$instance\"}[30s]) / rate(grpc_server_handling_seconds_count{instance=\"$instance\"}[30s])", + "step": 10, + "refId": "B", + "legendFormat": "{{grpc_method}}" + } + ], + "fill": 0, + "span": 6, + "title": "average completed_cmds_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "sortDesc": true, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + 
"aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "lt", + "value": 0.1, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "id": 44, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "delta(etcd_disk_wal_fsync_duration_seconds_count[1m])", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}} etch disk wal fsync rate" + } + ], + "fill": 1, + "span": 6, + "title": "etch disk wal fsync rate", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "opm", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "alert": { + "noDataState": "no_data", + "name": "etch disk fsync", + "frequency": "60s", + "notifications": [], + "handler": 1, + "executionErrorState": "alerting", + "message": "PD etcd disk fsync is down", + "conditions": [ + { + "operator": { + "type": "and" + }, + "query": { + "params": [ + "A", + "1m", + "now" + ], + "model": { + "intervalFactor": 2, + "expr": "delta(etcd_disk_wal_fsync_duration_seconds_count[1m])", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}} etch disk wal fsync rate" + }, + "datasourceId": 1 + }, + "evaluator": { + "type": "lt", + "params": [ + 0.1 + ] + }, + "reducer": { + "type": "min", + "params": [] + }, + "type": "query" + } + ] + }, + "stack": false, + "timeShift": null, + 
"aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 1 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "Etcd", + "height": "300px", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 5, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(pd_txn_handle_txns_duration_seconds_count[5m])) by (instance, result)", + "step": 4, + "refId": "A", + "legendFormat": "{{instance}} : {{result}}" + } + ], + "fill": 1, + "span": 12, + "title": "handle_txns_count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 6, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "histogram_quantile(0.99, sum(rate(pd_txn_handle_txns_duration_seconds_bucket[5m])) 
by (instance, result, le))", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}} {{result}}" + } + ], + "fill": 1, + "span": 6, + "title": "99% handle_txns_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "connected", + "renderer": "flot", + "id": 24, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.9999, sum(rate(pd_txn_handle_txns_duration_seconds_bucket[1m])) by (instance, result, le))", + "step": 4, + "legendFormat": "{{instance}} : {{result}}", + "intervalFactor": 2, + "refId": "A" + }, + { + "hide": false, + "expr": "rate(pd_txn_handle_txns_duration_seconds_sum[30s]) / rate(pd_txn_handle_txns_duration_seconds_count[30s])", + "interval": "", + "step": 10, + "legendFormat": "{{instance}} average", + "intervalFactor": 2, + "refId": "B" + } + ], + "fill": 1, + "span": 6, + "title": "average handle_txns_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": 
true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 7, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "99% wal_fsync_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": 
false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "connected", + "renderer": "flot", + "id": 25, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.9999, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[1m])) by (instance, le))", + "metric": "", + "step": 4, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "rate(etcd_disk_wal_fsync_duration_seconds_sum[30s]) / rate(etcd_disk_wal_fsync_duration_seconds_count[30s])", + "step": 10, + "refId": "B", + "legendFormat": "{{instance}} average" + } + ], + "fill": 1, + "span": 6, + "title": "average wal_fsync_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + 
"timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 34, + "linewidth": 2, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(etcd_network_peer_round_trip_time_seconds_bucket[5m])) by (instance, le))", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "99% peer_round_trip_time_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 35, + "linewidth": 2, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(etcd_network_peer_round_trip_time_seconds_bucket[5m])) by (instance, le))", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "99.99% peer_round_trip_time_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "cumulative", + 
"msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "transparent": false, + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "TiDB", + "height": "300", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 28, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(pd_client_request_handle_requests_duration_seconds_count[1m])) by (type)", + "step": 4, + "refId": "A", + "legendFormat": "{{type}}" + } + ], + "fill": 1, + "span": 12, + "title": "handle_requests_count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "values": true, + "alignAsTable": true, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, 
+ "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 29, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": false, + "expr": "histogram_quantile(0.98, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket[30s])) by (type, le))", + "step": 4, + "legendFormat": "{{type}} 98th percentile", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "avg(rate(pd_client_request_handle_requests_duration_seconds_sum[30s])) by (type) / avg(rate(pd_client_request_handle_requests_duration_seconds_count[30s])) by (type)", + "step": 4, + "refId": "B", + "legendFormat": "{{type}} average" + } + ], + "fill": 1, + "span": 12, + "title": "handle_requests_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "sort": "current", + "rightSide": true, + "total": false, + "sideWidth": 300, + "min": false, + "max": true, + "show": true, + "current": true, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "sortDesc": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + 
"grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h4", + "repeatIteration": null, + "title": "TiKV", + "height": "300", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 31, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "sum(rate(tikv_pd_msg_send_duration_seconds_count[1m]))", + "step": 4, + "refId": "A", + "legendFormat": "" + } + ], + "fill": 1, + "span": 12, + "title": "send_message_count", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": false, + "total": false, + "min": false, + "max": false, + "show": false, + "current": false, + "values": false, + "alignAsTable": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null", + "renderer": "flot", + "id": 32, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "histogram_quantile(0.95, sum(rate(tikv_pd_msg_send_duration_seconds_bucket[30s])) by (le))", + "step": 10, + 
"refId": "A", + "legendFormat": "" + } + ], + "fill": 1, + "span": 6, + "title": "95% send_message_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": false, + "total": false, + "min": false, + "max": false, + "show": false, + "current": false, + "values": false, + "alignAsTable": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 33, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": true, + "expr": "histogram_quantile(0.98, sum(rate(tikv_pd_msg_send_duration_seconds_bucket[60s])) by (type, le))", + "step": 4, + "legendFormat": "98th percentile", + "intervalFactor": 2, + "refId": "A" + }, + { + "intervalFactor": 2, + "expr": "rate(tikv_pd_msg_send_duration_seconds_sum[30s]) / rate(tikv_pd_msg_send_duration_seconds_count[30s])", + "step": 10, + "refId": "B", + "legendFormat": "{{job}}" + } + ], + "fill": 0, + "span": 6, + "title": "send_message_duration_seconds", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": true, + "show": true, + "current": false, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": 
false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [], + "nullPointMode": "null as zero", + "renderer": "flot", + "id": 54, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "hide": false, + "expr": "sum(rate(pd_scheduler_region_heartbeat{instance=\"$instance\"}[1m])) by (store, type, status)", + "step": 4, + "legendFormat": "store{{store}}-{{type}}-{{status}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 0, + "span": 6, + "title": "Region heartbeat", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual", + "msResolution": false + }, + "legend": { + "rightSide": true, + "total": false, + "min": false, + "max": true, + "show": true, + "current": false, + "hideEmpty": true, + "values": true, + "alignAsTable": true, + "avg": false, + "hideZero": true + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "ops", + "min": "0", + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "s", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "error": false, + "editable": true, + "grid": {}, + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": 
"${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": false + }, + { + "repeat": null, + "titleSize": "h6", + "repeatIteration": null, + "title": "Nodes", + "height": "", + "repeatRowId": null, + "panels": [ + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 4, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "id": 42, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "expr": "node_load1{job=\"tikv-node\"}", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + } + ], + "fill": 1, + "span": 6, + "title": "TiKV Node Load", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "alert": { + "noDataState": "no_data", + "name": "TiKV Node Load alert", + "frequency": "60s", + "notifications": [], + "handler": 1, + "executionErrorState": "alerting", + "message": "TiKV is under high load", + "conditions": [ + { + "operator": { + "type": "and" + }, + "query": { + "params": [ + "A", + "5m", + "now" + ], + "model": { + "expr": "node_load1{job=\"tikv-node\"}", + "metric": "", + "step": 10, + "legendFormat": "{{instance}}", + "intervalFactor": 2, + "refId": "A" + }, + "datasourceId": 1 + }, + "evaluator": { + "type": "gt", + "params": [ + 4 + ] + }, + "reducer": { + "type": "avg", + "params": [] + }, + "type": "query" + } + ] + 
}, + "stack": true, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5, + "decimals": 2 + }, + { + "bars": false, + "timeFrom": null, + "links": [], + "thresholds": [ + { + "colorMode": "critical", + "line": true, + "op": "gt", + "value": 8, + "fill": true + }, + { + "colorMode": "warning", + "line": true, + "op": "gt", + "value": 4, + "fill": true + } + ], + "nullPointMode": "null", + "renderer": "flot", + "id": 43, + "linewidth": 1, + "steppedLine": false, + "targets": [ + { + "intervalFactor": 2, + "expr": "node_load1{job=\"tidb-node\"}", + "step": 10, + "refId": "A", + "legendFormat": "{{instance}}" + } + ], + "fill": 1, + "span": 6, + "title": "TiDB Node Load", + "tooltip": { + "sort": 0, + "shared": true, + "value_type": "individual" + }, + "legend": { + "total": false, + "show": true, + "max": false, + "min": false, + "current": false, + "values": false, + "avg": false + }, + "yaxes": [ + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + }, + { + "logBase": 1, + "show": true, + "max": null, + "format": "short", + "min": null, + "label": null + } + ], + "xaxis": { + "show": true, + "values": [], + "mode": "time", + "name": null + }, + "seriesOverrides": [], + "percentage": false, + "type": "graph", + "stack": false, + "timeShift": null, + "aliasColors": {}, + "lines": true, + "points": false, + "datasource": "${DS_TIDB-CLUSTER}", + "pointradius": 5 + } + ], + "showTitle": true, + "collapse": true + } + ], + "editMode": false, + "links": [ + { + "tags": [], + "type": "dashboards", + "icon": "external link" + } + ], + "tags": [], + "graphTooltip": 1, + "hideControls": false, + "title": "TiDB Cluster - pd", + "editable": true, + "refresh": "30s", + "id": null, + "gnetId": null, + "timepicker": { + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ], + "refresh_intervals": [ + "5s", + 
"10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ] + }, + "__inputs": [ + { + "description": "", + "pluginName": "Prometheus", + "label": "tidb-cluster", + "pluginId": "prometheus", + "type": "datasource", + "name": "DS_TIDB-CLUSTER" + } + ], + "version": 18, + "time": { + "to": "now", + "from": "now-1h" + }, + "__requires": [ + { + "version": "4.0.1", + "type": "grafana", + "id": "grafana", + "name": "Grafana" + }, + { + "version": "1.0.0", + "type": "datasource", + "id": "prometheus", + "name": "Prometheus" + } + ], + "timezone": "browser", + "schemaVersion": 14, + "annotations": { + "list": [] + }, + "templating": { + "list": [ + { + "regex": "", + "sort": 0, + "multi": false, + "hide": 0, + "name": "instance", + "tags": [], + "allValue": null, + "tagValuesQuery": null, + "refresh": 1, + "label": null, + "current": {}, + "datasource": "${DS_TIDB-CLUSTER}", + "type": "query", + "query": "label_values(pd_cluster_status, instance)", + "useTags": false, + "tagsQuery": null, + "options": [], + "includeAll": false + }, + { + "allValue": ".*", + "current": {}, + "datasource": "${DS_TIDB-CLUSTER}", + "hide": 0, + "includeAll": true, + "label": "Namespace", + "multi": false, + "name": "namespace", + "options": [], + "query": "label_values(pd_cluster_status{instance=\"$instance\"}, namespace)", + "refresh": 1, + "regex": "", + "sort": 1, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + } +} diff --git a/v2.0/etc/tidb.json b/v2.0/etc/tidb.json new file mode 100755 index 0000000000000..7d722d5d49b68 --- /dev/null +++ b/v2.0/etc/tidb.json @@ -0,0 +1,3629 @@ +{ + "__inputs": [ + { + "name": "DS_TEST-CLUSTER", + "label": "test-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + 
"name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [ + { + "icon": "external link", + "tags": [], + "type": "dashboards" + } + ], + "refresh": "30s", + "rows": [ + { + "collapse": false, + "height": "240", + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.80, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "10s", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query duration at 80th percentile is high.", + "name": "Query Seconds 80 alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 23, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.80, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + { + "expr": "histogram_quantile(0.80, 
sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "B", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 80th percentile", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.95, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "10s", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query duration at 95th percentile is high.", + "name": "Query Duration 95th percentile alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1, + "legend": { + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": 
false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "A", + "step": 60 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{ instance }}", + "refId": "B", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 95th percentile", + "tooltip": { + "msResolution": true, + "shared": false, + "sort": 0, + "value_type": "cumulative" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [ + "max" + ] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 10 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Query duration for 99th percentile is high.", + "name": "Query Duration 99th percentile alert", + "noDataState": "no_data", + 
"notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 25, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "B", + "step": 60 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 10 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Query Duration 99th percentile", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "irate(tidb_server_handle_query_duration_seconds_sum[30s]) / irate(tidb_server_handle_query_duration_seconds_count[30s])", + "intervalFactor": 2, + 
"legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + }, + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Average query duration is high.", + "name": "Average Query Duration alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 37, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": false, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "irate(tidb_server_handle_query_duration_seconds_sum[30s]) / irate(tidb_server_handle_query_duration_seconds_count[30s])", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 60 + }, + { + "expr": "sum(irate(tidb_server_handle_query_duration_seconds_sum[30s])) / sum(irate(tidb_server_handle_query_duration_seconds_count[30s]))", + "intervalFactor": 2, + "legendFormat": "average", + "refId": "B", + "step": 60 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Average Query Duration", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": 
null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 2, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 8, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_query_total[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}} {{type}} {{status}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "QPS", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 42, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": false, + "show": true, + "sideWidth": 250, + "sort": "max", + 
"sortDesc": false, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_server_query_total[1m])) by (status)", + "intervalFactor": 2, + "legendFormat": "query {{status}}", + "refId": "A", + "step": 60 + }, + { + "expr": "sum(rate(tidb_server_query_total{status=\"OK\"}[1m] offset 1d))", + "intervalFactor": 3, + "legendFormat": "yesterday", + "refId": "B", + "step": 90 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "QPS Total", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": null, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 21, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 8, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(irate(tidb_executor_statement_node_total[1m])) by (type)", + "intervalFactor": 2, 
+ "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Statement Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Query", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 8, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "fill": 0, + "lines": false + } + ], + "span": 6, + "stack": true, + "steppedLine": true, + "targets": [ + { + "expr": "tidb_server_connections", + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "refId": "A", + "step": 30 + }, + { + "expr": "sum(tidb_server_connections)", + "intervalFactor": 2, + "legendFormat": "total", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Connection Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, 
+ "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1000000000 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "go_memstats_heap_inuse_bytes{job=~\"tidb.*\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "metric": "go_memstats_heap_inuse_bytes", + "refId": "B", + "step": 30 + }, + "params": [ + "B", + "10s", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB mem heap is over 1GiB", + "name": "TiDB Heap Memory Usage alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 3, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "go_memstats_heap_inuse_bytes{job=~\"tidb.*\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}}-{{job}}", + "metric": "go_memstats_heap_inuse_bytes", + "refId": "B", + "step": 30 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": 
"gt", + "value": 1000000000 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Heap Memory Usage", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Query", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 12, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [ + { + "type": "dashboard" + } + ], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_distsql_handle_query_duration_seconds_bucket[1m])) by (le))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "", + "metric": "tidb_distsql_handle_query_duration_seconds_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Distsql Seconds 99", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + 
"logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 14, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_distsql_query_total [1m]))", + "intervalFactor": 2, + "legendFormat": "", + "metric": "tidb_distsql_query_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Distsql QPS", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Distsql", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 40, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "rightSide": true, + 
"show": true, + "sort": "total", + "sortDesc": true, + "total": true, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_cop_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Coprocessor Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 41, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(tidb_tikvclient_cop_seconds_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Coprocessor Seconds 999", + 
"tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Coprocessor", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 5, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_txn_cmd_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Cmd Count", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": 
true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 4, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_txn_total[1m])) by (instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Txn Count", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 6, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, 
sum(rate(tidb_tikvclient_backoff_seconds_bucket[1m])) by (instance, le))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Retry Seconds 9999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 30, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tidb_tikvclient_request_seconds_bucket[1m])) by (le, instance, type))", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Request Seconds 9999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + 
{ + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 18, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tidb_tikvclient_txn_cmd_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Cmd Seconds 99", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 22, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": 
"flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(tidb_tikvclient_txn_cmd_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Cmd Seconds 9999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 44, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.90, sum(rate(tidb_tikvclient_txn_regions_num_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "90 Txn regions count", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "KV", + "titleSize": "h6" + }, + { + "collapse": false, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 33, + "legend": { + "alignAsTable": true, + "avg": true, + "current": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "avg", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1, sum(rate(tidb_tikvclient_txn_write_kv_count_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Count Per Txn", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 34, + "legend": { + "alignAsTable": true, + "avg": false, + "current": 
false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "avg", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1, sum(rate(tidb_tikvclient_txn_write_size_bucket[1m])) by (le, instance))", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Size Per Txn", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tidb_tikvclient_region_err_total[1m])) by (type, instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_session_execute_parse_duration_count", + "refId": "A", + "step": 30 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB report 'server is busy'", + "name": "TiDB TiClient Region Error alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_region_err_total[1m])) by (type, instance)", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_session_execute_parse_duration_count", + "refId": "A", + "step": 30 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "TiClient Region Error", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 32, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": false, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, 
+ "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_lock_resolver_actions_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tidb_tikvclient_lock_resolver_actions_total", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "LockResolve", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "KV 2", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 20, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(pd_client_cmd_handle_cmds_duration_seconds_bucket[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client cmd count", + "tooltip": { + "msResolution": false, + "shared": true, + 
"sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 35, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(pd_client_cmd_handle_cmds_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client cmd duration 999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 45, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + 
"lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.999, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client request duration 999", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 43, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 3, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(pd_client_cmd_handle_failed_cmds_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD Client cmd fail", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + 
"xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "PD Client", + "titleSize": "h6" + }, + { + "collapse": true, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 5 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "tidb_domain_load_schema_duration_sum / tidb_domain_load_schema_duration_count", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + "metric": "", + "refId": "A", + "step": 10 + }, + "params": [ + "A", + "5m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB load schema latency is over 5s", + "name": "Load Schema Duration alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 27, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "tidb_domain_load_schema_duration_sum / tidb_domain_load_schema_duration_count", + "intervalFactor": 2, + "legendFormat": "{{instance}}", + 
"metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 5 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Load Schema Duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "rate(tidb_domain_load_schema_total{type='failed'}[1m])", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}} failed", + "refId": "B", + "step": 10 + }, + "params": [ + "B", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiDB load schema fails", + "name": "Load schema alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 28, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "/.*failed/", + "bars": true + } + ], + "span": 4, + "stack": false, + 
"steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_domain_load_schema_total{type='succ'}[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} succ", + "metric": "tidb_domain_load_schema_duration_count", + "refId": "A", + "step": 10 + }, + { + "expr": "rate(tidb_domain_load_schema_total{type='failed'}[1m])", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{instance}} failed", + "refId": "B", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Load Schema QPS", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 10 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "Schema lease error.", + "name": "Schema Lease Error alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 29, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_server_schema_lease_error_counter[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} {{type}}", + "metric": "tidb_server_", + "refId": "A", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Schema Lease Error Rate", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Schema Load", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 9, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, 
+ "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_ddl_handle_job_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "ddl handle job duration", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "DDL Seconds 95", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": false, + "show": true, + "sortDesc": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_ddl_batch_add_or_del_data_succ_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "ddl batch", + "metric": "tidb_ddl_ba", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "DDL Batch Seconds 95", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + 
"logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 36, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_server_session_retry_count[1m]))", + "intervalFactor": 2, + "legendFormat": "session retry", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Session Retry", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 38, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sort": "max", + "sortDesc": true, + "total": true, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + 
"renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_backoff_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 2 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "KV Backoff Count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "DDL", + "titleSize": "h6" + }, + { + "collapse": true, + "height": 250, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 46, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tidb_statistics_auto_analyze_duration_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "auto analyze duration", + "refId": "A", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Auto Analyze Seconds 95", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + 
}, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "fill": 1, + "id": 47, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(tidb_statistics_auto_analyze_total{type='succ'}[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} succ", + "refId": "A", + "step": 30 + }, + { + "expr": "rate(tidb_statistics_auto_analyze_total{type='failed'}[1m])", + "intervalFactor": 2, + "legendFormat": "{{instance}} failed", + "refId": "B", + "step": 30 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Auto Analyze QPS", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": false, + "title": "Statistics", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-6h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + 
"1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "TiDB Cluster - tidb", + "version": 0 +} \ No newline at end of file diff --git a/v2.0/etc/tikv.json b/v2.0/etc/tikv.json new file mode 100755 index 0000000000000..09666f8d1c25d --- /dev/null +++ b/v2.0/etc/tikv.json @@ -0,0 +1,11914 @@ +{ + "__inputs": [ + { + "name": "DS_TEST-CLUSTER", + "label": "test-cluster", + "description": "", + "type": "datasource", + "pluginId": "prometheus", + "pluginName": "Prometheus" + } + ], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "4.1.2" + }, + { + "type": "panel", + "id": "graph", + "name": "Graph", + "version": "" + }, + { + "type": "datasource", + "id": "prometheus", + "name": "Prometheus", + "version": "1.0.0" + }, + { + "type": "panel", + "id": "singlestat", + "name": "Singlestat", + "version": "" + } + ], + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "id": null, + "links": [ + { + "icon": "external link", + "tags": [], + "type": "dashboards" + } + ], + "refresh": "1m", + "rows": [ + { + "collapse": false, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 34, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "total", + "lines": false + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(tikv_pd_heartbeat_tick_total{type=\"leader\"}) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "leader", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 37, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_pd_heartbeat_tick_total{type=\"region\"}) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": 
null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 3, + "grid": {}, + "id": 33, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_engine_size_bytes) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "cf size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 5, + "grid": {}, + "id": 56, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 0, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + 
"stack": true, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_engine_size_bytes) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "store size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tikv_channel_full_total[1m])) by (job, type)", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}}", + "metric": "", + "refId": "A", + "step": 10 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "avg" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV channel full", + "name": "TiKV channel full alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 3, + "grid": {}, + "id": 22, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + 
"seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_channel_full_total[1m])) by (job, type)", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "channel full", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 18, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_server_report_failure_msg_total[1m])) by (type,instance,job,store_id)", + "intervalFactor": 2, + "legendFormat": "{{job}} - {{type}} - to - {{store_id}}", + "metric": "tikv_server_raft_store_msg_total", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "server report failures", + 
"tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 57, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_region_written_keys_sum[1m])) by (job) / sum(rate(tikv_region_written_keys_count[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_region_written_keys_bucket", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region average written keys", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": 
true, + "error": false, + "fill": 1, + "grid": {}, + "id": 58, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_region_written_bytes_sum[1m])) by (job) / sum(rate(tikv_region_written_bytes_count[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_regi", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "region average written bytes", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 75, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + 
{ + "expr": "sum(rate(tikv_region_written_keys_count[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_region_written_keys_bucket", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "active written leaders", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 1481, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1.0, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "C", + "step": 10 
+ }, + { + "expr": "sum(rate(tikv_raftstore_region_size_sum[1m])) / sum(rate(tikv_raftstore_region_size_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "approximate region size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Server", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 1164, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le))", + "intervalFactor": 2, + 
"legendFormat": "95%", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_raft_process_duration_secs_sum{type='tick'}[1m])) / sum(rate(tikv_raftstore_raft_process_duration_secs_count{type='tick'}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "raft process tick duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 1165, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "C", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + 
"title": "95% raft process tick duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 1 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "max" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV raft process ready duration 99th percentile is above 1s", + "name": "TiKV raft process ready duration 99th percentile alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 12, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, 
sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_raft_process_duration_secs_sum{type='ready'}[1m])) / sum(rate(tikv_raftstore_raft_process_duration_secs_count{type='ready'}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "B", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "raft process ready duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 118, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "sort": "max", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, 
sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "C", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "95% raft process ready duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 5, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_ready_handled_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_raft_ready_handled_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft ready handled", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + 
{ + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 108, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_apply_proposal_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft proposals per ready", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 76, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": 
false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=~\"conf_change|transfer_leader\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_proposal_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft admin proposals", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 7, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=~\"local_read|normal|read_index\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_proposal_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft read/write proposals", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + 
"value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 119, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=~\"local_read|read_index\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft read proposals per server", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 120, + "legend": { + "alignAsTable": true, + "avg": false, + "current": 
true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_proposal_total{type=\"normal\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_proposal_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft write proposals per server", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 72, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_raftstore_log_lag_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_log_lag_bucket", 
+ "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% raft log lag", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 73, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_raftstore_propose_log_size_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_propose_log_size_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% raft log size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + 
}, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 77, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_admin_cmd_total{status=\"success\", type!=\"compact\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_admin_cmd_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft admin commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 21, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + 
"stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_admin_cmd_total{status=\"success\", type=\"compact\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_admin_cmd_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft compact commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 70, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_check_split_total{type!=\"ignore\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_raftstore_check_split_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "check split", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + 
"yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 71, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_raftstore_check_split_duration_seconds_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_raftstore_check_split_duration_seconds_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% check split duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 11, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + 
"rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_sent_message_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft sent messages", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 106, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_server_raft_message_recv_total[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft recv messages per server", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": 
"graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 25, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_sent_message_total{type=\"vote\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "vote", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1309, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, 
+ "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_raftstore_raft_dropped_message_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "raft dropped messages", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Raft", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 31, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 99%", + "metric": "", + "refId": 
"A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_apply_log_duration_seconds_sum[1m])) / sum(rate(tikv_raftstore_apply_log_duration_seconds_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "apply log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 32, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": " {{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% apply log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": 
"graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 39, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 99%", + "metric": "", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_append_log_duration_seconds_sum[1m])) / sum(rate(tikv_raftstore_append_log_duration_seconds_count[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "append log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": 
"short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 40, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le, job))", + "intervalFactor": 2, + "legendFormat": "{{job}} ", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% append log duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 41, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, 
+ "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_request_wait_time_duration_secs_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "tikv_raftstore_request_wait_time_duration_secs_bucket", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_request_wait_time_duration_secs_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_raftstore_request_wait_time_duration_secs_sum[1m])) / sum(rate(tikv_raftstore_request_wait_time_duration_secs_count[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% request wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 42, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_request_wait_time_duration_secs_bucket[1m])) by (le, 
job))", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% request wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Raft Ready", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 2, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_storage_command_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage command total", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + 
"format": "ops", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 8, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_storage_engine_async_request_total{status!~\"all|success\"}[1m])) by (status)", + "intervalFactor": 2, + "legendFormat": "{{status}}", + "metric": "tikv_raftstore_raft_process_duration_secs_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async request error", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 15, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + 
"max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"snapshot\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"snapshot\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_engine_async_request_duration_seconds_sum{type=\"snapshot\"}[1m])) / sum(rate(tikv_storage_engine_async_request_duration_seconds_count{type=\"snapshot\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async snapshot duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 109, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + 
"sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"write\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type=\"write\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_engine_async_request_duration_seconds_sum{type=\"write\"}[1m])) / sum(rate(tikv_storage_engine_async_request_duration_seconds_count{type=\"write\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async write duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1310, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + 
"linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "raft-95%", + "yaxis": 2 + } + ], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_batch_commands_total_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_batch_commands_total_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_batch_commands_total_sum[30s])) / sum(rate(tikv_storage_batch_commands_total_count[30s]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_raftstore_batch_snapshot_commands_total_bucket[30s])) by (le))", + "intervalFactor": 2, + "legendFormat": "raft-95%", + "refId": "D", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "storage async batch snapshot", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "Storage Batch Size", + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": "Raftstore Batch Size", + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Storage", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "height": "400", + "id": 
167, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_too_busy_total[1m])) by (stage)", + "intervalFactor": 2, + "legendFormat": "busy", + "refId": "A", + "step": 20 + }, + { + "expr": "sum(rate(tikv_scheduler_stage_total[1m])) by (stage)", + "intervalFactor": 2, + "legendFormat": "{{stage}}", + "refId": "B", + "step": 20 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler stage total", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "height": "", + "id": 1, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_scheduler_commands_pri_total[1m])) by (priority)", + "intervalFactor": 2, + "legendFormat": "{{priority}}", + "metric": "", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler priority commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "height": "", + "id": 193, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_scheduler_contex_total) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler pending commands", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + 
"min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Scheduler", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "height": "400", + "id": 168, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 12, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 12, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_too_busy_total{type=\"$command\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "busy", + "refId": "A", + "step": 4 + }, + { + "expr": "sum(rate(tikv_scheduler_stage_total{type=\"$command\"}[1m])) by (stage)", + "intervalFactor": 2, + "legendFormat": "{{stage}}", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler stage total", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 3, + 
"legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_command_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_command_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_command_duration_seconds_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_command_duration_seconds_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler command duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 194, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_latch_wait_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_latch_wait_duration_seconds_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_latch_wait_duration_seconds_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_latch_wait_duration_seconds_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler latch wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 195, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_kv_command_key_read_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "kv_command_key", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_kv_command_key_read_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_kv_command_key_read_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_kv_command_key_read_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler keys read", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 373, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_scheduler_kv_command_key_write_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "kv_command_key", + "refId": "A", + "step": 10 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_scheduler_kv_command_key_write_bucket{type=\"$command\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_scheduler_kv_command_key_write_sum{type=\"$command\"}[1m])) / sum(rate(tikv_scheduler_kv_command_key_write_count{type=\"$command\"}[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler keys written", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 560, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, 
+ "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\"}[1m])) by (tag)", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 675, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\", cf=\"lock\"}[1m])) by (tag)", + "intervalFactor": 
2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details [lock]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 829, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\", cf=\"write\"}[1m])) by (tag)", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details [write]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 830, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_scheduler_kv_scan_details{req=\"$command\", cf=\"default\"}[1m])) by (tag)", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler scan details [default]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": "command", + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Scheduler - $command", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 16, + "legend": { + "alignAsTable": true, + "avg": false, + 
"current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_duration_seconds_sum{req=\"select\"}[1m])) / sum(rate(tikv_coprocessor_request_duration_seconds_count{req=\"select\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 13, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + 
"lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "tikv_coprocessor_request_duration_seconds_bucket", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": "sum(rate(tikv_coprocessor_request_duration_seconds_sum{req=\"index\"}[1m])) / sum(rate(tikv_coprocessor_request_duration_seconds_count{req=\"index\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 115, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": 
"null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_duration_seconds_bucket[1m])) by (le, job,req))", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{req}}", + "metric": "tikv_coprocessor_request_duration_seconds_bucket", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor request duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 111, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + 
"legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_wait_seconds_sum{req=\"select\"}[1m])) / sum(rate(tikv_coprocessor_request_wait_seconds_count{req=\"select\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 112, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_wait_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_wait_seconds_sum{req=\"index\"}[1m])) / 
sum(rate(tikv_coprocessor_request_wait_seconds_count{req=\"index\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 116, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_wait_seconds_bucket[1m])) by (le, job,req))", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{req}}", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor wait duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 113, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"select\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_handle_seconds_sum{req=\"select\"}[1m])) / sum(rate(tikv_coprocessor_request_handle_seconds_count{req=\"select\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table handle duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 114, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_handle_seconds_bucket{req=\"index\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "B", + "step": 4 + }, + { + "expr": " sum(rate(tikv_coprocessor_request_handle_seconds_sum{req=\"index\"}[1m])) / sum(rate(tikv_coprocessor_request_handle_seconds_count{req=\"index\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "C", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index handle duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 5, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 117, + "legend": { + "alignAsTable": true, + "avg": 
false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_coprocessor_request_handle_seconds_bucket[1m])) by (le, job,req))", + "intervalFactor": 2, + "legendFormat": "{{job}}-{{req}}", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "95% coprocessor handle duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 52, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_keys_bucket[1m])) by (req)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{req}}", + "metric": "tikv_coprocessor_scan_keys_bucket", + "refId": "A", + 
"step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor scan keys", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 551, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_executor_count[1m])) by (type)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor executor count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 74, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_request_error[1m])) by (reason)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{reason}}", + "metric": "tikv_coprocessor_request_error", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor request errors", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 550, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_coprocessor_pending_request[1m])) by (req, priority)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{ req }} - {{priority}}", + "metric": "tikv_coprocessor_request_error", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor pending requests", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 552, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table scan details", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 122, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "repeat": null, + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"lock\", req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table scan details [lock]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 555, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + 
"values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"write\", req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor table scan details [write]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 556, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"default\", req=\"select\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": 
null, + "title": "coprocessor table scan details [default]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 553, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + 
"decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 554, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "repeat": "cf", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"lock\", req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details - [lock]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 557, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + 
"expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"write\", req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details - [write]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 558, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_coprocessor_scan_details{cf=\"default\", req=\"index\"}[1m])) by (tag)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{tag}}", + "metric": "scan_details", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor index scan details - [default]", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { 
+ "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Coprocessor", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 26, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1.0, sum(rate(tikv_storage_mvcc_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " max", + "metric": "", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_mvcc_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_mvcc_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 95%", + "metric": "", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_mvcc_versions_sum[1m])) / sum(rate(tikv_storage_mvcc_versions_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "MVCC Versions", + "tooltip": { + "msResolution": false, + 
"shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 559, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(1.0, sum(rate(tikv_storage_mvcc_gc_delete_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " max", + "metric": "", + "refId": "A", + "step": 4 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_storage_mvcc_gc_delete_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 4 + }, + { + "expr": "histogram_quantile(0.95, sum(rate(tikv_storage_mvcc_gc_delete_versions_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": " 95%", + "metric": "", + "refId": "C", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_mvcc_gc_delete_versions_sum[1m])) / sum(rate(tikv_storage_mvcc_gc_delete_versions_count[1m])) ", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "MVCC Delete Versions", + 
"tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 121, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_storage_command_total{type=\"gc\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "total", + "metric": "tikv_storage_command_total", + "refId": "A", + "step": 4 + }, + { + "expr": "sum(rate(tikv_storage_gc_skipped_counter[1m]))", + "intervalFactor": 2, + "legendFormat": "skipped", + "metric": "tikv_storage_gc_skipped_counter", + "refId": "B", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "GC Commands", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": 
"${DS_TEST-CLUSTER}", + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 966, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tidb_tikvclient_gc_worker_actions_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "GC Worker Actions", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": "", + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 0, + "editable": true, + "error": false, + "format": "s", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 27, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": 
"", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "max(tidb_tikvclient_gc_config{type=\"tikv_gc_life_time\"})", + "interval": "", + "intervalFactor": 2, + "refId": "A", + "step": 60 + } + ], + "thresholds": "", + "title": "GC LifeTime", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": false, + "colors": [ + "rgba(245, 54, 54, 0.9)", + "rgba(237, 129, 40, 0.89)", + "rgba(50, 172, 45, 0.97)" + ], + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 0, + "editable": true, + "error": false, + "format": "s", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "id": 28, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "null", + "nullText": null, + "postfix": "", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "span": 3, + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": false + }, + "targets": [ + { + "expr": "max(tidb_tikvclient_gc_config{type=\"tikv_gc_run_interval\"})", + "intervalFactor": 2, + "refId": "A", + "step": 60 + } + ], + "thresholds": "", + "title": "GC interval", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], 
+ "valueName": "current" + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "GC", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 35, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(delta(tikv_raftstore_raft_sent_message_total{type=\"snapshot\"}[1m]))", + "intervalFactor": 2, + "legendFormat": " ", + "refId": "A", + "step": 60 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "rate snapshot message", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "opm", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 36, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + 
"percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_server_send_snapshot_duration_seconds_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "send", + "refId": "A", + "step": 60 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_snapshot_duration_seconds_bucket{type=\"apply\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "apply", + "refId": "B", + "step": 60 + }, + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_raftstore_snapshot_duration_seconds_bucket{type=\"generate\"}[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "generate", + "refId": "C", + "step": 60 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% handle snapshot duration", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 38, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 4, + "stack": false, + "steppedLine": true, + "targets": [ + { + "expr": 
"sum(tikv_raftstore_snapshot_traffic_total) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "", + "refId": "A", + "step": 60 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "snapshot state count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 44, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_snapshot_size_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "size", + "metric": "tikv_snapshot_size_bucket", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% snapshot size", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + 
"label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 43, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.9999, sum(rate(tikv_snapshot_kv_count_bucket[1m])) by (le))", + "intervalFactor": 2, + "legendFormat": "count", + "metric": "tikv_snapshot_kv_count_bucket", + "refId": "A", + "step": 40 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99.99% snapshot kv count", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Snapshot", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 59, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": 
false, + "rightSide": true, + "show": true, + "sideWidth": 400, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_worker_handled_task_total[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Worker Handled Tasks", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 1395, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 400, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_worker_pending_task_total[1m])) by (name)", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "tikv_pd_heartbeat_tick_total", + "refId": "A", + "step": 4 + } + ], + 
"thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Worker Pending Tasks", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Task", + "titleSize": "h6" + }, + { + "collapse": true, + "height": 250, + "panels": [ + { + "alert": { + "conditions": [ + { + "evaluator": { + "params": [ + 0.8 + ], + "type": "gt" + }, + "operator": { + "type": "and" + }, + "query": { + "datasourceId": 1, + "model": { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"raftstore_.*\"}[1m])) by (job, name)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 20 + }, + "params": [ + "A", + "1m", + "now" + ] + }, + "reducer": { + "params": [], + "type": "max" + }, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "60s", + "handler": 1, + "message": "TiKV raftstore thread CPU usage is high", + "name": "TiKV raft store CPU alert", + "noDataState": "no_data", + "notifications": [] + }, + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 61, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": 
false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"raftstore_.*\"}[1m])) by (job, name)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 0.8 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "raft store CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 1, + "grid": {}, + "id": 79, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=\"apply_worker\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "async apply CPU", + "tooltip": 
{ + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 63, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"storage_schedul.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 64, + "legend": { + 
"alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"sched_worker.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "scheduler worker CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 78, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"sum(rate(tikv_thread_cpu_seconds_total{name=~\"endpoint.*\"}[1m])) by (job)", + "interval": "", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "coprocessor CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 67, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "hideEmpty": true, + "hideZero": false, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"snapshot_worker.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "snapshot worker CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": 
null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 68, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"split_check.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "split check CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "editable": true, + "error": false, + "fill": 0, + "grid": {}, + "id": 69, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "sort": null, + "sortDesc": null, + "total": false, + "values": true + }, 
+ "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "max(rate(tikv_thread_cpu_seconds_total{name=~\"rocksdb.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_thread_cpu_seconds_total", + "refId": "A", + "step": 4 + } + ], + "thresholds": [ + { + "colorMode": "warning", + "fill": true, + "line": true, + "op": "gt", + "value": 1 + }, + { + "colorMode": "critical", + "fill": true, + "line": true, + "op": "gt", + "value": 4 + } + ], + "timeFrom": null, + "timeShift": null, + "title": "rocksdb CPU", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 105, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 250, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_thread_cpu_seconds_total{name=~\"grpc.*\"}[1m])) by (job)", + "intervalFactor": 2, + "legendFormat": "{{job}}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + 
"timeFrom": null, + "timeShift": null, + "title": "grpc poll CPU", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Thread CPU", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 138, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_hit\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "memtable", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=~\"block_cache_data_hit|block_cache_filter_hit\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "block_cache", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_get_served{db=\"$db\", type=\"get_hit_l0\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "l0", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_get_served{db=\"$db\", type=\"get_hit_l1\"}[1m]))", + "intervalFactor": 2, + 
"legendFormat": "l1", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_get_served{db=\"$db\", type=\"get_hit_l2_and_up\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "l2_and_up", + "refId": "F", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Get Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 82, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_get_micro_seconds{db=\"$db\",type=\"get_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": 
"D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Get Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 129, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_seek\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "seek", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_seek_found\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "seek_found", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_next\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "next", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_next_found\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "next_found", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_prev\"}[1m]))", + "intervalFactor": 2, + 
"legendFormat": "prev", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_locate{db=\"$db\", type=\"number_db_prev_found\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "prev_found", + "metric": "", + "refId": "F", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Seek Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 125, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_seek_micro_seconds{db=\"$db\",type=\"seek_average\"})", + 
"intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Seek Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 139, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_write_served{db=\"$db\", type=~\"write_done_by_self|write_done_by_other\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "done", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_write_served{db=\"$db\", type=\"write_timeout\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "timeout", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_write_served{db=\"$db\", type=\"write_with_wal\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "with_wal", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + 
"values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 126, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_write_micro_seconds{db=\"$db\",type=\"write_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + 
"max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 137, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_wal_file_synced{db=\"$db\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "sync", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "WAL Sync Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 135, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": 
"avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_wal_file_sync_micro_seconds{db=\"$db\",type=\"wal_file_sync_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "WAL Sync Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 128, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_event_total{db=\"$db\"}[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": 
"tikv_engine_event_total", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 136, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_compaction_time{db=\"$db\",type=\"compaction_time_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction 
Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "µs", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 140, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_max\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_percentile99\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_percentile95\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_sst_read_micros{db=\"$db\", type=\"sst_read_micros_average\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "SST Read Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": 
"individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 87, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_max\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "max", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_percentile99\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "99%", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_percentile95\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "95%", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(rate(tikv_engine_write_stall{db=\"$db\", type=\"write_stall_average\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "avg", + "metric": "", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Stall Duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": 
[] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 103, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_memory_bytes{db=\"$db\", type=\"mem-tables\"}) by (cf)", + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memtable Size", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 88, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": null, + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + 
"renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_hit\"}[1m])) / (sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_hit\"}[1m])) + sum(rate(tikv_engine_memtable_efficiency{db=\"$db\", type=\"memtable_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "hit", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Memtable Hit", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 102, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_block_cache_size_bytes{db=\"$db\"}) by(cf)", + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Size", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": 
true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 80, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "connected", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "all", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "data", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "filter", + "metric": "", + "refId": "B", + "step": 
10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_hit\"}[1m])) / (sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_hit\"}[1m])) + sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_miss\"}[1m])))", + "intervalFactor": 2, + "legendFormat": "index", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_bloom_efficiency{db=\"$db\", type=\"bloom_prefix_useful\"}[1m])) / sum(rate(tikv_engine_bloom_efficiency{db=\"$db\", type=\"bloom_prefix_checked\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "bloom prefix", + "metric": "", + "refId": "E", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Hit", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "transparent": false, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percentunit", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "height": "", + "id": 467, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"block_cache_byte_read\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + 
"legendFormat": "total_read", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"block_cache_byte_write\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "total_written", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_bytes_insert\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "data_insert", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_bytes_insert\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "filter_insert", + "metric": "", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_bytes_evict\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "filter_evict", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_bytes_insert\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "index_insert", + "metric": "", + "refId": "F", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_bytes_evict\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "index_evict", + "metric": "", + "refId": "G", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "none", + 
"label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 468, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "total_add", + "metric": "", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_data_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "data_add", + "metric": "", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_filter_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "filter_add", + "metric": "", + "refId": "D", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_index_add\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "index_add", + "metric": "", + "refId": "E", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_cache_efficiency{db=\"$db\", type=\"block_cache_add_failures\"}[1m]))", + "intervalFactor": 2, + "legendFormat": "add_failures", + "metric": "", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Block Cache Operations", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + 
"yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "height": "", + "id": 132, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"keys_read\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "read", + "refId": "B", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"keys_written\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "written", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_compaction_num_corrupt_keys{db=\"$db\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "corrupt", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Keys Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + 
"bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 131, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(tikv_engine_estimate_num_keys{db=\"$db\"}) by (cf)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "metric": "tikv_engine_estimate_num_keys", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Total Keys", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "height": "", + "id": 85, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"bytes_read\"}[1m]))", + "hide": false, + "interval": "", + 
"intervalFactor": 2, + "legendFormat": "get", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"iter_bytes_read\"}[1m]))", + "hide": false, + "interval": "", + "intervalFactor": 2, + "legendFormat": "scan", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Read Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 133, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": 
"avg(tikv_engine_bytes_per_read{db=\"$db\",type=\"bytes_per_read_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Bytes / Read", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "height": "", + "id": 86, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"wal_file_bytes\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "wal", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"bytes_written\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "write", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Write Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 1, + 
"max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 0, + "id": 134, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "minSpan": 6, + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_max\"})", + "intervalFactor": 2, + "legendFormat": "max", + "refId": "A", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_percentile99\"})", + "intervalFactor": 2, + "legendFormat": "99%", + "refId": "B", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_percentile95\"})", + "intervalFactor": 2, + "legendFormat": "95%", + "refId": "C", + "step": 10 + }, + { + "expr": "avg(tikv_engine_bytes_per_write{db=\"$db\",type=\"bytes_per_write_average\"})", + "intervalFactor": 2, + "legendFormat": "avg", + "refId": "D", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Bytes / Write", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 10, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + 
"aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 90, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_compaction_flow_bytes{db=\"$db\", type=\"bytes_read\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "read", + "refId": "A", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_compaction_flow_bytes{db=\"$db\", type=\"bytes_written\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "written", + "refId": "C", + "step": 10 + }, + { + "expr": "sum(rate(tikv_engine_flow_bytes{db=\"$db\", type=\"flush_write_bytes\"}[1m]))", + "hide": false, + "intervalFactor": 2, + "legendFormat": "flushed", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction Flow", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 127, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": 
true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_pending_compaction_bytes{db=\"$db\"}[1m])) by (cf)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{cf}}", + "metric": "tikv_engine_pending_compaction_bytes", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compaction Pending Bytes", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "Bps", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 518, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_engine_read_amp_flow_bytes{db=\"$db\", type=\"read_amp_total_read_bytes\"}[1m])) by (job) / sum(rate(tikv_engine_read_amp_flow_bytes{db=\"$db\", type=\"read_amp_estimate_useful_bytes\"}[1m])) by (job)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": 
null, + "timeShift": null, + "title": "Read Amplification", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 863, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "avg(tikv_engine_compression_ratio{db=\"$db\"}) by (level)", + "hide": false, + "intervalFactor": 2, + "legendFormat": "level - {{level}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Compression Ratio", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 516, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, 
+ "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "tikv_engine_num_snapshots{db=\"$db\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Number of Snapshots", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 517, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "tikv_engine_oldest_snapshot_duration{db=\"$db\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{job}}", + "metric": "tikv_engine_oldest_snapshot_duration", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Oldest Snapshots Duration", + "tooltip": { + "shared": true, + "sort": 0, + 
"value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": "0", + "show": true + } + ] + } + ], + "repeat": "db", + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Rocksdb - $db", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 95, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_grpc_msg_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_grpc_msg_duration_seconds_bucket", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "grpc message count", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 
107, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_grpc_msg_fail_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "metric": "tikv_grpc_msg_fail_total", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "grpc message failed", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 97, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.8, sum(rate(tikv_grpc_msg_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + 
"timeShift": null, + "title": "80% grpc message duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 98, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 300, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "histogram_quantile(0.99, sum(rate(tikv_grpc_msg_duration_seconds_bucket[1m])) by (le, type))", + "intervalFactor": 2, + "legendFormat": "{{type}}", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "99% grpc message duration", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 10, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Grpc", + "titleSize": "h6" + }, + { + "collapse": true, + "height": "300", + "panels": [ + { + "aliasColors": {}, + "bars": 
false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1069, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_request_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ type }}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD requests", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1070, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_request_duration_seconds_sum[1m])) by (type) / sum(rate(tikv_pd_request_duration_seconds_count[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ 
type }}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD request duration (average)", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1215, + "legend": { + "alignAsTable": true, + "avg": false, + "current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_heartbeat_message_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ type }}", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD heartbeats", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "datasource": "${DS_TEST-CLUSTER}", + "decimals": 1, + "fill": 1, + "id": 1396, + "legend": { + "alignAsTable": true, + "avg": false, + 
"current": true, + "max": true, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 350, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(tikv_pd_validate_peer_total[1m])) by (type)", + "intervalFactor": 2, + "legendFormat": "{{ type }}", + "metric": "", + "refId": "A", + "step": 4 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "PD validate peers", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ops", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "PD", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "allValue": null, + "current": {}, + "datasource": "${DS_TEST-CLUSTER}", + "hide": 0, + "includeAll": true, + "label": "db", + "multi": true, + "name": "db", + "options": [], + "query": "label_values(tikv_engine_block_cache_size_bytes, db)", + "refresh": 1, + "regex": "", + "sort": 1, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "allValue": null, + "current": {}, + "datasource": "${DS_TEST-CLUSTER}", + "hide": 0, + "includeAll": true, + "label": "command", + "multi": true, + "name": "command", + "options": [], + "query": "label_values(tikv_storage_command_total, type)", + "refresh": 1, + "regex": "", + "sort": 
1, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Test-Cluster-TiKV", + "version": 2 +} diff --git a/v2.0/media/architecture.jpeg b/v2.0/media/architecture.jpeg new file mode 100755 index 0000000000000..f37b6cc495bbd Binary files /dev/null and b/v2.0/media/architecture.jpeg differ diff --git a/v2.0/media/explain_dot.png b/v2.0/media/explain_dot.png new file mode 100755 index 0000000000000..9ec5d1e566dc0 Binary files /dev/null and b/v2.0/media/explain_dot.png differ diff --git a/v2.0/media/grafana-screenshot.png b/v2.0/media/grafana-screenshot.png new file mode 100755 index 0000000000000..2e442f4e5cb13 Binary files /dev/null and b/v2.0/media/grafana-screenshot.png differ diff --git a/v2.0/media/monitor-architecture.png b/v2.0/media/monitor-architecture.png new file mode 100755 index 0000000000000..22b6f9ef07ab1 Binary files /dev/null and b/v2.0/media/monitor-architecture.png differ diff --git a/v2.0/media/overview.png b/v2.0/media/overview.png new file mode 100755 index 0000000000000..8a665fb4d82e8 Binary files /dev/null and b/v2.0/media/overview.png differ diff --git a/v2.0/media/pingcap-logo-1.png b/v2.0/media/pingcap-logo-1.png new file mode 100755 index 0000000000000..adc261932c354 Binary files /dev/null and b/v2.0/media/pingcap-logo-1.png differ diff --git a/v2.0/media/pingcap-logo.png b/v2.0/media/pingcap-logo.png new file mode 100755 index 0000000000000..3cc65eec06d21 Binary files /dev/null and b/v2.0/media/pingcap-logo.png differ diff --git a/v2.0/media/prometheus-in-tidb.png b/v2.0/media/prometheus-in-tidb.png new file mode 100755 index 0000000000000..757c5a6d2e474 Binary files 
/dev/null and b/v2.0/media/prometheus-in-tidb.png differ diff --git a/v2.0/media/syncer_architecture.png b/v2.0/media/syncer_architecture.png new file mode 100755 index 0000000000000..c3180b9c62aa0 Binary files /dev/null and b/v2.0/media/syncer_architecture.png differ diff --git a/v2.0/media/syncer_monitor_scheme.png b/v2.0/media/syncer_monitor_scheme.png new file mode 100755 index 0000000000000..c965622e5a88c Binary files /dev/null and b/v2.0/media/syncer_monitor_scheme.png differ diff --git a/v2.0/media/syncer_sharding.png b/v2.0/media/syncer_sharding.png new file mode 100755 index 0000000000000..a9f50f9abba55 Binary files /dev/null and b/v2.0/media/syncer_sharding.png differ diff --git a/v2.0/media/sysbench-01.png b/v2.0/media/sysbench-01.png new file mode 100755 index 0000000000000..ca256377e4f1a Binary files /dev/null and b/v2.0/media/sysbench-01.png differ diff --git a/v2.0/media/sysbench-02.png b/v2.0/media/sysbench-02.png new file mode 100755 index 0000000000000..9e708370271b0 Binary files /dev/null and b/v2.0/media/sysbench-02.png differ diff --git a/v2.0/media/sysbench-03.png b/v2.0/media/sysbench-03.png new file mode 100755 index 0000000000000..04eb0b36bf741 Binary files /dev/null and b/v2.0/media/sysbench-03.png differ diff --git a/v2.0/media/sysbench-04.png b/v2.0/media/sysbench-04.png new file mode 100755 index 0000000000000..cadd75e9831e8 Binary files /dev/null and b/v2.0/media/sysbench-04.png differ diff --git a/v2.0/media/sysbench-05.png b/v2.0/media/sysbench-05.png new file mode 100755 index 0000000000000..7842f60a4f0d8 Binary files /dev/null and b/v2.0/media/sysbench-05.png differ diff --git a/v2.0/media/sysbench-06.png b/v2.0/media/sysbench-06.png new file mode 100755 index 0000000000000..14bb2196ab72a Binary files /dev/null and b/v2.0/media/sysbench-06.png differ diff --git a/v2.0/media/sysbench-07.png b/v2.0/media/sysbench-07.png new file mode 100755 index 0000000000000..bd3313a11b744 Binary files /dev/null and b/v2.0/media/sysbench-07.png 
differ diff --git a/v2.0/media/sysbench-08.png b/v2.0/media/sysbench-08.png new file mode 100755 index 0000000000000..c3c218af4ab7c Binary files /dev/null and b/v2.0/media/sysbench-08.png differ diff --git a/v2.0/media/sysbench-09.png b/v2.0/media/sysbench-09.png new file mode 100755 index 0000000000000..fce27b6b59dcd Binary files /dev/null and b/v2.0/media/sysbench-09.png differ diff --git a/v2.0/media/tidb-architecture.png b/v2.0/media/tidb-architecture.png new file mode 100755 index 0000000000000..51d4f57aa7dd2 Binary files /dev/null and b/v2.0/media/tidb-architecture.png differ diff --git a/v2.0/media/tidb_binlog_kafka_architecture.png b/v2.0/media/tidb_binlog_kafka_architecture.png new file mode 100755 index 0000000000000..79790eb436466 Binary files /dev/null and b/v2.0/media/tidb_binlog_kafka_architecture.png differ diff --git a/v2.0/media/tidb_pump_deployment.jpeg b/v2.0/media/tidb_pump_deployment.jpeg new file mode 100755 index 0000000000000..177a72f6253eb Binary files /dev/null and b/v2.0/media/tidb_pump_deployment.jpeg differ diff --git a/v2.0/media/tikv_stack.png b/v2.0/media/tikv_stack.png new file mode 100755 index 0000000000000..4f8b1b6d4d45e Binary files /dev/null and b/v2.0/media/tikv_stack.png differ diff --git a/v2.0/media/tispark-architecture.png b/v2.0/media/tispark-architecture.png new file mode 100755 index 0000000000000..6d8f0849fa90c Binary files /dev/null and b/v2.0/media/tispark-architecture.png differ diff --git a/v2.0/media/tpch-query-result.png b/v2.0/media/tpch-query-result.png new file mode 100755 index 0000000000000..c9e6a51b6415c Binary files /dev/null and b/v2.0/media/tpch-query-result.png differ diff --git a/v2.0/op-guide/ansible-deployment-rolling-update.md b/v2.0/op-guide/ansible-deployment-rolling-update.md new file mode 100755 index 0000000000000..3b711d2df4e1c --- /dev/null +++ b/v2.0/op-guide/ansible-deployment-rolling-update.md @@ -0,0 +1,137 @@ +--- +title: Upgrade TiDB Using TiDB-Ansible +summary: Use TiDB-Ansible to 
perform a rolling update for a TiDB cluster. +category: operations +--- + +# Upgrade TiDB Using TiDB-Ansible + +When you perform a rolling update for a TiDB cluster, each service is stopped in turn and restarted after its binary and configuration file are updated. If load balancing is configured at the front end, the rolling update of TiDB does not impact running applications. Minimum requirements: `pd*3, tidb*2, tikv*3`. + +> **Note:** If the binlog is enabled, and Pump and Drainer services are deployed in the TiDB cluster, stop the Drainer service before the rolling update. The Pump service is automatically updated in the rolling update of TiDB. + +## Upgrade the component version + +- To upgrade between major versions, you need to upgrade [`tidb-ansible`](https://github.com/pingcap/tidb-ansible). If you want to upgrade the version of TiDB from 1.0 to 2.0, see [TiDB 2.0 Upgrade Guide](tidb-v2-upgrade-guide.md). + +- For a minor upgrade, it is also recommended to update `tidb-ansible` for the latest configuration file templates, features, and bug fixes. + +### Download the binary automatically + +1. Edit the value of the `tidb_version` parameter in the `/home/tidb/tidb-ansible/inventory.ini` file, and specify the version number you need to upgrade to. + + For example, to upgrade from `v2.0.2` to `v2.0.3`: + + ``` + tidb_version = v2.0.3 + ``` + +2. Delete the existing `downloads` directory `/home/tidb/tidb-ansible/downloads/`. + + ``` + $ cd /home/tidb/tidb-ansible + $ rm -rf downloads + ``` + +3. Run the `local_prepare.yml` playbook to download the TiDB `v2.0.3` binary and replace the existing binary in `/home/tidb/tidb-ansible/resources/bin/` with it automatically. + + ``` + $ ansible-playbook local_prepare.yml + ``` + +### Download the binary manually + +You can also download the binary manually. Use `wget` to download the binary and replace the existing binary in `/home/tidb/tidb-ansible/resources/bin/` with it manually.
+ +``` +wget http://download.pingcap.org/tidb-v2.0.3-linux-amd64-unportable.tar.gz +``` + +> **Note:** Remember to replace the version number in the download link with the one you need. + +### Perform a rolling update using Ansible + +- Apply a rolling update to the PD node (only upgrade the PD service) + + ``` + $ ansible-playbook rolling_update.yml --tags=pd + ``` + + When you apply a rolling update to the PD leader instance, if the number of PD instances is not less than 3, Ansible migrates the PD leader to another node before stopping this instance. + +- Apply a rolling update to the TiKV node (only upgrade the TiKV service) + + ``` + $ ansible-playbook rolling_update.yml --tags=tikv + ``` + + When you apply a rolling update to the TiKV instance, Ansible migrates the Region leader to other nodes. The concrete logic is as follows: Call the PD API to add the `evict leader scheduler` -> Inspect the `leader_count` of this TiKV instance every 10 seconds -> Wait until the `leader_count` drops below 1, or until it has been inspected more than 18 times (about a three-minute timeout) -> Stop the TiKV instance and perform the rolling update -> Delete the `evict leader scheduler` after the instance starts successfully. The operations are executed serially. + + If the rolling update fails midway, log in to `pd-ctl`, execute `scheduler show`, and check whether `evict-leader-scheduler` exists. If it does, delete it manually.
Replace `{PD_IP}` and `{STORE_ID}` with your PD IP and the `store_id` of the TiKV instance: + + ``` + $ /home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://{PD_IP}:2379" + » scheduler show + [ + "label-scheduler", + "evict-leader-scheduler-{STORE_ID}", + "balance-region-scheduler", + "balance-leader-scheduler", + "balance-hot-region-scheduler" + ] + » scheduler remove evict-leader-scheduler-{STORE_ID} + ``` + +- Apply a rolling update to the TiDB node (only upgrade the TiDB service) + + If the binlog is enabled in the TiDB cluster, the Pump service is automatically upgraded in the rolling update of the TiDB service. + + ``` + $ ansible-playbook rolling_update.yml --tags=tidb + ``` + +- Apply a rolling update to all services (upgrade PD, TiKV, and TiDB in sequence) + + If the binlog is enabled in the TiDB cluster, the Pump service is automatically upgraded in the rolling update of the TiDB service. + + ``` + $ ansible-playbook rolling_update.yml + ``` + +- Apply a rolling update to the monitoring component + + ``` + $ ansible-playbook rolling_update_monitor.yml + ``` + +## Modify component configuration + +This section describes how to modify component configuration using Ansible. + +1. Update the component configuration template. + + The component configuration template of the TiDB cluster is in the `/home/tidb/tidb-ansible/conf` folder. + + | Component | Template Name of the Configuration File | + | :-------- | :----------: | + | TiDB | tidb.yml | + | TiKV | tikv.yml | + | PD | pd.yml | + + Configuration items that are commented out use their default values. To modify an item, remove the `#` to uncomment it and then modify the parameter value. + + The configuration template uses the YAML format: separate the parameter name and the parameter value with `:`, and indent by two spaces.
+ + For example, modify the value of the `high-concurrency`, `normal-concurrency`, and `low-concurrency` parameters to 16 for the TiKV component: + + ```yaml + readpool: + coprocessor: + # Notice: if CPU_NUM > 8, the default thread pool size for coprocessors + # will be set to CPU_NUM * 0.8. + high-concurrency: 16 + normal-concurrency: 16 + low-concurrency: 16 + ``` + +2. After modifying the component configuration, you need to perform a rolling update using Ansible. See [Perform a rolling update using Ansible](#perform-a-rolling-update-using-ansible). \ No newline at end of file diff --git a/v2.0/op-guide/ansible-deployment-scale.md b/v2.0/op-guide/ansible-deployment-scale.md new file mode 100755 index 0000000000000..778ac27f98b25 --- /dev/null +++ b/v2.0/op-guide/ansible-deployment-scale.md @@ -0,0 +1,472 @@ +--- +title: Scale the TiDB Cluster Using TiDB-Ansible +summary: Use TiDB-Ansible to increase/decrease the capacity of a TiDB/TiKV/PD node. +category: operations +--- + +# Scale the TiDB Cluster Using TiDB-Ansible + +The capacity of a TiDB cluster can be increased or decreased without affecting the online services. + +> **Warning:** When decreasing the capacity, do not perform the following procedures if other services are deployed together on the nodes to be removed. The following examples assume that the removed nodes run no other services. + +Assume that the topology is as follows: + +| Name | Host IP | Services | +| ---- | ------- | -------- | +| node1 | 172.16.10.1 | PD1 | +| node2 | 172.16.10.2 | PD2 | +| node3 | 172.16.10.3 | PD3, Monitor | +| node4 | 172.16.10.4 | TiDB1 | +| node5 | 172.16.10.5 | TiDB2 | +| node6 | 172.16.10.6 | TiKV1 | +| node7 | 172.16.10.7 | TiKV2 | +| node8 | 172.16.10.8 | TiKV3 | +| node9 | 172.16.10.9 | TiKV4 | + +## Increase the capacity of a TiDB/TiKV node + +For example, if you want to add two TiDB nodes (node101, node102) with the IP addresses `172.16.10.101` and `172.16.10.102`, take the following steps: + +1. 
Edit the `inventory.ini` file and append the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.101 + 172.16.10.102 + + [pd_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitored_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.4 + 172.16.10.5 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + 172.16.10.101 + 172.16.10.102 + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | node2 | 172.16.10.2 | PD2 | + | node3 | 172.16.10.3 | PD3, Monitor | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | **node101** | **172.16.10.101** | **TiDB3** | + | **node102** | **172.16.10.102** | **TiDB4** | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | node9 | 172.16.10.9 | TiKV4 | + +2. Initialize the newly added node: + + ``` + ansible-playbook bootstrap.yml -l 172.16.10.101,172.16.10.102 + ``` + + > **Note:** If an alias is configured in the `inventory.ini` file, for example, `node101 ansible_host=172.16.10.101`, use `-l` to specify the alias when executing `ansible-playbook`. For example, `ansible-playbook bootstrap.yml -l node101,node102`. This also applies to the following steps. + +3. Deploy the newly added node: + + ``` + ansible-playbook deploy.yml -l 172.16.10.101,172.16.10.102 + ``` + +4. Start the newly added node: + + ``` + ansible-playbook start.yml -l 172.16.10.101,172.16.10.102 + ``` + +5. Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +6. Monitor the status of the entire cluster and the newly added node by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`.
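If you configured aliases as described in the note in step 2, the entries appended in step 1 can use the alias form instead of bare IP addresses. A minimal sketch of the `[tidb_servers]` group in that style (the `node101`/`node102` names are just the example aliases from the note):

```ini
# Alias form of the new entries; each alias maps to the host's real IP.
[tidb_servers]
172.16.10.4
172.16.10.5
node101 ansible_host=172.16.10.101
node102 ansible_host=172.16.10.102
```

If you use this form, list the new hosts the same way in `[monitored_servers]`, and pass the aliases to `-l` in the subsequent `ansible-playbook` commands.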
+ +You can use the same procedure to add a TiKV node. But to add a PD node, some configuration files need to be manually updated. + +## Increase the capacity of a PD node + +For example, if you want to add a PD node (node103) with the IP address `172.16.10.103`, take the following steps: + +1. Edit the `inventory.ini` file and append the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + + [pd_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.103 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitored_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.103 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | node2 | 172.16.10.2 | PD2 | + | node3 | 172.16.10.3 | PD3, Monitor | + | **node103** | **172.16.10.103** | **PD4** | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | node9 | 172.16.10.9 | TiKV4 | + +2. Initialize the newly added node: + + ``` + ansible-playbook bootstrap.yml -l 172.16.10.103 + ``` + +3. Deploy the newly added node: + + ``` + ansible-playbook deploy.yml -l 172.16.10.103 + ``` + +4. Log in to the newly added PD node and edit the startup script: + + ``` + {deploy_dir}/scripts/run_pd.sh + ``` + + 1. Remove the `--initial-cluster="xxxx" \` configuration. + 2. Add `--join="http://172.16.10.1:2379" \`. The IP address (`172.16.10.1`) can be the IP address of any existing PD node in the cluster. + 3. Manually start the PD service in the newly added PD node: + + ``` + {deploy_dir}/scripts/start_pd.sh + ``` + + 4. 
Use `pd-ctl` to check whether the new node is added successfully:

        ```
        ./pd-ctl -u "http://172.16.10.1:2379"
        ```

        > **Note:** `pd-ctl` is a command-line tool for inspecting and managing the PD cluster. Run the `member` command at the `pd-ctl` prompt to check whether the new PD node is listed.

5. Apply a rolling update to the entire cluster:

    ```
    ansible-playbook rolling_update.yml
    ```

6. Update the Prometheus configuration and restart the cluster:

    ```
    ansible-playbook rolling_update_monitor.yml --tags=prometheus
    ```

7. Monitor the status of the entire cluster and the newly added node by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`.

## Decrease the capacity of a TiDB node

For example, if you want to remove a TiDB node (node5) with the IP address `172.16.10.5`, take the following steps:

1. Stop all services on node5:

    ```
    ansible-playbook stop.yml -l 172.16.10.5
    ```

2. Edit the `inventory.ini` file and remove the node information:

    ```ini
    [tidb_servers]
    172.16.10.4
    #172.16.10.5 # the removed node

    [pd_servers]
    172.16.10.1
    172.16.10.2
    172.16.10.3

    [tikv_servers]
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9

    [monitored_servers]
    172.16.10.4
    #172.16.10.5 # the removed node
    172.16.10.1
    172.16.10.2
    172.16.10.3
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9

    [monitoring_servers]
    172.16.10.3

    [grafana_servers]
    172.16.10.3
    ```

    Now the topology is as follows:

    | Name | Host IP | Services |
    | ---- | ------- | -------- |
    | node1 | 172.16.10.1 | PD1 |
    | node2 | 172.16.10.2 | PD2 |
    | node3 | 172.16.10.3 | PD3, Monitor |
    | node4 | 172.16.10.4 | TiDB1 |
    | **node5** | **172.16.10.5** | **TiDB2 removed** |
    | node6 | 172.16.10.6 | TiKV1 |
    | node7 | 172.16.10.7 | TiKV2 |
    | node8 | 172.16.10.8 | TiKV3 |
    | node9 | 172.16.10.9 | TiKV4 |

3. Update the Prometheus configuration and restart the cluster:

    ```
    ansible-playbook rolling_update_monitor.yml --tags=prometheus
    ```

4. 
Monitor the status of the entire cluster by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +## Decrease the capacity of a TiKV node + +For example, if you want to remove a TiKV node (node9) with the IP address `172.16.10.9`, take the following steps: + +1. Remove the node from the cluster using `pd-ctl`: + + 1. View the store ID of node9: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d store + ``` + + 2. Remove node9 from the cluster, assuming that the store ID is 10: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d store delete 10 + ``` + +2. Use Grafana or `pd-ctl` to check whether the node is successfully removed: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d store 10 + ``` + + > **Note:** It takes some time to remove the node. If the status of the node you remove becomes Tombstone, then this node is successfully removed. + +3. After the node is successfully removed, stop the services on node9: + + ``` + ansible-playbook stop.yml -l 172.16.10.9 + ``` + +4. Edit the `inventory.ini` file and remove the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + + [pd_servers] + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + #172.16.10.9 # the removed node + + [monitored_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.1 + 172.16.10.2 + 172.16.10.3 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + #172.16.10.9 # the removed node + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | node2 | 172.16.10.2 | PD2 | + | node3 | 172.16.10.3 | PD3, Monitor | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | **node9** | **172.16.10.9** | **TiKV4 removed** | + +5. 
Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +6. Monitor the status of the entire cluster by opening a browser to access the monitoring platform: `http://172.16.10.3:3000`. + +## Decrease the capacity of a PD node + +For example, if you want to remove a PD node (node2) with the IP address `172.16.10.2`, take the following steps: + +1. Remove the node from the cluster using `pd-ctl`: + + 1. View the name of node2: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d member + ``` + + 2. Remove node2 from the cluster, assuming that the name is pd2: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d member delete name pd2 + ``` + +2. Use Grafana or `pd-ctl` to check whether the node is successfully removed: + + ``` + ./pd-ctl -u "http://172.16.10.1:2379" -d member + ``` + +3. After the node is successfully removed, stop the services on node2: + + ``` + ansible-playbook stop.yml -l 172.16.10.2 + ``` + +4. Edit the `inventory.ini` file and remove the node information: + + ```ini + [tidb_servers] + 172.16.10.4 + 172.16.10.5 + + [pd_servers] + 172.16.10.1 + #172.16.10.2 # the removed node + 172.16.10.3 + + [tikv_servers] + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitored_servers] + 172.16.10.4 + 172.16.10.5 + 172.16.10.1 + #172.16.10.2 # the removed node + 172.16.10.3 + 172.16.10.6 + 172.16.10.7 + 172.16.10.8 + 172.16.10.9 + + [monitoring_servers] + 172.16.10.3 + + [grafana_servers] + 172.16.10.3 + ``` + + Now the topology is as follows: + + | Name | Host IP | Services | + | ---- | ------- | -------- | + | node1 | 172.16.10.1 | PD1 | + | **node2** | **172.16.10.2** | **PD2 removed** | + | node3 | 172.16.10.3 | PD3, Monitor | + | node4 | 172.16.10.4 | TiDB1 | + | node5 | 172.16.10.5 | TiDB2 | + | node6 | 172.16.10.6 | TiKV1 | + | node7 | 172.16.10.7 | TiKV2 | + | node8 | 172.16.10.8 | TiKV3 | + | node9 | 172.16.10.9 | TiKV4 | + +5. 
Perform a rolling update to the entire TiDB cluster: + + ``` + ansible-playbook rolling_update.yml + ``` + +6. Update the Prometheus configuration and restart the cluster: + + ``` + ansible-playbook rolling_update_monitor.yml --tags=prometheus + ``` + +7. To monitor the status of the entire cluster, open a browser to access the monitoring platform: `http://172.16.10.3:3000`. \ No newline at end of file diff --git a/v2.0/op-guide/ansible-deployment.md b/v2.0/op-guide/ansible-deployment.md new file mode 100755 index 0000000000000..d5656f4349fa9 --- /dev/null +++ b/v2.0/op-guide/ansible-deployment.md @@ -0,0 +1,774 @@ +--- +title: Deploy TiDB Using Ansible +summary: Use Ansible to deploy a TiDB cluster. +category: operations +--- + +# Deploy TiDB Using Ansible + +This guide describes how to deploy a TiDB cluster using Ansible. For the production environment, it is recommended to deploy TiDB using Ansible. + +## Overview + +Ansible is an IT automation tool that can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. + +[TiDB-Ansible](https://github.com/pingcap/tidb-ansible) is a TiDB cluster deployment tool developed by PingCAP, based on Ansible playbook. TiDB-Ansible enables you to quickly deploy a new TiDB cluster which includes PD, TiDB, TiKV, and the cluster monitoring modules. 
+ +You can use the TiDB-Ansible configuration file to set up the cluster topology and complete all the following operation tasks: + +- Initialize operating system parameters +- Deploy the whole TiDB cluster +- [Start the TiDB cluster](ansible-operation.md#start-a-cluster) +- [Stop the TiDB cluster](ansible-operation.md#stop-a-cluster) +- [Modify component configuration](ansible-deployment-rolling-update.md#modify-component-configuration) +- [Scale the TiDB cluster](ansible-deployment-scale.md) +- [Upgrade the component version](ansible-deployment-rolling-update.md#upgrade-the-component-version) +- [Clean up data of the TiDB cluster](ansible-operation.md#clean-up-cluster-data) +- [Destroy the TiDB cluster](ansible-operation.md#destroy-a-cluster) + +## Prepare + +Before you start, make sure you have: + +1. Several target machines that meet the following requirements: + + - 4 or more machines + + A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Requirements](recommendation.md). + + - CentOS 7.3 (64 bit) or later, x86_64 architecture (AMD64) + - Network between machines + + > **Note:** When you deploy TiDB using Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](docker-compose.md) on a single machine. + +2. A Control Machine that meets the following requirements: + + > **Note:** The Control Machine can be one of the target machines. + + - CentOS 7.3 (64 bit) or later with Python 2.7 installed + - Access to the Internet + +## Step 1: Install system dependencies on the Control Machine + +Log in to the Control Machine using the `root` user account, and run the corresponding command according to your operating system. 
+ +- If you use a Control Machine installed with CentOS 7, run the following command: + + ``` + # yum -y install epel-release git curl sshpass + # yum -y install python-pip + ``` + +- If you use a Control Machine installed with Ubuntu, run the following command: + + ``` + # apt-get -y install git curl sshpass python-pip + ``` + +## Step 2: Create the `tidb` user on the Control Machine and generate the SSH key + +Make sure you have logged in to the Control Machine using the `root` user account, and then run the following command. + +1. Create the `tidb` user. + + ``` + # useradd -m -d /home/tidb tidb + ``` + +2. Set a password for the `tidb` user account. + + ``` + # passwd tidb + ``` + +3. Configure sudo without password for the `tidb` user account by adding `tidb ALL=(ALL) NOPASSWD: ALL` to the end of the sudo file: + + ``` + # visudo + tidb ALL=(ALL) NOPASSWD: ALL + ``` +4. Generate the SSH key. + + Execute the `su` command to switch the user from `root` to `tidb`. Create the SSH key for the `tidb` user account and hit the Enter key when `Enter passphrase` is prompted. After successful execution, the SSH private key file is `/home/tidb/.ssh/id_rsa`, and the SSH public key file is `/home/tidb/.ssh/id_rsa.pub`. + + ``` + # su - tidb + $ ssh-keygen -t rsa + Generating public/private rsa key pair. + Enter file in which to save the key (/home/tidb/.ssh/id_rsa): + Created directory '/home/tidb/.ssh'. + Enter passphrase (empty for no passphrase): + Enter same passphrase again: + Your identification has been saved in /home/tidb/.ssh/id_rsa. + Your public key has been saved in /home/tidb/.ssh/id_rsa.pub. + The key fingerprint is: + SHA256:eIBykszR1KyECA/h0d7PRKz4fhAeli7IrVphhte7/So tidb@172.16.10.49 + The key's randomart image is: + +---[RSA 2048]----+ + |=+o+.o. | + |o=o+o.oo | + | .O.=.= | + | . B.B + | + |o B * B S | + | * + * + | + | o + . | + | o E+ . | + |o ..+o. | + +----[SHA256]-----+ + ``` + +## Step 3: Download TiDB-Ansible to the Control Machine + +1. 
Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory. + +2. Download the corresponding TiDB-Ansible version from the [TiDB-Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`. + + - Download the 2.0 GA version: + + ```bash + $ git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git + ``` + + - Download the master version: + + ```bash + $ git clone https://github.com/pingcap/tidb-ansible.git + ``` + + > **Note:** It is required to download `tidb-ansible` to the `/home/tidb` directory using the `tidb` user account. If you download it to the `/root` directory, a privilege issue occurs. + + If you have questions regarding which version to use, email to info@pingcap.com for more information or [file an issue](https://github.com/pingcap/tidb-ansible/issues/new). + +## Step 4: Install Ansible and its dependencies on the Control Machine + +Make sure you have logged in to the Control Machine using the `tidb` user account. + +It is required to use `pip` to install Ansible and its dependencies, otherwise a compatibility issue occurs. Currently, the TiDB 2.0 GA version and the master version are compatible with Ansible 2.4 and Ansible 2.5. + +1. Install Ansible and the dependencies on the Control Machine: + + ```bash + $ cd /home/tidb/tidb-ansible + $ sudo pip install -r ./requirements.txt + ``` + + Ansible and the related dependencies are in the `tidb-ansible/requirements.txt` file. + +2. View the version of Ansible: + + ```bash + $ ansible --version + ansible 2.5.0 + ``` + +## Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine + +Make sure you have logged in to the Control Machine using the `tidb` user account. + +1. Add the IPs of your target machines to the `[servers]` section of the `hosts.ini` file. 

    ```bash
    $ cd /home/tidb/tidb-ansible
    $ vi hosts.ini
    [servers]
    172.16.10.1
    172.16.10.2
    172.16.10.3
    172.16.10.4
    172.16.10.5
    172.16.10.6

    [all:vars]
    username = tidb
    ntp_server = pool.ntp.org
    ```

2. Run the following command and input the `root` user account password of your target machines.

    ```bash
    $ ansible-playbook -i hosts.ini create_users.yml -u root -k
    ```

    This step creates the `tidb` user account on the target machines, configures the sudo rules, and sets up the SSH mutual trust between the Control Machine and the target machines.

> To configure the SSH mutual trust and sudo without password manually, see [How to manually configure the SSH mutual trust and sudo without password](#how-to-manually-configure-the-ssh-mutual-trust-and-sudo-without-password).

## Step 6: Install the NTP service on the target machines

> **Note:** If the time and time zone of all your target machines are the same, and the NTP service is on and is normally synchronizing time, you can skip this step. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal).

Make sure you have logged in to the Control Machine using the `tidb` user account, and then run the following command:

```bash
$ cd /home/tidb/tidb-ansible
$ ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b
```

The NTP service is installed and started using the software repository that comes with the system on the target machines. The default NTP server list in the installation package is used. The related `server` parameter is in the `/etc/ntp.conf` configuration file.

To make the NTP service start synchronizing as soon as possible, the system executes the `ntpdate` command to set the local date and time by polling `ntp_server` in the `hosts.ini` file. The default server is `pool.ntp.org`, and you can also replace it with your own NTP server.
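To verify synchronization in a script rather than by eye, note that `ntpstat` encodes its state in the exit code (0 = synchronised, 1 = unsynchronised, 2 = state indeterminant, for example when ntpd is not contactable). A minimal sketch, where `ntp_status` is a hypothetical helper name:

```shell
# Map the ntpstat exit code to a readable NTP state.
# Exit codes: 0 = synchronised, 1 = unsynchronised,
# anything else = ntpd not reachable.
ntp_status() {
  ntpstat >/dev/null 2>&1
  case $? in
    0) echo "synchronised" ;;
    1) echo "unsynchronised" ;;
    *) echo "ntpd not running" ;;
  esac
}
```

You can run this on each target machine, or check all machines in batch from the Control Machine with an ad-hoc command such as `ansible -i hosts.ini all -m shell -a "ntpstat" -u tidb -b`.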
+ +## Step 7: Configure the CPUfreq governor mode on the target machine + +For details about CPUfreq, see [the CPUfreq Governor documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/power_management_guide/cpufreq_governors). + +Set the CPUfreq governor mode to `performance` to make full use of CPU performance. + +### Check the governor modes supported by the system + +You can run the `cpupower frequency-info --governors` command to check the governor modes which the system supports: + +``` +# cpupower frequency-info --governors +analyzing CPU 0: + available cpufreq governors: performance powersave +``` + +Taking the above code for example, the system supports the `performance` and `powersave` modes. + +> **Note:** As the following shows, if it returns "Not Available", it means that the current system does not support CPUfreq configuration and you can skip this step. + +``` +# cpupower frequency-info --governors +analyzing CPU 0: + available cpufreq governors: Not Available +``` + +### Check the current governor mode + +You can run the `cpupower frequency-info --policy` command to check the current CPUfreq governor mode: + +``` +# cpupower frequency-info --policy +analyzing CPU 0: + current policy: frequency should be within 1.20 GHz and 3.20 GHz. + The governor "powersave" may decide which speed to use + within this range. +``` + +As the above code shows, the current mode is `powersave` in this example. 
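The governor name can also be extracted from that output in a script. The following is a sketch; `current_governor` is a hypothetical helper (not part of `cpupower`) that pulls the quoted governor name out of `cpupower frequency-info --policy` output:

```shell
# Extract the quoted governor name from
# `cpupower frequency-info --policy` output read on stdin.
current_governor() {
  sed -n 's/.*The governor "\([^"]*\)".*/\1/p'
}
```

For example, on the machine shown above, `cpupower frequency-info --policy | current_governor` would print `powersave`.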

### Change the governor mode

- You can run the following command to change the current mode to `performance`:

    ```
    # cpupower frequency-set --governor performance
    ```

- You can also run the following command to set the mode on all target machines in batch:

    ```
    $ ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -u tidb -b
    ```

## Step 8: Mount the data disk ext4 filesystem with options on the target machines

Log in to the Control Machine using the `root` user account.

Format your data disks to the ext4 filesystem and mount the filesystem with the `nodelalloc` and `noatime` options. The `nodelalloc` mount option is required; otherwise, the Ansible deployment cannot pass the check. The `noatime` option is optional.

> **Note:** If your data disks have already been formatted to ext4 and mounted, you can first unmount them by running the `# umount /dev/nvme0n1` command, then follow the steps below starting from editing the `/etc/fstab` file to remount the filesystem with the required options.

Take the `/dev/nvme0n1` data disk as an example:

1. View the data disk.

    ```
    # fdisk -l
    Disk /dev/nvme0n1: 1000 GB
    ```

2. Create the partition table.

    ```
    # parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
    ```

3. Format the data disk to the ext4 filesystem.

    ```
    # mkfs.ext4 /dev/nvme0n1
    ```

4. View the partition UUID of the data disk.

    In this example, the UUID of `nvme0n1` is `c51eb23b-195c-4061-92a9-3fad812cc12f`.

    ```
    # lsblk -f
    NAME    FSTYPE LABEL UUID                                 MOUNTPOINT
    sda
    ├─sda1  ext4         237b634b-a565-477b-8371-6dff0c41f5ab /boot
    ├─sda2  swap         f414c5c0-f823-4bb1-8fdf-e531173a72ed
    └─sda3  ext4         547909c1-398d-4696-94c6-03e43e317b60 /
    sr0
    nvme0n1 ext4         c51eb23b-195c-4061-92a9-3fad812cc12f
    ```

5. Edit the `/etc/fstab` file and add the mount options. 

    ```
    # vi /etc/fstab
    UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data1 ext4 defaults,nodelalloc,noatime 0 2
    ```

6. Mount the data disk.

    ```
    # mkdir /data1
    # mount -a
    ```

7. Check using the following command.

    ```
    # mount -t ext4
    /dev/nvme0n1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
    ```

    If the filesystem is ext4 and `nodelalloc` is included in the mount options, you have successfully mounted the data disk ext4 filesystem with options on the target machines.

## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster

Log in to the Control Machine using the `tidb` user account, and edit the `tidb-ansible/inventory.ini` file to orchestrate the TiDB cluster. The standard TiDB cluster contains 6 machines: 2 TiDB nodes, 3 PD nodes and 3 TiKV nodes.

- Deploy at least 3 instances for TiKV.
- Do not deploy TiKV together with TiDB or PD on the same machine.
- Use the first TiDB machine as the monitoring machine.

> **Note:** It is required to use the internal IP address to deploy. If the SSH port of the target machines is not the default 22 port, you need to add the `ansible_port` variable. For example, `TiDB1 ansible_host=172.16.10.1 ansible_port=5555`.

You can choose one of the following two types of cluster topology according to your scenario:

- [The cluster topology of a single TiKV instance on each TiKV node](#option-1-use-the-cluster-topology-of-a-single-tikv-instance-on-each-tikv-node)

    In most cases, it is recommended to deploy one TiKV instance on each TiKV node for better performance. However, if the CPU and memory of your TiKV machines are much better than required in [Hardware and Software Requirements](../op-guide/recommendation.md), and you have more than two disks in one node or the capacity of one SSD is larger than 2 TB, you can deploy no more than 2 TiKV instances on a single TiKV node.
+ +- [The cluster topology of multiple TiKV instances on each TiKV node](#option-2-use-the-cluster-topology-of-multiple-tikv-instances-on-each-tikv-node) + +### Option 1: Use the cluster topology of a single TiKV instance on each TiKV node + +| Name | Host IP | Services | +|:------|:------------|:-----------| +| node1 | 172.16.10.1 | PD1, TiDB1 | +| node2 | 172.16.10.2 | PD2, TiDB2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1 | +| node5 | 172.16.10.5 | TiKV2 | +| node6 | 172.16.10.6 | TiKV3 | + +```ini +[tidb_servers] +172.16.10.1 +172.16.10.2 + +[pd_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 + +[tikv_servers] +172.16.10.4 +172.16.10.5 +172.16.10.6 + +[monitoring_servers] +172.16.10.1 + +[grafana_servers] +172.16.10.1 + +[monitored_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 +172.16.10.4 +172.16.10.5 +172.16.10.6 +``` + +### Option 2: Use the cluster topology of multiple TiKV instances on each TiKV node + +Take two TiKV instances on each TiKV node as an example: + +| Name | Host IP | Services | +|:------|:------------|:-----------| +| node1 | 172.16.10.1 | PD1, TiDB1 | +| node2 | 172.16.10.2 | PD2, TiDB2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1-1, TiKV1-2 | +| node5 | 172.16.10.5 | TiKV2-1, TiKV2-2 | +| node6 | 172.16.10.6 | TiKV3-1, TiKV3-2 | + +```ini +[tidb_servers] +172.16.10.1 +172.16.10.2 + +[pd_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 + +[tikv_servers] +TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv1" +TiKV1-2 ansible_host=172.16.10.4 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv1" +TiKV2-1 ansible_host=172.16.10.5 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv2" +TiKV2-2 ansible_host=172.16.10.5 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv2" +TiKV3-1 ansible_host=172.16.10.6 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv3" +TiKV3-2 ansible_host=172.16.10.6 deploy_dir=/data2/deploy tikv_port=20172 
labels="host=tikv3" + +[monitoring_servers] +172.16.10.1 + +[grafana_servers] +172.16.10.1 + +[monitored_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 +172.16.10.4 +172.16.10.5 +172.16.10.6 + +...... + +[pd_servers:vars] +location_labels = ["host"] +``` + +**Edit the parameters in the service configuration file:** + +1. For the cluster topology of multiple TiKV instances on each TiKV node, you need to edit the `block-cache-size` parameter in `tidb-ansible/conf/tikv.yml`: + + - `rocksdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 30% + - `rocksdb writecf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 45% + - `rocksdb lockcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum) + - `raftdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum) + +2. For the cluster topology of multiple TiKV instances on each TiKV node, you need to edit the `high-concurrency`, `normal-concurrency` and `low-concurrency` parameters in the `tidb-ansible/conf/tikv.yml` file: + + ``` + readpool: + coprocessor: + # Notice: if CPU_NUM > 8, default thread pool size for coprocessors + # will be set to CPU_NUM * 0.8. + # high-concurrency: 8 + # normal-concurrency: 8 + # low-concurrency: 8 + ``` + + Recommended configuration: `number of instances * parameter value = CPU_Vcores * 0.8`. + +3. If multiple TiKV instances are deployed on a same physical disk, edit the `capacity` parameter in `conf/tikv.yml`: + + - `capacity`: total disk capacity / number of TiKV instances (the unit is GB) + +## Step 10: Edit variables in the `inventory.ini` file + +This step describes how to edit the variable of deployment directory and other variables in the `inventory.ini` file. + +### Configure the deployment directory + +Edit the `deploy_dir` variable to configure the deployment directory. + +The global variable is set to `/home/tidb/deploy` by default, and it applies to all services. 
If the data disk is mounted on the `/data1` directory, you can set it to `/data1/deploy`. For example:

```bash
## Global variables
[all:vars]
deploy_dir = /data1/deploy
```

**Note:** To set the deployment directory separately for a service, you can configure the host variable while configuring the service host list in the `inventory.ini` file. It is required to add the first-column alias to avoid confusion when mixed services are deployed on the same host. For example:

```bash
TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy
```

### Edit other variables (Optional)

To enable the following control variables, use the capitalized `True`. To disable them, use the capitalized `False`.

| Variable Name | Description |
| ---- | ------- |
| cluster_name | the name of the cluster, adjustable |
| tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches |
| process_supervision | how processes are supervised, `systemd` by default, `supervise` optional |
| timezone | the timezone of the managed node, adjustable, `Asia/Shanghai` by default, used together with the `set_timezone` variable |
| set_timezone | whether to edit the timezone of the managed node, `True` by default; `False` means the timezone is left unchanged |
| enable_firewalld | whether to enable the firewall, disabled by default; to enable it, add the ports in [network requirements](recommendation.md#network-requirements) to the white list |
| enable_ntpd | whether to monitor the NTP service of the managed node, `True` by default; do not disable it |
| set_hostname | whether to edit the hostname of the managed node based on the IP, `False` by default |
| enable_binlog | whether to deploy Pump and enable the binlog, `False` by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable |
| zookeeper_addrs | the ZooKeeper address of the binlog Kafka cluster |
| enable_slow_query_log | whether to record the TiDB slow query log into a single file (`{{ deploy_dir }}/log/tidb_slow_query.log`); `False` by default, which records slow queries in the general TiDB log |
| deploy_without_tidb | the Key-Value mode: deploy only PD, TiKV and the monitoring service, not TiDB; set the IP of the `tidb_servers` host group to null in the `inventory.ini` file |
| alertmanager_target | optional: if you have deployed `alertmanager` separately, you can configure this variable using the `alertmanager_host:alertmanager_port` format |
| grafana_admin_user | the username of the Grafana administrator, `admin` by default |
| grafana_admin_password | the password of the Grafana administrator account, `admin` by default; used by Ansible to import the Dashboard and create the API key; update this variable if you have modified it through the Grafana web interface |
| collect_log_recent_hours | how many recent hours of logs to collect, the recent 2 hours by default |
| enable_bandwidth_limit | whether to set a bandwidth limit when pulling the diagnostic data from the target machines to the Control Machine; used together with the `collect_bandwidth_limit` variable |
| collect_bandwidth_limit | the bandwidth limit when pulling the diagnostic data from the target machines to the Control Machine; unit: Kbit/s; 10000 by default, indicating 10 Mb/s; for the cluster topology of multiple TiKV instances on each TiKV node, divide this value by the number of TiKV instances on each TiKV node |

## Step 11: Deploy the TiDB cluster

When `ansible-playbook` runs a Playbook, the default concurrency is 5. If there are many target machines, you can increase the concurrency by adding the `-f` parameter, such as `ansible-playbook deploy.yml -f 10`.

The following example uses `tidb` as the user who runs the service.

1. Edit the `tidb-ansible/inventory.ini` file to make sure `ansible_user = tidb`.

    ```
    ## Connection
    # ssh via normal user
    ansible_user = tidb
    ```

    > **Note:** Do not configure `ansible_user` as `root`, because `tidb-ansible` limits the user that runs the service to a normal user. 

    Run the following command. If all servers return `tidb`, the SSH mutual trust is successfully configured:

    ```
    ansible -i inventory.ini all -m shell -a 'whoami'
    ```

    Run the following command. If all servers return `root`, sudo without password of the `tidb` user is successfully configured:

    ```
    ansible -i inventory.ini all -m shell -a 'whoami' -b
    ```

2. Run the `local_prepare.yml` playbook and download the TiDB binary to the Control Machine.

    ```
    ansible-playbook local_prepare.yml
    ```

3. Initialize the system environment and modify the kernel parameters.

    ```
    ansible-playbook bootstrap.yml
    ```

4. Deploy the TiDB cluster software.

    ```
    ansible-playbook deploy.yml
    ```

    > **Note:** You can use the `Report` button on the Grafana Dashboard to generate the PDF file. This function depends on the `fontconfig` package and English fonts. To use this function, log in to the `grafana_servers` machine and install it using the following command:
    >
    > ```
    > $ sudo yum install fontconfig open-sans-fonts
    > ```

5. Start the TiDB cluster.

    ```
    ansible-playbook start.yml
    ```

> **Note:** If you want to deploy TiDB using the root user account, see [Ansible Deployment Using the Root User Account](root-ansible-deployment.md).

## Test the TiDB cluster

Because TiDB is compatible with MySQL, you can use the MySQL client to connect to TiDB directly. It is recommended to configure load balancing to provide a uniform SQL interface.

1. Connect to the TiDB cluster using the MySQL client.

    ```bash
    mysql -u root -h 172.16.10.1 -P 4000
    ```

    > **Note:** The default port of the TiDB service is 4000.

2. Access the monitoring platform using a web browser.

    ```
    http://172.16.10.1:3000
    ```

    > **Note:** The default account and password are `admin`/`admin`.

## Deployment FAQs

This section lists the common questions about deploying TiDB using Ansible.

### How to customize the port? 
+ +Edit the `inventory.ini` file and add the following host variable after the IP of the corresponding service: + +| Component | Variable Port | Default Port | Description | +|:--------------|:-------------------|:-------------|:-------------------------| +| TiDB | tidb_port | 4000 | the communication port for the application and DBA tools | +| TiDB | tidb_status_port | 10080 | the communication port to report TiDB status | +| TiKV | tikv_port | 20160 | the TiKV communication port | +| PD | pd_client_port | 2379 | the communication port between TiDB and PD | +| PD | pd_peer_port | 2380 | the inter-node communication port within the PD cluster | +| Pump | pump_port | 8250 | the pump communication port | +| Prometheus | prometheus_port | 9090 | the communication port for the Prometheus service | +| Pushgateway | pushgateway_port | 9091 | the aggregation and report port for TiDB, TiKV, and PD monitor | +| Node_exporter | node_exporter_port | 9100 | the communication port to report the system information of every TiDB cluster node | +| Grafana | grafana_port | 3000 | the port for the external Web monitoring service and client (Browser) access | +| Grafana | grafana_collector_port | 8686 | the grafana_collector communication port, used to export Dashboard as the PDF format | +| Kafka_exporter | kafka_exporter_port | 9308 | the communication port for Kafka_exporter, used to monitor the binlog Kafka cluster | + +### How to customize the deployment directory? 
+ +Edit the `inventory.ini` file and add the following host variable after the IP of the corresponding service: + +| Component | Variable Directory | Default Directory | Description | +|:--------------|:----------------------|:------------------------------|:-----| +| Global | deploy_dir | /home/tidb/deploy | the deployment directory | +| TiDB | tidb_log_dir | {{ deploy_dir }}/log | the TiDB log directory | +| TiKV | tikv_log_dir | {{ deploy_dir }}/log | the TiKV log directory | +| TiKV | tikv_data_dir | {{ deploy_dir }}/data | the data directory | +| TiKV | wal_dir | "" | the rocksdb write-ahead log directory, consistent with the TiKV data directory when the value is null | +| TiKV | raftdb_path | "" | the raftdb directory, being tikv_data_dir/raft when the value is null | +| PD | pd_log_dir | {{ deploy_dir }}/log | the PD log directory | +| PD | pd_data_dir | {{ deploy_dir }}/data.pd | the PD data directory | +| Pump | pump_log_dir | {{ deploy_dir }}/log | the Pump log directory | +| Pump | pump_data_dir | {{ deploy_dir }}/data.pump | the Pump data directory | +| Prometheus | prometheus_log_dir | {{ deploy_dir }}/log | the Prometheus log directory | +| Prometheus | prometheus_data_dir | {{ deploy_dir }}/data.metrics | the Prometheus data directory | +| Pushgateway | pushgateway_log_dir | {{ deploy_dir }}/log | the pushgateway log directory | +| Node_exporter | node_exporter_log_dir | {{ deploy_dir }}/log | the node_exporter log directory | +| Grafana | grafana_log_dir | {{ deploy_dir }}/log | the Grafana log directory | +| Grafana | grafana_data_dir | {{ deploy_dir }}/data.grafana | the Grafana data directory | + +### How to check whether the NTP service is normal? + +1. Run the following command. 
If it returns `running`, then the NTP service is running:

    ```
    $ sudo systemctl status ntpd.service
    ntpd.service - Network Time Service
    Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
    Active: active (running) since Mon 2017-12-18 13:13:19 CST; 3s ago
    ```

2. Run the `ntpstat` command. If it returns `synchronised to NTP server` (synchronizing with the NTP server), then the synchronization process is normal.

    ```
    $ ntpstat
    synchronised to NTP server (85.199.214.101) at stratum 2
    time correct to within 91 ms
    polling server every 1024 s
    ```

> **Note:** For the Ubuntu system, you need to install the `ntpstat` package.

- The following condition indicates the NTP service is not synchronizing normally:

    ```
    $ ntpstat
    unsynchronised
    ```

- The following condition indicates the NTP service is not running normally:

    ```
    $ ntpstat
    Unable to talk to NTP daemon. Is it running?
    ```

- To make the NTP service start synchronizing as soon as possible, run the following commands. You can replace `pool.ntp.org` with other NTP servers.

    ```
    $ sudo systemctl stop ntpd.service
    $ sudo ntpdate pool.ntp.org
    $ sudo systemctl start ntpd.service
    ```

- To install the NTP service manually on the CentOS 7 system, run the following commands:

    ```
    $ sudo yum install ntp ntpdate
    $ sudo systemctl start ntpd.service
    $ sudo systemctl enable ntpd.service
    ```

### How to modify the supervision method of a process from `supervise` to `systemd`?

Edit the value of the `process_supervision` variable in the `inventory.ini` file:

```
# process supervision, [systemd, supervise]
process_supervision = systemd
```

For versions earlier than TiDB 1.0.4, the TiDB-Ansible supervision method of a process is `supervise` by default. The previously installed cluster can remain the same.
If you need to change the supervision method to `systemd`, stop the cluster and run the following commands:

```
ansible-playbook stop.yml
ansible-playbook deploy.yml -D
ansible-playbook start.yml
```

### How to manually configure the SSH mutual trust and sudo without password?

Log in to the deployment target machine using the `root` user account, create the `tidb` user and set the login password.

```
# useradd tidb
# passwd tidb
```

To configure sudo without password, run the following command, and add `tidb ALL=(ALL) NOPASSWD: ALL` to the end of the file:

```
# visudo
tidb ALL=(ALL) NOPASSWD: ALL
```

Use the `tidb` user to log in to the Control Machine, and run the following command. Replace `172.16.10.61` with the IP of your deployment target machine, and enter the `tidb` user password of the deployment target machine as prompted. Successful execution indicates that SSH mutual trust is already created. This applies to other machines as well.

```
[tidb@172.16.10.49 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.10.61
```

Log in to the Control Machine using the `tidb` user account, and log in to the IP of the target machine using SSH. If you do not need to enter the password and can successfully log in, then the SSH mutual trust is successfully configured.

```
[tidb@172.16.10.49 ~]$ ssh 172.16.10.61
[tidb@172.16.10.61 ~]$
```

After you log in to the deployment target machine using the `tidb` user, run the following command. If you do not need to enter the password and can switch to the `root` user, then sudo without password of the `tidb` user is successfully configured.
+

```
[tidb@172.16.10.61 ~]$ sudo -su root
[root@172.16.10.61 tidb]#
```

### Error: You need to install jmespath prior to running json_query filter

See [Install Ansible and its dependencies on the Control Machine](#step-4-install-ansible-and-its-dependencies-on-the-control-machine) and use `pip` to install Ansible and the related specific dependencies on the Control Machine. The dependent `jmespath` package is installed by default.

Enter `import jmespath` in the Python interactive window of the Control Machine.

- If no error is displayed, the dependency is successfully installed.
- If the `ImportError: No module named jmespath` error is displayed, the Python `jmespath` module is not successfully installed.

```
$ python
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import jmespath
```

### The `zk: node does not exist` error when starting Pump/Drainer

Check whether the `zookeeper_addrs` configuration in `inventory.ini` is the same as the configuration in the Kafka cluster, and whether the namespace is filled in. The description of the namespace configuration is as follows:

```
# ZooKeeper connection string (see ZooKeeper docs for details).
# ZooKeeper address of the Kafka cluster. Example:
# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
# You can also append an optional chroot string to the URLs to specify the root directory for all Kafka znodes.
# Example:
# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181/kafka/123"
```
diff --git a/v2.0/op-guide/ansible-operation.md b/v2.0/op-guide/ansible-operation.md
new file mode 100755
index 0000000000000..8ecc8af11380f
--- /dev/null
+++ b/v2.0/op-guide/ansible-operation.md
@@ -0,0 +1,43 @@
---
title: TiDB-Ansible Common Operations
summary: Learn some common operations when using TiDB-Ansible to administer a TiDB cluster.
category: operations
---

# TiDB-Ansible Common Operations

This guide describes the common operations when you administer a TiDB cluster using TiDB-Ansible.

## Start a cluster

```bash
$ ansible-playbook start.yml
```

This operation starts all the components in the entire TiDB cluster in order, including PD, TiDB, TiKV, and the monitoring components.

## Stop a cluster

```bash
$ ansible-playbook stop.yml
```

This operation stops all the components in the entire TiDB cluster in order, including PD, TiDB, TiKV, and the monitoring components.

## Clean up cluster data

```bash
$ ansible-playbook unsafe_cleanup_data.yml
```

This operation stops the TiDB, Pump, TiKV and PD services, and cleans up the data directories of Pump, TiKV and PD.

## Destroy a cluster

```bash
$ ansible-playbook unsafe_cleanup.yml
```

This operation stops the cluster and cleans up the data directory.

> **Note:** If the deployment directory is a mount point, an error is reported, but the execution result is unaffected, so you can ignore it.
\ No newline at end of file
diff --git a/v2.0/op-guide/backup-restore.md b/v2.0/op-guide/backup-restore.md
new file mode 100755
index 0000000000000..e95e57f58caee
--- /dev/null
+++ b/v2.0/op-guide/backup-restore.md
@@ -0,0 +1,122 @@
---
title: Backup and Restore
summary: Learn how to back up and restore the data of TiDB.
category: operations
---

# Backup and Restore

## About

This document describes how to back up and restore the data of TiDB.
Currently, this document only covers full backup and restoration.

Here we assume that the TiDB service information is as follows:

|Name|Address|Port|User|Password|
|:----:|:-------:|:----:|:----:|:------:|
|TiDB|127.0.0.1|4000|root|*|

Use the following tools for data backup and restoration:

- `mydumper`: to export data from TiDB
- `loader`: to import data into TiDB

### Download TiDB toolset (Linux)

```bash
# Download the tool package.
wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256

# Check the file integrity. If the result is OK, the file is correct.
sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256

# Extract the package.
tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz
cd tidb-enterprise-tools-latest-linux-amd64
```

## Full backup and restoration using `mydumper`/`loader`

You can use `mydumper` to export data from TiDB and `loader` to import the data back into TiDB.

> **Note**: Although TiDB also supports the official `mysqldump` tool from MySQL for data migration, it is not recommended. Its performance is much lower than `mydumper`/`loader` and it takes a long time to migrate large amounts of data. `mydumper`/`loader` is more powerful. For more information, see https://github.com/maxbube/mydumper.

### Best practices of full backup and restoration using `mydumper`/`loader`

To quickly back up and restore data (especially large amounts of data), refer to the following recommendations:

- Keep the exported data files as small as possible; it is recommended to keep them within 64M. You can use the `-F` parameter to set the value.
- You can adjust the `-t` parameter of `loader` based on the number and the load of TiKV instances. For example, if there are three TiKV instances, `-t` can be set to 3 * (1 ~ n).
If the load of TiKV is too high and the log `backoffer.maxSleep 15000ms is exceeded` is displayed many times, decrease the value of `-t`; otherwise, increase it.

#### An example of restoring data and related configuration

- The total size of the exported files is 214G. A single table has 8 columns and 2 billion rows.
- The cluster topology:
    - 12 TiKV instances: 4 nodes, 3 TiKV instances per node
    - 4 TiDB instances
    - 3 PD instances
- The configuration of each node:
    - CPU: Intel Xeon E5-2670 v3 @ 2.30GHz
    - 48 vCPU [2 x 12 physical cores]
    - Memory: 128G
    - Disk: sda [RAID 10, 300G], sdb [RAID 5, 2T]
    - Operating System: CentOS 7.3
- The `-F` parameter of `mydumper` is set to 16 and the `-t` parameter of `loader` is set to 64.

**Results**: It takes 11 hours to import all the data, which is 19.4G/hour.

### Back up data from TiDB

Use `mydumper` to back up data from TiDB.

```bash
./bin/mydumper -h 127.0.0.1 -P 4000 -u root -t 16 -F 64 -B test -T t1,t2 --skip-tz-utc -o ./var/test
```

In this command:

- `-B test`: means the data is exported from the `test` database.
- `-T t1,t2`: means only the `t1` and `t2` tables are exported.
- `-t 16`: means 16 threads are used to export the data.
- `-F 64`: means a table is partitioned into chunks and one chunk is 64MB.
- `--skip-tz-utc`: the purpose of adding this parameter is to ignore the inconsistency of time zone settings between MySQL and the data exporting machine and to disable automatic conversion.

### Restore data into TiDB

To restore data into TiDB, use `loader` to import the previously exported data. See [Loader instructions](../tools/loader.md) for more information.
+

```bash
./bin/loader -h 127.0.0.1 -u root -P 4000 -t 32 -d ./var/test
```

After the data is imported, you can view the data in TiDB using the MySQL client:

```sql
mysql -h127.0.0.1 -P4000 -uroot

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| t1 |
| t2 |
+----------------+

mysql> select * from t1;
+----+------+
| id | age |
+----+------+
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
+----+------+

mysql> select * from t2;
+----+------+
| id | name |
+----+------+
| 1 | a |
| 2 | b |
| 3 | c |
+----+------+
```
\ No newline at end of file
diff --git a/v2.0/op-guide/configuration.md b/v2.0/op-guide/configuration.md
new file mode 100755
index 0000000000000..c9cc20974c54f
--- /dev/null
+++ b/v2.0/op-guide/configuration.md
@@ -0,0 +1,299 @@
---
title: Configuration Flags
summary: Learn some configuration flags of TiDB, TiKV and PD.
category: operations
---

# Configuration Flags

TiDB, TiKV and PD are configurable using command-line flags and environment variables.

## TiDB

The default TiDB ports are 4000 for client requests and 10080 for status report.

### `--advertise-address`

- The IP address on which to advertise the apiserver to the TiDB server
- Default: ""
- This address must be reachable by the rest of the TiDB cluster and the user.

### `--binlog-socket`

- The TiDB services use the unix socket file for internal connections, such as the Pump service
- Default: ""
- You can use "/tmp/pump.sock" to accept the communication of the Pump unix socket file.

### `--config`

- The configuration file
- Default: ""
- If you have specified the configuration file, TiDB reads the configuration file. If the corresponding configuration also exists in the command line flags, TiDB uses the configuration in the command line flags to overwrite that in the configuration file.
For detailed configuration information, see [TiDB Configuration File Description](tidb-config-file.md).

### `--host`

- The host address that the TiDB server listens on
- Default: "0.0.0.0"
- The TiDB server listens on this address.
- "0.0.0.0" listens on all network interfaces by default. If you have multiple network interfaces, specify the one that provides service, such as 192.168.100.113.

### `-L`

- The log level
- Default: "info"
- You can choose from "debug", "info", "warn", "error", or "fatal".

### `--log-file`

- The log file
- Default: ""
- If this flag is not set, logs are output to "stderr". If this flag is set, logs are output to the corresponding file, which is automatically rotated in the early morning every day, and the previous file is renamed as a backup.

### `--log-slow-query`

- The directory for the slow query log
- Default: ""
- If this flag is not set, logs are written to the file specified by `--log-file` by default.

### `--metrics-addr`

- The Prometheus Pushgateway address
- Default: ""
- Leaving it empty stops the Prometheus client from pushing.
- The format is:

    ```
    --metrics-addr=192.168.100.115:9091
    ```

### `--metrics-interval`

- The Prometheus client push interval in seconds
- Default: 15s
- Setting the value to 0 stops the Prometheus client from pushing.

### `-P`

- The listening port for TiDB services
- Default: "4000"
- The TiDB server accepts MySQL client requests from this port.

### `--path`

- The path to the data directory for a local storage engine such as "mocktikv"
- For `--store = tikv`, you must specify the path; for `--store = mocktikv`, the default value is used if you do not specify the path.
- For a distributed storage engine like TiKV, `--path` specifies the actual PD address.
Assuming that you deploy the PD server on 192.168.100.113:2379, 192.168.100.114:2379 and 192.168.100.115:2379, the value of `--path` is "192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379".
- Default: "/tmp/tidb"
- You can use `tidb-server --store=mocktikv --path=""` to enable a pure in-memory TiDB.

### `--proxy-protocol-networks`

- The list of proxy server IP addresses allowed by the PROXY protocol; if you need to configure multiple addresses, separate them using ",".
- Default: ""
- Leaving it empty disables the PROXY protocol. The value can be an IP address (192.168.1.50) or CIDR (192.168.1.0/24). "*" means any IP address.

### `--proxy-protocol-header-timeout`

- Timeout for the PROXY protocol header read
- Default: 5 (seconds)
- Generally use the default value and do not set it to 0. The unit is second.

### `--report-status`

- To enable (true) or disable (false) the status report and pprof tool
- Default: true
- The value can be (true) or (false). (true) is to enable metrics and pprof. (false) is to disable metrics and pprof.

### `--run-ddl`

- Whether the `tidb-server` runs DDL statements; set this flag when the number of `tidb-server` instances is over two in the cluster
- Default: true
- The value can be (true) or (false). (true) indicates the `tidb-server` runs DDL itself. (false) indicates the `tidb-server` does not run DDL itself.

### `--socket string`

- The TiDB services use the unix socket file for external connections.
- Default: ""
- You can use "/tmp/tidb.sock" to open the unix socket file.

### `--status`

- The status report port for the TiDB server
- Default: "10080"
- This is used to get server internal data. The data includes [Prometheus metrics](https://prometheus.io/) and [pprof](https://golang.org/pkg/net/http/pprof/).
- Prometheus metrics can be obtained through "http://host:status_port/metrics".
- Pprof data can be obtained through "http://host:status_port/debug/pprof".
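As an illustration only, the TiDB flags described above can be combined into a single startup command. The IP addresses, PD endpoints, and log path below are assumptions for this sketch, not values mandated by this guide:

```bash
# Hypothetical example: start a tidb-server that uses TiKV as the storage
# engine, listens on the default client port (4000) and status port (10080),
# connects to an assumed three-node PD cluster, and writes logs to the
# deployment directory.
./bin/tidb-server --store=tikv \
    --path="192.168.100.113:2379,192.168.100.114:2379,192.168.100.115:2379" \
    --host=0.0.0.0 \
    -P 4000 \
    --status=10080 \
    --log-file=/home/tidb/deploy/log/tidb.log
```

With such a setup, the Prometheus metrics would be served at "http://host:10080/metrics" as described above.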
+

### `--store`

- To specify the storage engine used by TiDB in the bottom layer
- Default: "mocktikv"
- You can choose "mocktikv" or "tikv". ("mocktikv" is the local storage engine; "tikv" is a distributed storage engine)

### `--token-limit`

- The number of sessions allowed to run concurrently in TiDB. It is used for traffic control.
- Default: 1000
- If the number of concurrent sessions is larger than `token-limit`, the request is blocked, waiting for the operations that have finished to release tokens.

### `-V`

- Output the version of TiDB
- Default: ""

## Placement Driver (PD)

### `--advertise-client-urls`

- The advertise URL list for client traffic from outside
- Default: ${client-urls}
- If the client cannot connect to PD through the default listening client URLs, you must manually set the advertise client URLs explicitly.
- For example, the internal IP address of Docker is 172.17.0.1, while the IP address of the host is 192.168.100.113 and the port mapping is set to `-p 2379:2379`. In this case, you can set `--advertise-client-urls` to "http://192.168.100.113:2379". The client can find this service through "http://192.168.100.113:2379".

### `--advertise-peer-urls`

- The advertise URL list for peer traffic from outside
- Default: ${peer-urls}
- If the peer cannot connect to PD through the default listening peer URLs, you must manually set the advertise peer URLs explicitly.
- For example, the internal IP address of Docker is 172.17.0.1, while the IP address of the host is 192.168.100.113 and the port mapping is set to `-p 2380:2380`. In this case, you can set `--advertise-peer-urls` to "http://192.168.100.113:2380". The other PD nodes can find this service through "http://192.168.100.113:2380".
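The Docker scenario described above can be sketched as a full command. This is an assumption-laden illustration (the image tag, container name, and host IP 192.168.100.113 are hypothetical), not a prescribed deployment:

```bash
# Hypothetical example: run PD in a container with its ports mapped to the
# host. The container listens on 0.0.0.0 internally, but advertises the host
# address so that clients and PD peers outside the container can reach it.
docker run -d --name pd1 \
    -p 2379:2379 -p 2380:2380 \
    pingcap/pd:latest \
    --name=pd1 \
    --client-urls="http://0.0.0.0:2379" \
    --advertise-client-urls="http://192.168.100.113:2379" \
    --peer-urls="http://0.0.0.0:2380" \
    --advertise-peer-urls="http://192.168.100.113:2380"
```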
+

### `--client-urls`

- The listening URL list for client traffic
- Default: "http://127.0.0.1:2379"
- To deploy a cluster, you must use `--client-urls` to specify the IP address of the current host, such as "http://192.168.100.113:2379". If the cluster runs on Docker, specify the IP address of Docker as "http://0.0.0.0:2379".

### `--peer-urls`

- The listening URL list for peer traffic
- Default: "http://127.0.0.1:2380"
- To deploy a cluster, you must use `--peer-urls` to specify the IP address of the current host, such as "http://192.168.100.113:2380". If the cluster runs on Docker, specify the IP address of Docker as "http://0.0.0.0:2380".

### `--config`

- The configuration file
- Default: ""
- If you set the configuration using the command line, the same setting in the configuration file will be overwritten.

### `--data-dir`

- The path to the data directory
- Default: "default.${name}"

### `--initial-cluster`

- The initial cluster configuration for bootstrapping
- Default: "{name}=http://{advertise-peer-url}"
- For example, if `name` is "pd", and `advertise-peer-urls` is "http://192.168.100.113:2380", the `initial-cluster` is "pd=http://192.168.100.113:2380".
- If you need to start three PD servers, the `initial-cluster` might be:

    ```
    pd1=http://192.168.100.113:2380, pd2=http://192.168.100.114:2380, pd3=http://192.168.100.115:2380
    ```

### `--join`

- Join the cluster dynamically
- Default: ""
- If you want to join an existing cluster, use `--join="${advertise-client-urls}"`, where `advertise-client-urls` is that of any existing PD member; multiple advertise client URLs are separated by commas.

### `-L`

- The log level
- Default: "info"
- You can choose from debug, info, warn, error, or fatal.

### `--log-file`

- The log file
- Default: ""
- If this flag is not set, logs will be written to stderr. Otherwise, logs will be stored in the log file, which will be automatically rotated every day.
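Putting the PD flags above together, here is a sketch of starting the first member of a three-node cluster. The IP addresses are reused from the examples above; the data directory and log file name are assumptions:

```bash
# Hypothetical example: start the first of three PD members. Run an analogous
# command on the other two hosts, changing --name, --client-urls, and
# --peer-urls accordingly; --initial-cluster stays identical on all members.
./bin/pd-server --name=pd1 \
    --data-dir=default.pd1 \
    --client-urls="http://192.168.100.113:2379" \
    --peer-urls="http://192.168.100.113:2380" \
    --initial-cluster="pd1=http://192.168.100.113:2380,pd2=http://192.168.100.114:2380,pd3=http://192.168.100.115:2380" \
    --log-file=pd.log
```

A later member could instead be added dynamically to the running cluster with `--join="http://192.168.100.113:2379"`, as described under `--join`.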
+

### `--log-rotate`

- To enable or disable log rotation
- Default: true
- When the value is true, follow the `[log.file]` in PD configuration files.

### `--name`

- The human-readable unique name for this PD member
- Default: "pd"
- If you want to start multiple PD servers, you must use a different name for each one.

### `--cacert`

- The file path of CA, used to enable TLS
- Default: ""

### `--cert`

- The path of the PEM file including the X509 certificate, used to enable TLS
- Default: ""

### `--key`

- The path of the PEM file including the X509 key, used to enable TLS
- Default: ""

### `--namespace-classifier`

- To specify the namespace classifier used by PD
- Default: "table"
- If you use TiKV separately, not in the entire TiDB cluster, it is recommended to set the value to "default".

## TiKV

TiKV supports some readable unit conversions for command-line parameters.

- File size (based on byte): KB, MB, GB, TB, PB (or lowercase)
- Time (based on ms): ms, s, m, h

### `-A, --addr`

- The address that the TiKV server listens on
- Default: "127.0.0.1:20160"
- To deploy a cluster, you must use `--addr` to specify the IP address of the current host, such as "192.168.100.113:20160". If the cluster is run on Docker, specify the IP address of Docker as "0.0.0.0:20160".

### `--advertise-addr`

- The server advertise address for client traffic from outside
- Default: ${addr}
- If the client cannot connect to TiKV through the default listening address because of Docker or NAT network, you must manually set the advertise address explicitly.
- For example, the internal IP address of Docker is 172.17.0.1, while the IP address of the host is 192.168.100.113 and the port mapping is set to `-p 20160:20160`. In this case, you can set `--advertise-addr` to "192.168.100.113:20160". The client can find this service through 192.168.100.113:20160.
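A sketch of a TiKV startup command combining `--addr` and `--advertise-addr` with the `--pd` and `--data-dir` flags described below. The host address and PD endpoints are assumptions reused from the examples in this document:

```bash
# Hypothetical example: start a TiKV server behind Docker port mapping.
# It listens on all interfaces inside the container but advertises the host
# address so that TiDB and PD can reach it from outside.
./bin/tikv-server --addr="0.0.0.0:20160" \
    --advertise-addr="192.168.100.113:20160" \
    --data-dir=/tmp/tikv/store \
    --pd="192.168.100.113:2379,192.168.100.114:2379,192.168.100.115:2379"
```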
+

### `-C, --config`

- The config file
- Default: ""
- If you set the configuration using the command line, the same setting in the config file will be overwritten.

### `--capacity`

- The store capacity
- Default: 0 (unlimited)
- PD uses this flag to determine how to balance the TiKV servers. (Tip: you can use 10GB instead of 1073741824)

### `--data-dir`

- The path to the data directory
- Default: "/tmp/tikv/store"

### `-L, --Log`

- The log level
- Default: "info"
- You can choose from trace, debug, info, warn, error, or off.

### `--log-file`

- The log file
- Default: ""
- If this flag is not set, logs will be written to stderr. Otherwise, logs will be stored in the log file which will be automatically rotated every day.

### `--pd`

- The address list of PD servers
- Default: ""
- To make TiKV work, you must use the value of `--pd` to connect the TiKV server to the PD server. Separate multiple PD addresses using commas, for example "192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379".
diff --git a/v2.0/op-guide/dashboard-overview-info.md b/v2.0/op-guide/dashboard-overview-info.md
new file mode 100755
index 0000000000000..0d816f5c5149d
--- /dev/null
+++ b/v2.0/op-guide/dashboard-overview-info.md
@@ -0,0 +1,77 @@
---
title: Key Metrics
summary: Learn some key metrics displayed on the Grafana Overview dashboard.
category: operations
---

# Key Metrics

If you use Ansible to deploy the TiDB cluster, the monitoring system is deployed at the same time. For more information, see [Overview of the Monitoring Framework](monitor-overview.md).

The Grafana dashboard is divided into a series of sub-dashboards, including Overview, PD, TiDB, TiKV, Node\_exporter, Disk Performance, and so on. Many metrics are available to help you diagnose problems.
+ +For routine operations, you can get an overview of the component (PD, TiDB, TiKV) status and the entire cluster from the Overview dashboard, where the key metrics are displayed. This document provides a detailed description of these key metrics. + +## Key metrics description + +To understand the key metrics displayed on the Overview dashboard, check the following table: + +Service | Panel Name | Description | Normal Range +---- | ---------------- | ---------------------------------- | -------------- +Services Port Status | Services Online | the online nodes number of each service | +Services Port Status | Services Offline | the offline nodes number of each service | +PD | Storage Capacity | the total storage capacity of the TiDB cluster | +PD | Current Storage Size | the occupied storage capacity of the TiDB cluster | +PD | Number of Regions | the total number of Regions of the current cluster | +PD | Leader Balance Ratio | the leader ratio difference of the nodes with the biggest leader ratio and the smallest leader ratio | It is less than 5% for a balanced situation and becomes bigger when you restart a node. +PD | Region Balance Ratio | the region ratio difference of the nodes with the biggest Region ratio and the smallest Region ratio | It is less than 5% for a balanced situation and becomes bigger when you add or remove a node. +PD | Store Status -- Up Stores | the number of TiKV nodes that are up | +PD | Store Status -- Disconnect Stores | the number of TiKV nodes that encounter abnormal communication within a short time | +PD | Store Status -- LowSpace Stores | the number of TiKV nodes with an available space of less than 80% | +PD | Store Status -- Down Stores | the number of TiKV nodes that are down | The normal value is `0`. If the number is bigger than `0`, it means some node(s) are abnormal. 
+
PD | Store Status -- Offline Stores | the number of TiKV nodes (still providing service) that are being made offline |
PD | Store Status -- Tombstone Stores | the number of TiKV nodes that are successfully offline |
PD | 99% completed_cmds_duration_seconds | the 99th percentile duration to complete a pd-server request | less than 5ms
PD | handle_requests_duration_seconds | the request duration of a PD request |
TiDB | Statement OPS | the total number of executed SQL statements, including `SELECT`, `INSERT`, `UPDATE` and so on |
TiDB | Duration | the execution time of a SQL statement |
TiDB | QPS By Instance | the QPS on each TiDB instance |
TiDB | Failed Query OPM | the number of failed SQL statements, including syntax errors, key conflicts, and so on |
TiDB | Connection Count | the connection number of each TiDB instance |
TiDB | Heap Memory Usage | the size of heap memory used by each TiDB instance |
TiDB | Transaction OPS | the number of executed transactions per second |
TiDB | Transaction Duration | the execution time of a transaction |
TiDB | KV Cmd OPS | the number of executed KV commands |
TiDB | KV Cmd Duration 99 | the execution time of the KV command |
TiDB | PD TSO OPS | the number of TSO that TiDB obtains from PD |
TiDB | PD TSO Wait Duration | the time consumed when TiDB obtains TSO from PD |
TiDB | TiClient Region Error OPS | the number of Region related errors returned by TiKV |
TiDB | Lock Resolve OPS | the number of transaction related conflicts |
TiDB | Load Schema Duration | the time consumed when TiDB obtains Schema from TiKV |
TiDB | KV Backoff OPS | the number of errors returned by TiKV (such as transaction conflicts) |
TiKV | leader | the number of leaders on each TiKV node |
TiKV | region | the number of Regions on each TiKV node |
TiKV | CPU | the CPU usage ratio on each TiKV node |
TiKV | Memory | the memory usage on each TiKV node |
TiKV | store size | the data amount on each TiKV node |
TiKV | cf size | the
data amount on different CFs in the cluster |
TiKV | channel full | `No data points` is displayed in normal conditions. If a monitoring value is displayed, it means the corresponding TiKV node fails to handle the messages |
TiKV | server report failures | `No data points` is displayed in normal conditions. If `Unreachable` is displayed, it means TiKV encounters a communication issue. |
TiKV | scheduler pending commands | the number of commits on queue | Occasional value peaks are normal.
TiKV | coprocessor pending requests | the number of requests on queue | `0` or very small
TiKV | coprocessor executor count | the number of various query operations |
TiKV | coprocessor request duration | the time consumed by TiKV queries |
TiKV | raft store CPU | the CPU usage ratio of the raftstore thread | Currently, it is a single thread. A value of over 80% indicates that the CPU usage ratio is very high.
TiKV | Coprocessor CPU | the CPU usage ratio of the TiKV query thread, related to the application; complex queries consume a great deal of CPU |
System Info | Vcores | the number of CPU cores |
System Info | Memory | the total memory |
System Info | CPU Usage | the CPU usage ratio, 100% at a maximum |
System Info | Load [1m] | the average system load within 1 minute |
System Info | Memory Available | the size of the available memory |
System Info | Network Traffic | the statistics of the network traffic |
System Info | TCP Retrans | the statistics of TCP retransmissions |
System Info | IO Util | the disk usage ratio, 100% at a maximum; generally you need to consider adding a new node when the usage ratio is up to 80% ~ 90% |

## Interface of the Overview dashboard

![Overview Dashboard](../media/overview.png)
\ No newline at end of file
diff --git a/v2.0/op-guide/docker-compose.md b/v2.0/op-guide/docker-compose.md
new file mode 100755
index 0000000000000..3fd0236269be9
--- /dev/null
+++ b/v2.0/op-guide/docker-compose.md
@@ -0,0 +1,176 @@
---
title: TiDB
Docker Compose Deployment +summary: Use Docker Compose to quickly deploy a TiDB testing cluster. +category: operations +--- + +# TiDB Docker Compose Deployment + +This document describes how to quickly deploy a TiDB testing cluster with a single command using [Docker Compose](https://docs.docker.com/compose/overview). + +With Docker Compose, you can use a YAML file to configure application services in multiple containers. Then, with a single command, you can create and start all the services from your configuration. + +## Prerequisites + +Make sure you have installed the following items on your machine: + +- Docker (17.06.0 or later) +- Docker Compose +- Git + +## Deploy TiDB using Docker Compose + +1. Download `tidb-docker-compose`. + + ```bash + git clone https://github.com/pingcap/tidb-docker-compose.git + ``` + +2. Create and start the cluster. + + ```bash + cd tidb-docker-compose && docker-compose pull # Get the latest Docker images + docker-compose up -d + ``` + +3. Access the cluster. + + ```bash + mysql -h 127.0.0.1 -P 4000 -u root + ``` + + Access the Grafana monitoring interface: + + - Default address: + - Default account name: admin + - Default password: admin + + Access the [cluster data visualization interface](https://github.com/pingcap/tidb-vision): + +## Customize the cluster + +After the deployment is completed, the following components are deployed by default: + +- 3 PD instances, 3 TiKV instances, 1 TiDB instance +- Monitoring components: Prometheus, Pushgateway, Grafana +- Data visualization component: tidb-vision + +To customize the cluster, you can edit the `docker-compose.yml` file directly. It is recommended to generate `docker-compose.yml` using the [Helm](https://helm.sh) template engine, because manual editing is tedious and error-prone. + +1. Install Helm. + + [Helm](https://helm.sh) can be used as a template rendering engine. 
To use Helm, you only need to download its binary file:

    ```bash
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
    ```

    For macOS, you can also install Helm using Homebrew:

    ```bash
    brew install kubernetes-helm
    ```

2. Download `tidb-docker-compose`.

    ```bash
    git clone https://github.com/pingcap/tidb-docker-compose.git
    ```

3. Customize the cluster.

    ```bash
    cd tidb-docker-compose
    cp compose/values.yaml values.yaml
    vim values.yaml
    ```

    You can modify the configuration in `values.yaml`, such as the cluster size, TiDB image version, and so on.

    [tidb-vision](https://github.com/pingcap/tidb-vision) is the data visualization interface of the TiDB cluster, used to visually display the PD scheduling on TiKV data. If you do not need this component, leave `tidbVision` empty.

    For PD, TiKV, TiDB and tidb-vision, you can build Docker images from GitHub source code or local files for development and testing.

    - To build the image of a component from GitHub source code, you need to leave the `image` field empty and set `buildFrom` to `remote`.
    - To build PD, TiKV or TiDB images from the locally compiled binary file, you need to leave the `image` field empty, set `buildFrom` to `local` and copy the compiled binary file to the corresponding `pd/bin/pd-server`, `tikv/bin/tikv-server`, `tidb/bin/tidb-server`.
    - To build the tidb-vision image from local files, you need to leave the `image` field empty, set `buildFrom` to `local` and copy the tidb-vision project to `tidb-vision/tidb-vision`.

4. Generate the `docker-compose.yml` file.

    ```bash
    helm template -f values.yaml compose > generated-docker-compose.yml
    ```

5. Create and start the cluster using the generated `docker-compose.yml` file.

    ```bash
    docker-compose -f generated-docker-compose.yml pull # Get the latest Docker images
    docker-compose -f generated-docker-compose.yml up -d
    ```

6.
Access the cluster. + + ```bash + mysql -h 127.0.0.1 -P 4000 -u root + ``` + + Access the Grafana monitoring interface: + + - Default address: + - Default account name: admin + - Default password: admin + + If tidb-vision is enabled, you can access the cluster data visualization interface: . + +## Access the Spark shell and load TiSpark + +Insert some sample data to the TiDB cluster: + +```bash +$ docker-compose exec tispark-master bash +$ cd /opt/spark/data/tispark-sample-data +$ mysql -h tidb -P 4000 -u root < dss.ddl +``` + +After the sample data is loaded into the TiDB cluster, you can access the Spark shell using `docker-compose exec tispark-master /opt/spark/bin/spark-shell`. + +```bash +$ docker-compose exec tispark-master /opt/spark/bin/spark-shell +... +Spark context available as 'sc' (master = local[*], app id = local-1527045927617). +Spark session available as 'spark'. +Welcome to + ____ __ + / __/__ ___ _____/ /__ + _\ \/ _ \/ _ `/ __/ '_/ + /___/ .__/\_,_/_/ /_/\_\ version 2.1.1 + /_/ + +Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_172) +Type in expressions to have them evaluated. +Type :help for more information. + +scala> import org.apache.spark.sql.TiContext +... +scala> val ti = new TiContext(spark) +... +scala> ti.tidbMapDatabase("TPCH_001") +... +scala> spark.sql("select count(*) from lineitem").show ++--------+ +|count(1)| ++--------+ +| 60175| ++--------+ +``` + +You can also access Spark with Python or R using the following commands: + +``` +docker-compose exec tispark-master /opt/spark/bin/pyspark +docker-compose exec tispark-master /opt/spark/bin/sparkR +``` + +For more details about TiSpark, see [here](../tispark/tispark-quick-start-guide.md). + +Here is [a 5-minute tutorial](https://www.pingcap.com/blog/how_to_spin_up_an_htap_database_in_5_minutes_with_tidb_tispark/) for macOS users that shows how to spin up a standard TiDB cluster using Docker Compose on your local computer. 
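As a footnote on the customization workflow above: the `helm template` step is, at its core, plain template rendering — values from `values.yaml` are substituted into a compose-file skeleton. The toy Python sketch below (hypothetical keys, not Helm and not the real chart) illustrates the idea:

```python
from string import Template

# Toy illustration of what `helm template -f values.yaml compose` does at a
# much larger scale: substitute configured values into a compose skeleton.
# The keys below are hypothetical, for illustration only.
values = {"tidb_image": "pingcap/tidb:latest", "tidb_port": "4000"}

compose_template = Template(
    "services:\n"
    "  tidb:\n"
    "    image: $tidb_image\n"
    "    ports:\n"
    "      - \"$tidb_port:4000\"\n"
)

rendered = compose_template.substitute(values)
print(rendered)
```

This is why editing `values.yaml` and re-rendering is preferred over hand-editing `generated-docker-compose.yml`: the values stay in one small file and the repetitive service definitions are produced mechanically.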
\ No newline at end of file
diff --git a/v2.0/op-guide/docker-deployment.md b/v2.0/op-guide/docker-deployment.md
new file mode 100755
index 0000000000000..b9812b09a0d67
--- /dev/null
+++ b/v2.0/op-guide/docker-deployment.md
@@ -0,0 +1,203 @@
+---
+title: Deploy TiDB Using Docker
+summary: Use Docker to manually deploy a multi-node TiDB cluster on multiple machines.
+category: operations
+---
+
+# Deploy TiDB Using Docker
+
+This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker.
+
+To learn more, see [TiDB architecture](../overview.md#tidb-architecture) and [Software and Hardware Requirements](recommendation.md).
+
+## Preparation
+
+Before you start, make sure that you have:
+
++ Installed the latest version of [Docker](https://www.docker.com/products/docker)
++ Pulled the latest images of TiDB, TiKV and PD from [Docker Hub](https://hub.docker.com). If not, pull the images using the following commands:
+
+  ```bash
+  docker pull pingcap/tidb:latest
+  docker pull pingcap/tikv:latest
+  docker pull pingcap/pd:latest
+  ```
+
+## Multi-node deployment
+
+Assume that we have 6 machines with the following details:
+
+| Host Name | IP            | Services   | Data Path |
+| --------- | ------------- | ---------- | --------- |
+| **host1** | 192.168.1.101 | PD1 & TiDB | /data     |
+| **host2** | 192.168.1.102 | PD2        | /data     |
+| **host3** | 192.168.1.103 | PD3        | /data     |
+| **host4** | 192.168.1.104 | TiKV1      | /data     |
+| **host5** | 192.168.1.105 | TiKV2      | /data     |
+| **host6** | 192.168.1.106 | TiKV3      | /data     |
+
+### 1.
Start PD
+
+Start PD1 on **host1**:
+
+```bash
+docker run -d --name pd1 \
+  -p 2379:2379 \
+  -p 2380:2380 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  pingcap/pd:latest \
+  --name="pd1" \
+  --data-dir="/data/pd1" \
+  --client-urls="http://0.0.0.0:2379" \
+  --advertise-client-urls="http://192.168.1.101:2379" \
+  --peer-urls="http://0.0.0.0:2380" \
+  --advertise-peer-urls="http://192.168.1.101:2380" \
+  --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380"
+```
+
+Start PD2 on **host2**:
+
+```bash
+docker run -d --name pd2 \
+  -p 2379:2379 \
+  -p 2380:2380 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  pingcap/pd:latest \
+  --name="pd2" \
+  --data-dir="/data/pd2" \
+  --client-urls="http://0.0.0.0:2379" \
+  --advertise-client-urls="http://192.168.1.102:2379" \
+  --peer-urls="http://0.0.0.0:2380" \
+  --advertise-peer-urls="http://192.168.1.102:2380" \
+  --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380"
+```
+
+Start PD3 on **host3**:
+
+```bash
+docker run -d --name pd3 \
+  -p 2379:2379 \
+  -p 2380:2380 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  pingcap/pd:latest \
+  --name="pd3" \
+  --data-dir="/data/pd3" \
+  --client-urls="http://0.0.0.0:2379" \
+  --advertise-client-urls="http://192.168.1.103:2379" \
+  --peer-urls="http://0.0.0.0:2380" \
+  --advertise-peer-urls="http://192.168.1.103:2380" \
+  --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380"
+```
+
+### 2.
Start TiKV
+
+Start TiKV1 on **host4**:
+
+```bash
+docker run -d --name tikv1 \
+  -p 20160:20160 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  pingcap/tikv:latest \
+  --addr="0.0.0.0:20160" \
+  --advertise-addr="192.168.1.104:20160" \
+  --data-dir="/data/tikv1" \
+  --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379"
+```
+
+Start TiKV2 on **host5**:
+
+```bash
+docker run -d --name tikv2 \
+  -p 20160:20160 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  pingcap/tikv:latest \
+  --addr="0.0.0.0:20160" \
+  --advertise-addr="192.168.1.105:20160" \
+  --data-dir="/data/tikv2" \
+  --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379"
+```
+
+Start TiKV3 on **host6**:
+
+```bash
+docker run -d --name tikv3 \
+  -p 20160:20160 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  pingcap/tikv:latest \
+  --addr="0.0.0.0:20160" \
+  --advertise-addr="192.168.1.106:20160" \
+  --data-dir="/data/tikv3" \
+  --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379"
+```
+
+### 3. Start TiDB
+
+Start TiDB on **host1**:
+
+```bash
+docker run -d --name tidb \
+  -p 4000:4000 \
+  -p 10080:10080 \
+  -v /etc/localtime:/etc/localtime:ro \
+  pingcap/tidb:latest \
+  --store=tikv \
+  --path="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379"
+```
+
+### 4. Use the MySQL client to connect to TiDB
+
+Install the [MySQL client](http://dev.mysql.com/downloads/mysql/) on **host1** and run:
+
+```bash
+$ mysql -h 127.0.0.1 -P 4000 -u root -D test
+mysql> show databases;
++--------------------+
+| Database           |
++--------------------+
+| INFORMATION_SCHEMA |
+| PERFORMANCE_SCHEMA |
+| mysql              |
+| test               |
++--------------------+
+4 rows in set (0.00 sec)
+```
+
+### How to customize the configuration file
+
+You can start TiKV and PD with a specified configuration file that includes some advanced parameters, for performance tuning.
+
+Assume that the paths to the configuration files of PD and TiKV on the host are `/path/to/config/pd.toml` and `/path/to/config/tikv.toml`.
+
+You can start TiKV and PD as follows:
+
+```bash
+docker run -d --name tikv1 \
+  -p 20160:20160 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  -v /path/to/config/tikv.toml:/tikv.toml:ro \
+  pingcap/tikv:latest \
+  --addr="0.0.0.0:20160" \
+  --advertise-addr="192.168.1.104:20160" \
+  --data-dir="/data/tikv1" \
+  --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" \
+  --config="/tikv.toml"
+```
+
+```bash
+docker run -d --name pd1 \
+  -p 2379:2379 \
+  -p 2380:2380 \
+  -v /etc/localtime:/etc/localtime:ro \
+  -v /data:/data \
+  -v /path/to/config/pd.toml:/pd.toml:ro \
+  pingcap/pd:latest \
+  --name="pd1" \
+  --data-dir="/data/pd1" \
+  --client-urls="http://0.0.0.0:2379" \
+  --advertise-client-urls="http://192.168.1.101:2379" \
+  --peer-urls="http://0.0.0.0:2380" \
+  --advertise-peer-urls="http://192.168.1.101:2380" \
+  --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" \
+  --config="/pd.toml"
+```
diff --git a/v2.0/op-guide/gc.md b/v2.0/op-guide/gc.md
new file mode 100755
index 0000000000000..2eb2793890747
--- /dev/null
+++ b/v2.0/op-guide/gc.md
@@ -0,0 +1,86 @@
+---
+title: TiDB Garbage Collection (GC)
+summary: Use Garbage Collection (GC) to clear the obsolete data of TiDB.
+category: advanced
+---
+
+# TiDB Garbage Collection (GC)
+
+TiDB uses MVCC to control concurrency. When you update or delete data, the original data is not deleted immediately but is kept for a period during which it can be read. Thus the write and read operations are not mutually exclusive, and it is possible to read history versions of the data.
+
+Data versions that exceed a specific age and are no longer used are cleared; otherwise, they would occupy disk space and affect TiDB's performance.
TiDB uses Garbage Collection (GC) to clear the obsolete data.
+
+## Working mechanism
+
+GC runs periodically on TiDB. When a TiDB server is started, a `gc_worker` is enabled in the background. In each TiDB cluster, one `gc_worker` is elected as the leader, which maintains the GC status and sends GC commands to all the TiKV Region leaders.
+
+## Configuration and monitoring
+
+The GC configuration and operational status are recorded in the `mysql.tidb` system table as below. They can be monitored and configured using SQL statements:
+
+```sql
+mysql> select VARIABLE_NAME, VARIABLE_VALUE from mysql.tidb;
++-----------------------+------------------------------------------------------------------------------------------------+
+| VARIABLE_NAME         | VARIABLE_VALUE                                                                                 |
++-----------------------+------------------------------------------------------------------------------------------------+
+| bootstrapped          | True                                                                                           |
+| tidb_server_version   | 18                                                                                             |
+| tikv_gc_leader_uuid   | 58accebfa7c0004                                                                                |
+| tikv_gc_leader_desc   | host:ip-172-16-30-5, pid:95472, start at 2018-04-11 13:43:30.73076656 +0800 CST m=+0.068873865 |
+| tikv_gc_leader_lease  | 20180418-11:02:30 +0800 CST                                                                    |
+| tikv_gc_run_interval  | 10m0s                                                                                          |
+| tikv_gc_life_time     | 10m0s                                                                                          |
+| tikv_gc_last_run_time | 20180418-10:59:30 +0800 CST                                                                    |
+| tikv_gc_safe_point    | 20180418-10:58:30 +0800 CST                                                                    |
+| tikv_gc_concurrency   | 1                                                                                              |
++-----------------------+------------------------------------------------------------------------------------------------+
+10 rows in set (0.02 sec)
+```
+
+In the table above, `tikv_gc_run_interval`, `tikv_gc_life_time` and `tikv_gc_concurrency` can be configured manually. The other variables with the `tikv_gc` prefix record the current status and are automatically updated by TiDB. Do not modify these variables.
+
+- `tikv_gc_leader_uuid`, `tikv_gc_leader_desc`, `tikv_gc_leader_lease`: the current GC leader information.
+
+- `tikv_gc_run_interval`: the interval of GC work.
The value is 10 min by default and cannot be smaller than 10 min.
+
+- `tikv_gc_life_time`: the retention period of data versions. The value is 10 min by default and cannot be smaller than 10 min.
+
+  When GC runs, data versions older than this period are cleared. You can set the value using a SQL statement. For example, to retain the data of the last day, execute the statement below:
+
+  ```sql
+  update mysql.tidb set VARIABLE_VALUE = '24h' where VARIABLE_NAME = 'tikv_gc_life_time';
+  ```
+
+  The duration strings are a sequence of numbers with time units, such as 24h, 2h30m and 2.5h. The time units you can use include "h", "m" and "s".
+
+  > **Note**: When you set `tikv_gc_life_time` to a large value (like days or even months) in a scenario where data is updated frequently, the following problems may occur:
+
+  - The more versions of the data, the more disk storage space is occupied.
+  - A large number of history versions might slow down queries, especially range queries like `select count(*) from t`.
+  - If `tikv_gc_life_time` is suddenly reduced to a much smaller value during operation, a great deal of old data might be deleted in a short time, causing I/O pressure.
+
+- `tikv_gc_last_run_time`: the last time GC ran.
+
+- `tikv_gc_safe_point`: the time before which versions are cleared by GC and after which versions are readable.
+
+- `tikv_gc_concurrency`: the GC concurrency. It is set to 1 by default. In this case, a single thread sends a request to each Region and waits for the responses one by one. You can set a larger value to improve performance, but keep it smaller than 128.
+
+## Implementation details
+
+The GC implementation is complex: the obsolete data must be cleared while data consistency is guaranteed. GC proceeds in the following steps:
+
+### 1. Resolve locks
+
+The TiDB transaction model is inspired by Google's Percolator.
It's mainly a two-phase commit protocol with some practical optimizations. When the first phase is finished, all the related keys are locked. Among these locks, one is the primary lock and the others are secondary locks which contain a pointer of the primary locks; in the secondary phase, the key with the primary lock gets a write record and its lock is removed. The write record indicates the write or delete operation in the history or the transactional rollback record of this key. Replacing the primary lock with which write record indicates whether the corresponding transaction is committed successfully. Then all the secondary locks are replaced successively. If the threads fail to replace the secondary locks, these locks are retained. During GC, the lock whose timestamp is before the safe point is replaced with the corresponding write record based on the transaction committing status. + +> **Note**: This is a required step. Once GC has cleared the write record of the primary lock, you can never know whether this transaction is successful or not. As a result, data consistency cannot be guaranteed. + +### 2. Delete ranges + +`DeleteRanges` is usually executed after operations like `drop table`, used to delete a range which might be very large. If the `use_delete_range` option of TiKV is not enabled, TiKV deletes the keys in the range. + +### 3. Do GC + +Clear the data before the safe point of each key and the write record. + +> **Note**: If the last record in all the write records of `Put` and `Delete` types before the safe point is `Put`, this record and its data cannot be deleted directly. Otherwise, you cannot successfully perform the read operation whose timestamp is after the safe point and before the next version of the key. 
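As an aside, the duration strings accepted by `tikv_gc_life_time` and `tikv_gc_run_interval` are easy to validate programmatically. The helper below is a hypothetical illustration (not part of TiDB) that converts such strings to seconds, assuming only the "h", "m" and "s" units described above:

```python
import re

# Hypothetical helper: parse Go-style duration strings such as "24h",
# "2h30m", "2.5h" or "10m0s" into a total number of seconds.
def parse_duration(value):
    units = {"h": 3600, "m": 60, "s": 1}
    matches = re.findall(r"(\d+(?:\.\d+)?)([hms])", value)
    # Reject anything that is not a clean sequence of <number><unit> pairs.
    if not matches or "".join(n + u for n, u in matches) != value:
        raise ValueError("invalid duration: %r" % value)
    return sum(float(n) * units[u] for n, u in matches)

print(parse_duration("10m0s"))  # 600.0, the default tikv_gc_life_time
print(parse_duration("24h"))    # 86400.0, retains one day of history
print(parse_duration("2h30m"))  # 9000.0
```

Such a check is useful before writing a value into `mysql.tidb`, since a malformed duration would otherwise only fail inside the cluster.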
\ No newline at end of file diff --git a/v2.0/op-guide/generate-self-signed-certificates.md b/v2.0/op-guide/generate-self-signed-certificates.md new file mode 100755 index 0000000000000..f10fc612943c8 --- /dev/null +++ b/v2.0/op-guide/generate-self-signed-certificates.md @@ -0,0 +1,155 @@ +--- +title: Generate Self-signed Certificates +summary: Use `cfssl` to generate self-signed certificates. +category: deployment +--- + +# Generate Self-signed Certificates + +## Overview + +This document describes how to generate self-signed certificates using `cfssl`. + +Assume that the topology of the instance cluster is as follows: + +| Name | Host IP | Services | +| ----- | ----------- | ---------- | +| node1 | 172.16.10.1 | PD1, TiDB1 | +| node2 | 172.16.10.2 | PD2, TiDB2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1 | +| node5 | 172.16.10.5 | TiKV2 | +| node6 | 172.16.10.6 | TiKV3 | + +## Download `cfssl` + +Assume that the host is x86_64 Linux: + +```bash +mkdir ~/bin +curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 +curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 +chmod +x ~/bin/{cfssl,cfssljson} +export PATH=$PATH:~/bin +``` + +## Initialize the certificate authority + +To make it easy for modification later, generate the default configuration of `cfssl`: + +```bash +mkdir ~/cfssl +cd ~/cfssl +cfssl print-defaults config > ca-config.json +cfssl print-defaults csr > ca-csr.json +``` + +## Generate certificates + +### Certificates description + +- tidb-server certificate: used by TiDB to authenticate TiDB for other components and clients +- tikv-server certificate: used by TiKV to authenticate TiKV for other components and clients +- pd-server certificate: used by PD to authenticate PD for other components and clients +- client certificate: used to authenticate the clients from PD, TiKV and TiDB, such as `pd-ctl`, `tikv-ctl` and `pd-recover` + +### Configure the CA option + +Edit `ca-config.json` 
according to your need: + +```json +{ + "signing": { + "default": { + "expiry": "43800h" + }, + "profiles": { + "server": { + "expiry": "43800h", + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ] + }, + "client": { + "expiry": "43800h", + "usages": [ + "signing", + "key encipherment", + "client auth" + ] + } + } + } +} +``` + +Edit `ca-csr.json` according to your need: + +```json +{ + "CN": "My own CA", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [ + { + "C": "CN", + "L": "Beijing", + "O": "PingCAP", + "ST": "Beijing" + } + ] +} +``` + +### Generate the CA certificate + +```bash +cfssl gencert -initca ca-csr.json | cfssljson -bare ca - +``` + +The command above generates the following files: + +```bash +ca-key.pem +ca.csr +ca.pem +``` + +### Generate the server certificate + +The IP address of all components and `127.0.0.1` are included in `hostname`. + +```bash +echo '{"CN":"tidb-server","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="172.16.10.1,172.16.10.2,127.0.0.1" - | cfssljson -bare tidb-server + +echo '{"CN":"tikv-server","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="172.16.10.4,172.16.10.5,172.16.10.6,127.0.0.1" - | cfssljson -bare tikv-server + +echo '{"CN":"pd-server","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="172.16.10.1,172.16.10.2,172.16.10.3,127.0.0.1" - | cfssljson -bare pd-server +``` + +The command above generates the following files: + +```Bash +tidb-server-key.pem tikv-server-key.pem pd-server-key.pem +tidb-server.csr tikv-server.csr pd-server.csr +tidb-server.pem tikv-server.pem pd-server.pem +``` + +### Generate the client certificate + +```bash +echo 
'{"CN":"client","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client -hostname="" - | cfssljson -bare client +``` + +The command above generates the following files: + +```bash +client-key.pem +client.csr +client.pem +``` \ No newline at end of file diff --git a/v2.0/op-guide/history-read.md b/v2.0/op-guide/history-read.md new file mode 100755 index 0000000000000..3d1989b95e25a --- /dev/null +++ b/v2.0/op-guide/history-read.md @@ -0,0 +1,145 @@ +--- +title: Reading Data from History Versions +summary: Learn about how TiDB reads data from history versions. +category: advanced +--- + +# Reading Data From History Versions + +This document describes how TiDB reads data from the history versions, how TiDB manages the data versions, as well as an example to show how to use the feature. + +## Feature description + +TiDB implements a feature to read history data using the standard SQL interface directly without special clients or drivers. By using this feature, +- Even when data is updated or removed, its history versions can be read using the SQL interface. +- Even if the table structure changes after the data is updated, TiDB can use the old structure to read the history data. + +## How TiDB reads data from history versions + +The `tidb_snapshot` system variable is introduced to support reading history data. About the `tidb_snapshot` variable: + +- The variable is valid in the `Session` scope. +- Its value can be modified using the `Set` statement. +- The data type for the variable is text. +- The variable accepts TSO (Timestamp Oracle) and datetime. TSO is a globally unique time service, which is obtained from PD. The acceptable datetime format is "2016-10-08 16:45:26.999". Generally, the datetime can be set using second precision, for example "2016-10-08 16:45:26". 
+- When the variable is set, TiDB creates a Snapshot using its value as the timestamp. The Snapshot is only a lightweight data structure, so creating it has no overhead. After that, all the `Select` operations read data from this Snapshot.
+
+> **Note:** Because the timestamp in TiDB transactions is allocated by Placement Driver (PD), the version of the stored data is also marked based on the timestamp allocated by PD. When a Snapshot is created, the version number is based on the value of the `tidb_snapshot` variable. If there is a large difference between the local time of the TiDB server and the PD server, use the time of the PD server.
+
+After reading data from history versions, you can read data from the latest version again by ending the current Session or using the `Set` statement to set the value of the `tidb_snapshot` variable to "" (empty string).
+
+## How TiDB manages the data versions
+
+TiDB implements Multi-Version Concurrency Control (MVCC) to manage data versions. The history versions of data are kept because each update/removal creates a new version of the data object instead of updating/removing the data object in place. But not all the versions are kept. If the versions are older than a specific time, they are removed completely to reduce the storage occupancy and the performance overhead caused by too many history versions.
+
+In TiDB, Garbage Collection (GC) runs periodically to remove the obsolete data versions. For GC details, see [TiDB Garbage Collection (GC)](gc.md).
+
+Pay special attention to the following two variables:
+
+- `tikv_gc_life_time`: It is used to configure the retention time of the history versions. You can modify it manually.
+- `tikv_gc_safe_point`: It records the current `safePoint`. You can safely create a snapshot to read the history data using a timestamp later than `safePoint`. `safePoint` is automatically updated every time GC runs.
+
+## Example
+
+1.
At the initial stage, create a table and insert several rows of data: + + ```sql + mysql> create table t (c int); + Query OK, 0 rows affected (0.01 sec) + + mysql> insert into t values (1), (2), (3); + Query OK, 3 rows affected (0.00 sec) + ``` + +2. View the data in the table: + + ```sql + mysql> select * from t; + +------+ + | c | + +------+ + | 1 | + | 2 | + | 3 | + +------+ + 3 rows in set (0.00 sec) + ``` + +3. View the timestamp of the table: + + ```sql + mysql> select now(); + +---------------------+ + | now() | + +---------------------+ + | 2016-10-08 16:45:26 | + +---------------------+ + 1 row in set (0.00 sec) + ``` + +4. Update the data in one row: + + ```sql + mysql> update t set c=22 where c=2; + Query OK, 1 row affected (0.00 sec) + ``` + +5. Make sure the data is updated: + + ```sql + mysql> select * from t; + +------+ + | c | + +------+ + | 1 | + | 22 | + | 3 | + +------+ + 3 rows in set (0.00 sec) + ``` + +6. Set the `tidb_snapshot` variable whose scope is Session. The variable is set so that the latest version before the value can be read. + + > **Note:** In this example, the value is set to be the time before the update operation. + + ```sql + mysql> set @@tidb_snapshot="2016-10-08 16:45:26"; + Query OK, 0 rows affected (0.00 sec) + ``` + + > **Note:** You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable. + + **Result:** The read from the following statement is the data before the update operation, which is the history data. + + ```sql + mysql> select * from t; + +------+ + | c | + +------+ + | 1 | + | 2 | + | 3 | + +------+ + 3 rows in set (0.00 sec) + ``` + +7. 
Set the `tidb_snapshot` variable to "" (empty string) and you can read the data from the latest version:
+
+   ```sql
+   mysql> set @@tidb_snapshot="";
+   Query OK, 0 rows affected (0.00 sec)
+   ```
+
+   ```sql
+   mysql> select * from t;
+   +------+
+   | c    |
+   +------+
+   |    1 |
+   |   22 |
+   |    3 |
+   +------+
+   3 rows in set (0.00 sec)
+   ```
diff --git a/v2.0/op-guide/horizontal-scale.md b/v2.0/op-guide/horizontal-scale.md
new file mode 100755
index 0000000000000..368d7c0658fba
--- /dev/null
+++ b/v2.0/op-guide/horizontal-scale.md
@@ -0,0 +1,123 @@
+---
+title: Scale a TiDB cluster
+summary: Learn how to add or delete PD, TiKV and TiDB nodes.
+category: operations
+---
+
+# Scale a TiDB cluster
+
+## Overview
+
+The capacity of a TiDB cluster can be increased or reduced without affecting online services.
+
+> **Note:** If your TiDB cluster is deployed using Ansible, see [Scale the TiDB Cluster Using TiDB-Ansible](ansible-deployment-scale.md).
+
+The following sections show you how to add or delete PD, TiKV or TiDB nodes.
+
+For details about `pd-ctl` usage, see [PD Control User Guide](../tools/pd-control.md).
+
+## PD
+
+Assume that we have three PD servers with the following details:
+
+| Name | ClientUrls        | PeerUrls          |
+|:-----|:------------------|:------------------|
+| pd1  | http://host1:2379 | http://host1:2380 |
+| pd2  | http://host2:2379 | http://host2:2380 |
+| pd3  | http://host3:2379 | http://host3:2380 |
+
+Get the information about the existing PD nodes through pd-ctl:
+
+```bash
+./pd-ctl -u http://host1:2379
+>> member
+```
+
+### Add a node dynamically
+
+Add a new PD server to the current PD cluster by using the parameter `join`.
+To add `pd4`, you only need to specify the client URL of any PD server in the PD cluster in the `--join` parameter, like:
+
+```bash
+./bin/pd-server --name=pd4 \
+                --client-urls="http://host4:2379" \
+                --peer-urls="http://host4:2380" \
+                --join="http://host1:2379"
+```
+
+### Delete a node dynamically
+
+Delete `pd4` through pd-ctl:
+
+```bash
+./pd-ctl -u http://host1:2379
+>> member delete pd4
+```
+
+### Migrate a node dynamically
+
+To migrate a node to a new machine, first add a node on the new machine and then delete the node on the old machine.
+Because you can only migrate one node at a time, to migrate multiple nodes you need to repeat the above steps until all the nodes are migrated. After completing each step, you can verify the process by checking the information of all nodes.
+
+## TiKV
+
+Get the information about the existing TiKV nodes through pd-ctl:
+
+```bash
+./pd-ctl -u http://host1:2379
+>> store
+```
+
+### Add a node dynamically
+
+Adding a new TiKV server dynamically is easy: just start a TiKV server on the new machine.
+The newly started TiKV server automatically registers in the existing PD of the cluster. To reduce the pressure on the existing TiKV servers, PD balances the load automatically, gradually migrating some data to the new TiKV server.
+
+### Delete a node dynamically
+
+To safely delete (take offline) a TiKV server, you need to inform PD in advance. After that, PD can migrate the data on this TiKV server to other TiKV servers, ensuring that the data has enough replicas.
+
+Assume that you need to delete the TiKV server whose store id is 1. You can complete this through pd-ctl:
+
+```bash
+./pd-ctl -u http://host1:2379
+>> store delete 1
+```
+
+Then you can check the state of this TiKV server:
+
+```bash
+./pd-ctl -u http://host1:2379
+>> store 1
+{
+  "store": {
+    "id": 1,
+    "address": "127.0.0.1:21060",
+    "state": 1,
+    "state_name": "Offline"
+  },
+  "status": {
+    ...
+  }
+}
+```
+
+You can verify the state of this store using `state_name`:
+
+- `state_name=Up`: This store is in service.
+- `state_name=Disconnected`: The heartbeats of this store cannot be detected currently, which might be caused by a failure or network interruption.
+- `state_name=Down`: PD has not received heartbeats from the TiKV store for more than an hour (the time can be configured using `max-down-time`). At this time, PD adds a replica for the data on this store.
+- `state_name=Offline`: This store is shutting down, but it is still in service.
+- `state_name=Tombstone`: This store is shut down and has no data on it, so the instance can be deleted.
+
+### Migrate a node dynamically
+
+To migrate TiKV servers to a new machine, you also need to add nodes on the new machine and then take all the nodes on the old machine offline.
+In the process of migration, you can first add all the machines of the new cluster to the existing cluster, and then take the old nodes offline one by one.
+To verify whether a node has been taken offline, check the state information of the node being processed. After verifying, you can take the next node offline.
+
+## TiDB
+
+TiDB is a stateless server, which means it can be added or deleted directly.
+Note that if you deploy a proxy (such as HAProxy) in front of TiDB, you need to update the proxy configuration and reload it.
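As a recap of the `state_name` values described in the "Delete a node dynamically" section above, the check for when an instance can actually be removed is small enough to sketch. The helper below is a hypothetical illustration (not a TiDB or pd-ctl API):

```python
# Hypothetical sketch of the TiKV store states described above; not a TiDB API.
STORE_STATES = {
    "Up": "in service",
    "Disconnected": "heartbeats cannot be detected currently",
    "Down": "no heartbeats for more than max-down-time; PD adds replicas",
    "Offline": "shutting down, but still in service",
    "Tombstone": "shut down with no data left on the store",
}

def can_delete_instance(state_name):
    """Only a Tombstone store is safe to remove from the cluster."""
    if state_name not in STORE_STATES:
        raise ValueError("unknown store state: %r" % state_name)
    return state_name == "Tombstone"

print(can_delete_instance("Offline"))    # False: the store is still in service
print(can_delete_instance("Tombstone"))  # True: the instance can be deleted
```

The point of the sketch: `Offline` is not the end state — an operator script should keep polling `store <id>` until the state reaches `Tombstone` before removing the instance.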
diff --git a/v2.0/op-guide/location-awareness.md b/v2.0/op-guide/location-awareness.md
new file mode 100755
index 0000000000000..2cb2778fd4dc8
--- /dev/null
+++ b/v2.0/op-guide/location-awareness.md
@@ -0,0 +1,87 @@
+---
+title: Cross-Region Deployment
+summary: Learn the cross-region deployment that maximizes the capacity for disaster recovery.
+category: operations
+---
+
+# Cross-Region Deployment
+
+## Overview
+
+PD schedules replicas according to the topology of the TiKV cluster to maximize the TiKV cluster's capability for disaster recovery.
+
+Before you begin, see [Deploy TiDB Using Ansible (Recommended)](ansible-deployment.md) and [Deploy TiDB Using Docker](docker-deployment.md).
+
+## TiKV reports the topological information
+
+TiKV reports the topological information to PD according to the startup parameter or configuration of TiKV.
+
+Assuming that the topology has three levels (zone > rack > host), use labels to specify the following information:
+
+Startup parameter:
+
+```
+tikv-server --labels zone=<zone>,rack=<rack>,host=<host>
+```
+
+Configuration:
+
+``` toml
+[server]
+labels = "zone=<zone>,rack=<rack>,host=<host>"
+```
+
+## PD understands the TiKV topology
+
+PD gets the topology of the TiKV cluster through the PD configuration.
+
+``` toml
+[replication]
+max-replicas = 3
+location-labels = ["zone", "rack", "host"]
+```
+
+The value of `location-labels` needs to correspond to the TiKV `labels` names so that PD understands that these `labels` represent the TiKV topology.
+
+## PD schedules based on the TiKV topology
+
+PD makes optimal scheduling according to the topological information. You only need to decide what kind of topology achieves the desired effect.
+
+If you use 3 replicas and hope that the TiDB cluster is always highly available even when a data zone goes down, you need at least 4 data zones.
+
+Assume that you have 4 data zones, each zone has 2 racks, and each rack has 2 hosts.
You can start 2 TiKV instances on each host: + +``` +# zone=z1 +tikv-server --labels zone=z1,rack=r1,host=h1 +tikv-server --labels zone=z1,rack=r1,host=h2 +tikv-server --labels zone=z1,rack=r2,host=h1 +tikv-server --labels zone=z1,rack=r2,host=h2 + +# zone=z2 +tikv-server --labels zone=z2,rack=r1,host=h1 +tikv-server --labels zone=z2,rack=r1,host=h2 +tikv-server --labels zone=z2,rack=r2,host=h1 +tikv-server --labels zone=z2,rack=r2,host=h2 + +# zone=z3 +tikv-server --labels zone=z3,rack=r1,host=h1 +tikv-server --labels zone=z3,rack=r1,host=h2 +tikv-server --labels zone=z3,rack=r2,host=h1 +tikv-server --labels zone=z3,rack=r2,host=h2 + +# zone=z4 +tikv-server --labels zone=z4,rack=r1,host=h1 +tikv-server --labels zone=z4,rack=r1,host=h2 +tikv-server --labels zone=z4,rack=r2,host=h1 +tikv-server --labels zone=z4,rack=r2,host=h2 +``` + +In other words, 16 TiKV instances are distributed across 4 data zones, 8 racks and 16 machines. + +In this case, PD will schedule different replicas of each datum to different data zones. + +- If one of the data zones goes down, the high availability of the TiDB cluster is not affected. +- If the data zone cannot recover within a period of time, PD will remove the replica from this data zone. + +To sum up, PD maximizes the disaster recovery of the cluster according to the current topology. Therefore, if you want to reach a certain level of disaster recovery, deploy many machines in different sites according to the topology. The number of machines must be more than the number of `max-replicas`. diff --git a/v2.0/op-guide/migration-overview.md b/v2.0/op-guide/migration-overview.md new file mode 100755 index 0000000000000..f8128cac8d698 --- /dev/null +++ b/v2.0/op-guide/migration-overview.md @@ -0,0 +1,141 @@ +--- +title: Migration Overview +summary: Learn how to migrate data from MySQL to TiDB. +category: operations +--- + +# Migration Overview + +## Overview + +This document describes how to migrate data from MySQL to TiDB in detail. 
+ +See the following for the assumed MySQL and TiDB server information: + +|Name|Address|Port|User|Password| +|----|-------|----|----|--------| +|MySQL|127.0.0.1|3306|root|* | +|TiDB|127.0.0.1|4000|root|* | + +## Scenarios + ++ To import all the history data. This needs the following tools: + - `Checker`: to check if the schema is compatible with TiDB. + - `Mydumper`: to export data from MySQL. + - `Loader`: to import data to TiDB. + ++ To incrementally synchronize data after all the history data is imported. This needs the following tools: + - `Checker`: to check if the schema is compatible with TiDB. + - `Mydumper`: to export data from MySQL. + - `Loader`: to import data to TiDB. + - `Syncer`: to incrementally synchronize data from MySQL to TiDB. + + > **Note:** To incrementally synchronize data from MySQL to TiDB, the binary logging (binlog) must be enabled and must use the `row` format in MySQL. + +### Enable binary logging (binlog) in MySQL + +Before using the `syncer` tool, make sure: ++ Binlog is enabled in MySQL. See [Setting the Replication Master Configuration](http://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterbaseconfig.html). + ++ Binlog must use the `row` format, which is the recommended binlog format in MySQL 5.7. It can be configured using the following statement: + + ```sql + SET GLOBAL binlog_format = ROW; + ``` + +## Use the `checker` tool to check the schema + +Before migrating, you can use the `checker` tool in TiDB to check if TiDB supports the table schema of the data to be migrated. If the `checker` fails to check a certain table schema, it means that the table is not currently supported by TiDB and therefore the data in the table cannot be migrated. + +See [Download the TiDB toolset](#download-the-tidb-toolset-linux) to download the `checker` tool. + +### Download the TiDB toolset (Linux) + +```bash +# Download the tool package.
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz +wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256 + +# Check the file integrity. If the result is OK, the file is correct. +sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256 + +# Extract the package. +tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz +cd tidb-enterprise-tools-latest-linux-amd64 +``` + +### A sample to use the `checker` tool + +1. Create several tables in the `test` database in MySQL and insert data. + + ```sql + USE test; + CREATE TABLE t1 (id INT, age INT, PRIMARY KEY(id)) ENGINE=InnoDB; + CREATE TABLE t2 (id INT, name VARCHAR(256), PRIMARY KEY(id)) ENGINE=InnoDB; + + INSERT INTO t1 VALUES (1, 1), (2, 2), (3, 3); + INSERT INTO t2 VALUES (1, "a"), (2, "b"), (3, "c"); + ``` + +2. Use the `checker` tool to check all the tables in the `test` database. + + ```bash + ./bin/checker -host 127.0.0.1 -port 3306 -user root test + 2016/10/27 13:11:49 checker.go:48: [info] Checking database test + 2016/10/27 13:11:49 main.go:37: [info] Database DSN: root:@tcp(127.0.0.1:3306)/test?charset=utf8 + 2016/10/27 13:11:49 checker.go:63: [info] Checking table t1 + 2016/10/27 13:11:49 checker.go:69: [info] Check table t1 succ + 2016/10/27 13:11:49 checker.go:63: [info] Checking table t2 + 2016/10/27 13:11:49 checker.go:69: [info] Check table t2 succ + ``` + +3. Use the `checker` tool to check one of the tables in the `test` database. + + **Note:** Assuming you need to migrate the `t1` table only in this sample. + + ```bash + ./bin/checker -host 127.0.0.1 -port 3306 -user root test t1 + 2016/10/27 13:13:56 checker.go:48: [info] Checking database test + 2016/10/27 13:13:56 main.go:37: [info] Database DSN: root:@tcp(127.0.0.1:3306)/test?charset=utf8 + 2016/10/27 13:13:56 checker.go:63: [info] Checking table t1 + 2016/10/27 13:13:56 checker.go:69: [info] Check table t1 succ + Check database succ! 
+ ``` + +### A sample of a table that cannot be migrated + +1. Create the following `t_error` table in MySQL: + + ```sql + CREATE TABLE t_error ( a INT NOT NULL, PRIMARY KEY (a)) + ENGINE=InnoDB TABLESPACE ts1 + PARTITION BY RANGE (a) PARTITIONS 3 ( + PARTITION P1 VALUES LESS THAN (2), + PARTITION P2 VALUES LESS THAN (4) TABLESPACE ts2, + PARTITION P3 VALUES LESS THAN (6) TABLESPACE ts3); + ``` +2. Use the `checker` tool to check the table. If the following error is displayed, the `t_error` table cannot be migrated. + + ```bash + ./bin/checker -host 127.0.0.1 -port 3306 -user root test t_error + 2017/08/04 11:14:35 checker.go:48: [info] Checking database test + 2017/08/04 11:14:35 main.go:39: [info] Database DSN: root:@tcp(127.0.0.1:3306)/test?charset=utf8 + 2017/08/04 11:14:35 checker.go:63: [info] Checking table t1 + 2017/08/04 11:14:35 checker.go:67: [error] Check table t1 failed with err: line 3 column 29 near " ENGINE=InnoDB DEFAULT CHARSET=latin1 + /*!50100 PARTITION BY RANGE (a) + (PARTITION P1 VALUES LESS THAN (2) ENGINE = InnoDB, + PARTITION P2 VALUES LESS THAN (4) TABLESPACE = ts2 ENGINE = InnoDB, + PARTITION P3 VALUES LESS THAN (6) TABLESPACE = ts3 ENGINE = InnoDB) */" (total length 354) + github.com/pingcap/tidb/parser/yy_parser.go:96: + github.com/pingcap/tidb/parser/yy_parser.go:109: + /home/jenkins/workspace/build_tidb_tools_master/go/src/github.com/pingcap/tidb-tools/checker/checker.go:122: parse CREATE TABLE `t1` ( + `a` int(11) NOT NULL, + PRIMARY KEY (`a`) + ) /*!50100 TABLESPACE ts1 */ ENGINE=InnoDB DEFAULT CHARSET=latin1 + /*!50100 PARTITION BY RANGE (a) + (PARTITION P1 VALUES LESS THAN (2) ENGINE = InnoDB, + PARTITION P2 VALUES LESS THAN (4) TABLESPACE = ts2 ENGINE = InnoDB, + PARTITION P3 VALUES LESS THAN (6) TABLESPACE = ts3 ENGINE = InnoDB) */ error + /home/jenkins/workspace/build_tidb_tools_master/go/src/github.com/pingcap/tidb-tools/checker/checker.go:114: + 2017/08/04 11:14:35 main.go:83: [error] Check database test with 1 errors and 0 
warnings. + ``` diff --git a/v2.0/op-guide/migration.md b/v2.0/op-guide/migration.md new file mode 100755 index 0000000000000..f2aab97f906c0 --- /dev/null +++ b/v2.0/op-guide/migration.md @@ -0,0 +1,255 @@ +--- +title: Migrate Data from MySQL to TiDB +summary: Use `mydumper`, `loader` and `syncer` tools to migrate data from MySQL to TiDB. +category: operations +--- + +# Migrate Data from MySQL to TiDB + +## Use the `mydumper` / `loader` tool to export and import all the data + +You can use `mydumper` to export data from MySQL and `loader` to import the data into TiDB. + +> **Note:** Although TiDB also supports the official `mysqldump` tool from MySQL for data migration, it is not recommended to use it. Its performance is much lower than `mydumper` / `loader` and it takes much time to migrate large amounts of data. `mydumper`/`loader` is more powerful. For more information, see [https://github.com/maxbube/mydumper](https://github.com/maxbube/mydumper). + +### Export data from MySQL + +Use the `mydumper` tool to export data from MySQL by using the following command: + +```bash +./bin/mydumper -h 127.0.0.1 -P 3306 -u root -t 16 -F 64 -B test -T t1,t2 --skip-tz-utc -o ./var/test +``` +In this command, + +- `-B test`: means the data is exported from the `test` database. +- `-T t1,t2`: means only the `t1` and `t2` tables are exported. +- `-t 16`: means 16 threads are used to export the data. +- `-F 64`: means a table is partitioned into chunks and one chunk is 64MB. +- `--skip-tz-utc`: the purpose of adding this parameter is to ignore the inconsistency of time zone setting between MySQL and the data exporting machine and to disable automatic conversion. + +> **Note**: On the Cloud platforms which require the `super privilege`, such as on the Aliyun platform, add the `--no-locks` parameter to the command. If not, you might get the error message that you don't have the privilege. + +### Import data to TiDB + +Use `loader` to import the data from MySQL to TiDB. 
See [Loader instructions](./tools/loader.md) for more information. + +```bash +./bin/loader -h 127.0.0.1 -u root -P 4000 -t 32 -d ./var/test +``` + +After the data is imported, you can view the data in TiDB using the MySQL client: + +```sql +mysql -h127.0.0.1 -P4000 -uroot + +mysql> show tables; ++----------------+ +| Tables_in_test | ++----------------+ +| t1 | +| t2 | ++----------------+ + +mysql> select * from t1; ++----+------+ +| id | age | ++----+------+ +| 1 | 1 | +| 2 | 2 | +| 3 | 3 | ++----+------+ + +mysql> select * from t2; ++----+------+ +| id | name | ++----+------+ +| 1 | a | +| 2 | b | +| 3 | c | ++----+------+ +``` + +### Best practice + +To migrate data quickly, especially for a huge amount of data, you can refer to the following recommendations. + +- Keep each exported data file as small as possible; it is recommended to keep it within 64M. You can use the `-F` parameter to set the value. +- You can adjust the `-t` parameter of `loader` based on the number and the load of TiKV instances. For example, if there are three TiKV instances, `-t` can be set to 3 * (1 ~ n). If the load of TiKV is too high and the log `backoffer.maxSleep 15000ms is exceeded` is displayed many times, decrease the value of `-t`; otherwise, increase it. + +### A sample and the configuration + + - The total size of the exported files is 214G. A single table has 8 columns and 2 billion rows. + - The cluster topology: + - 12 TiKV instances: 4 nodes, 3 TiKV instances per node + - 4 TiDB instances + - 3 PD instances + - The configuration of each node: + - CPU: Intel Xeon E5-2670 v3 @ 2.30GHz + - 48 vCPU [2 x 12 physical cores] + - Memory: 128G + - Disk: sda [RAID 10, 300G] sdb [RAID 5, 2T] + - Operating System: CentOS 7.3 + - The `-F` parameter of `mydumper` is set to 16 and the `-t` parameter of `loader` is set to 64. + +**Results**: It takes 11 hours to import all the data, which is 19.4G/hour.
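The `-t` rule of thumb from the best practice above can be sketched as a small calculation. The instance count and multiplier below are hypothetical starting points, not recommendations from the TiDB team:

```shell
# Pick an initial loader thread count from the TiKV instance count
# ("-t" = instances * k, per the rule of thumb above), then back off
# if TiKV is overloaded.
tikv_instances=3
k=2                              # hypothetical starting multiplier
t=$(( tikv_instances * k ))
echo "start loader with -t $t"
# If "backoffer.maxSleep 15000ms is exceeded" appears repeatedly in the
# TiKV logs, lower -t and retry:
t=$(( t - 1 ))
echo "retry with -t $t"
```

In practice you would iterate on `-t` while watching the TiKV load, rather than computing it once.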
+ +## Use the `syncer` tool to import data incrementally (optional) + +The previous section introduces how to import all the history data from MySQL to TiDB using `mydumper`/`loader`. But this is not applicable if the data in MySQL is updated after the migration and it is expected to import the updated data quickly. + +Therefore, TiDB provides the `syncer` tool for an incremental data import from MySQL to TiDB. + +See [Download the TiDB enterprise toolset](#download-the-tidb-enterprise-toolset-linux) to download the `syncer` tool. + +### Download the TiDB enterprise toolset (Linux) + +```bash +# Download the enterprise tool package. +wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz +wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256 + +# Check the file integrity. If the result is OK, the file is correct. +sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256 + +# Extract the package. +tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz +cd tidb-enterprise-tools-latest-linux-amd64 +``` + +Assume that the data from `t1` and `t2` has already been imported to TiDB using `mydumper`/`loader`, and that you now want any updates to these two tables to be synchronized to TiDB in real time. + +### Obtain the position to synchronize + +The data exported from MySQL contains a metadata file that includes the position information. Take the following metadata information as an example: +``` +Started dump at: 2017-04-28 10:48:10 +SHOW MASTER STATUS: + Log: mysql-bin.000003 + Pos: 930143241 + GTID: + +Finished dump at: 2017-04-28 10:48:11 + +``` +The position information (`Pos: 930143241`) needs to be stored in the `syncer.meta` file for `syncer` to synchronize: + +```bash +# cat syncer.meta +binlog-name = "mysql-bin.000003" +binlog-pos = 930143241 +``` + +> **Note:** The `syncer.meta` file only needs to be configured once when it is first used. The position is automatically updated as the binlog is synchronized.
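The copy-from-`metadata` step above can also be scripted. The sketch below fakes a `metadata` file with the same shape as the example, then extracts the binlog name and position into `syncer.meta`; the file paths and the `awk` helper are assumptions for illustration, not part of the official workflow:

```shell
# Write a sample mydumper metadata file (same shape as the example above).
cat > metadata <<'EOF'
Started dump at: 2017-04-28 10:48:10
SHOW MASTER STATUS:
	Log: mysql-bin.000003
	Pos: 930143241
	GTID:

Finished dump at: 2017-04-28 10:48:11
EOF

# Pull the binlog name and position out, and write syncer.meta.
log=$(awk '/Log:/ {print $2}' metadata)
pos=$(awk '/Pos:/ {print $2}' metadata)
printf 'binlog-name = "%s"\nbinlog-pos = %s\n' "$log" "$pos" > syncer.meta
cat syncer.meta
```

With a real export you would point the script at the `metadata` file in the `mydumper` output directory instead.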
+ +### Start `syncer` + +The `config.toml` file for `syncer`: + +```toml +log-level = "info" + +server-id = 101 + +# The file path for meta: +meta = "./syncer.meta" +worker-count = 16 +batch = 10 + +# The testing address for pprof. It can also be used by Prometheus to pull the syncer metrics. +status-addr = ":10081" + +skip-sqls = ["ALTER USER", "CREATE USER"] + +# Whitelist filters are supported. You can specify the databases and tables to be synchronized. +# The rules below are mutually exclusive examples; uncomment only the ones you need. +# For example, to synchronize all the tables of db1 and db2: +#replicate-do-db = ["db1","db2"] + +# Synchronize db1.table1: +#[[replicate-do-table]] +#db-name = "db1" +#tbl-name = "table1" + +# Synchronize db3.table2: +#[[replicate-do-table]] +#db-name = "db3" +#tbl-name = "table2" + +# Regular expressions are supported. Start with '~' to use a regular expression. +# To synchronize all the databases that start with `test`: +#replicate-do-db = ["~^test.*"] + +# The sharding synchronization rules support wildcards. +# 1. The asterisk character (*, also called "star") matches zero or more characters; +# for example, "doc*" matches "doc" and "document" but not "dodo". +# The asterisk must be at the end of the wildcard word, +# and there can be only one asterisk in one wildcard word. +# 2. The question mark (?) matches exactly one character.
+#[[route-rules]] +#pattern-schema = "route_*" +#pattern-table = "abc_*" +#target-schema = "route" +#target-table = "abc" + +#[[route-rules]] +#pattern-schema = "route_*" +#pattern-table = "xyz_*" +#target-schema = "route" +#target-table = "xyz" + +[from] +host = "127.0.0.1" +user = "root" +password = "" +port = 3306 + +[to] +host = "127.0.0.1" +user = "root" +password = "" +port = 4000 + +``` +Start `syncer`: + +```bash +./bin/syncer -config config.toml +2016/10/27 15:22:01 binlogsyncer.go:226: [info] begin to sync binlog from position (mysql-bin.000003, 1280) +2016/10/27 15:22:01 binlogsyncer.go:130: [info] register slave for master server 127.0.0.1:3306 +2016/10/27 15:22:01 binlogsyncer.go:552: [info] rotate to (mysql-bin.000003, 1280) +2016/10/27 15:22:01 syncer.go:549: [info] rotate binlog to (mysql-bin.000003, 1280) +``` + +### Insert data into MySQL + +```sql +INSERT INTO t1 VALUES (4, 4), (5, 5); +``` + +### Log in to TiDB and view the data + +```sql +mysql -h127.0.0.1 -P4000 -uroot -p +mysql> select * from t1; ++----+------+ +| id | age | ++----+------+ +| 1 | 1 | +| 2 | 2 | +| 3 | 3 | +| 4 | 4 | +| 5 | 5 | ++----+------+ +``` + +`syncer` outputs the current synchronized data statistics every 30 seconds: + +```bash +2017/06/08 01:18:51 syncer.go:934: [info] [syncer]total events = 15, total tps = 130, recent tps = 4, +master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74, +syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-17 +2017/06/08 01:19:21 syncer.go:934: [info] [syncer]total events = 15, total tps = 191, recent tps = 2, +master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74, +syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-35 +``` + +You can see that by using `syncer`, the updates in MySQL are automatically synchronized to TiDB.
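The wildcard semantics described in the `route-rules` comments above (`*` matches zero or more characters, `?` matches exactly one) behave like shell glob patterns, so they can be sanity-checked with a small self-contained sketch; the `match` helper here is hypothetical, not part of `syncer`:

```shell
# "doc*" should match "doc" and "document" but not "dodo";
# "?" should match exactly one character.
match() { case "$2" in $1) echo yes ;; *) echo no ;; esac; }
match 'doc*' doc
match 'doc*' document
match 'doc*' dodo
match 'abc_?' abc_1
match 'abc_?' abc_12
```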
\ No newline at end of file diff --git a/v2.0/op-guide/monitor-overview.md b/v2.0/op-guide/monitor-overview.md new file mode 100755 index 0000000000000..b470b6e113c7a --- /dev/null +++ b/v2.0/op-guide/monitor-overview.md @@ -0,0 +1,29 @@ +--- +title: Overview of the TiDB Monitoring Framework +summary: Use Prometheus and Grafana to build the TiDB monitoring framework. +category: operations +--- + +# Overview of the Monitoring Framework + +The TiDB monitoring framework adopts two open source projects: Prometheus and Grafana. TiDB uses Prometheus to store the monitoring and performance metrics and Grafana to visualize these metrics. + +## About Prometheus in TiDB + +As a time series database, Prometheus has a multi-dimensional data model and a flexible query language. As one of the most popular open source projects, Prometheus has been adopted by many companies and organizations, and the project has a very active community. PingCAP is one of the active developers and adopters of Prometheus for monitoring and alerting in TiDB, TiKV and PD. + +Prometheus consists of multiple components. Currently, TiDB uses the following of them: + +- The Prometheus Server to scrape and store time series data. +- The client libraries to customize necessary metrics in the application. +- A Pushgateway to receive the data pushed from clients for the Prometheus server. +- An Alertmanager for the alerting mechanism. + +The diagram is as follows: + + + +## About Grafana in TiDB + +Grafana is an open source project for analyzing and visualizing metrics. TiDB uses Grafana to display the performance metrics as follows: + +![screenshot](../media/grafana-screenshot.png) diff --git a/v2.0/op-guide/monitor.md b/v2.0/op-guide/monitor.md new file mode 100755 index 0000000000000..4f69558d3377c --- /dev/null +++ b/v2.0/op-guide/monitor.md @@ -0,0 +1,245 @@ +--- +title: Monitor a TiDB Cluster +summary: Learn how to monitor the state of a TiDB cluster.
+category: operations +--- + +# Monitor a TiDB Cluster + +Currently, there are two types of interfaces to monitor the state of the TiDB cluster: + +- Using the HTTP interface to get the internal information of a component, which is called the component state interface. +- Using Prometheus to record the detailed information of the various operations in the components, which is called the Metrics interface. + +## The component state interface + +You can use this type of interface to monitor the basic information of a component. This interface can act as the interface to monitor Keepalive. In addition, the interface of the Placement Driver (PD) can get the details of the entire TiKV cluster. + +### TiDB server + +The HTTP interface of TiDB is: `http://host:port/status` + +The default port number is 10080, which can be set using the `--status` flag. + +The interface can be used to get the current TiDB server state and to determine whether the server is alive. The result is returned in the following JSON format: + +```bash +curl http://127.0.0.1:10080/status +{ + "connections": 0, + "version": "5.5.31-TiDB-1.0", + "git_hash": "b99521846ff6f71f06e2d49a3f98fa1c1d93d91b" +} +``` + +In this example, + +- connections: the current number of clients connected to the TiDB server +- version: the TiDB version number +- git_hash: the Git Hash of the current TiDB code + +### PD server + +The API address of PD is: `http://${host}:${port}/pd/api/v1/${api_name}` + +The default port number is 2379. + +See [PD API doc](https://cdn.rawgit.com/pingcap/docs/master/op-guide/pd-api-v1.html) for detailed information about various API names. + +The interface can be used to get the state of all the TiKV servers and the information about load balancing. It is the most important and frequently-used interface to get the state information of all the TiKV nodes.
See the following example for the information about a single-node TiKV cluster: + +```bash +curl http://127.0.0.1:2379/pd/api/v1/stores +{ + "count": 1, // the number of TiKV nodes + "stores": [ // the list of TiKV nodes + // the detailed information about the single TiKV node + { + "store": { + "id": 1, + "address": "127.0.0.1:22161", + "state": 0 + }, + "status": { + "store_id": 1, // the ID of the node + "capacity": 1968874332160, // the total capacity + "available": 1264847716352, // the available capacity + "region_count": 1, // the count of Regions in this node + "sending_snap_count": 0, + "receiving_snap_count": 0, + "start_ts": "2016-10-24T19:54:00.110728339+08:00", // the starting timestamp + "last_heartbeat_ts": "2016-10-25T10:52:54.973669928+08:00", // the timestamp of the last heartbeat + "total_region_count": 1, // the count of the total Regions + "leader_region_count": 1, // the count of the Leader Regions + "uptime": "14h58m54.862941589s" + }, + "scores": [ + 100, + 35 + ] + } + ] +} +``` + +## The metrics interface + +You can use this type of interface to monitor the state and performance of the entire cluster. The metrics data is displayed in Prometheus and Grafana. See [Use Prometheus and Grafana](#use-prometheus-and-grafana) for how to set up the monitoring system.
+ +You can get the following metrics for each component: + +### TiDB server + +- query processing time to monitor the latency and throughput + +- the DDL process monitoring + +- TiKV client related monitoring + +- PD client related monitoring + +### PD server + +- the total number of times that the command executes + +- the total number of times that a certain command fails + +- the duration that a command succeeds + +- the duration that a command fails + +- the duration that a command finishes and returns a result + +### TiKV server + +- Garbage Collection (GC) monitoring + +- the total number of times that the TiKV command executes + +- the duration that Scheduler executes commands + +- the total number of Raft propose commands + +- the duration that Raft executes commands + +- the total number of times that Raft commands fail + +- the total number of times that Raft processes the ready state + +## Use Prometheus and Grafana + +### The deployment architecture + +See the following diagram for the deployment architecture: + +![image alt text](../media/monitor-architecture.png) + +> **Note:** You must add the Prometheus Pushgateway addresses to the startup parameters of the TiDB, PD and TiKV components. + +### Set up the monitoring system + +See the following links for your reference: + +- Prometheus Pushgateway: [https://github.com/prometheus/pushgateway](https://github.com/prometheus/pushgateway) + +- Prometheus Server: [https://github.com/prometheus/prometheus#install](https://github.com/prometheus/prometheus#install) + +- Grafana: [http://docs.grafana.org](http://docs.grafana.org/) + +## Configuration + +### Configure TiDB, PD and TiKV + ++ TiDB: Set the two parameters: `--metrics-addr` and `--metrics-interval`. + + - Set the Pushgateway address as the `--metrics-addr` parameter. + - Set the push frequency as the `--metrics-interval` parameter. The unit is s, and the default value is 15.
+ ++ PD: update the toml configuration file with the Pushgateway address and the push frequency: + + ```toml + [metric] + # prometheus client push interval, set "0s" to disable prometheus. + interval = "15s" + # prometheus pushgateway address, leaving it empty disables prometheus. + address = "host:port" + ``` + ++ TiKV: update the toml configuration file with the Pushgateway address and the push frequency. Set the job field to "tikv". + + ```toml + [metric] + # the Prometheus client push interval. Setting the value to 0s stops the Prometheus client from pushing. + interval = "15s" + # the Prometheus pushgateway address. Leaving it empty stops the Prometheus client from pushing. + address = "host:port" + # the Prometheus client push job name. Note: A node id is automatically appended, e.g., "tikv_1". + job = "tikv" + ``` + +### Configure PushServer + +Generally, it does not need to be configured. You can use the default port: 9091. + +### Configure Prometheus + +Add the Pushgateway address to the yaml configuration file: + +```yaml +scrape_configs: + # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. + - job_name: 'TiDB' + + # Override the global default and scrape targets from this job every 5 seconds. + scrape_interval: 5s + + honor_labels: true + + static_configs: + - targets: ['host:port'] # use the Pushgateway address + labels: + group: 'production' +``` + +### Configure Grafana + +#### Create a Prometheus data source + +1. Log in to the Grafana Web interface. + + - The default address is: [http://localhost:3000](http://localhost:3000) + + - The default account name: admin + + - The password for the default account: admin + +2. Click the Grafana logo to open the sidebar menu. + +3. Click "Data Sources" in the sidebar. + +4. Click "Add data source". + +5. Specify the data source information: + + - Specify the name for the data source. + + - For Type, select Prometheus. + + - For Url, specify the Prometheus address.
+ + - Specify other fields as needed. + +6. Click "Add" to save the new data source. + +#### Create a Grafana dashboard + +1. Click the Grafana logo to open the sidebar menu. + +2. On the sidebar menu, click "Dashboards" -> "Import" to open the "Import Dashboard" window. + +3. Click "Upload .json File" to upload a JSON file (download [TiDB Grafana Config](https://grafana.com/tidb)). + +4. Click "Save & Open". + +5. A Prometheus dashboard is created. + diff --git a/v2.0/op-guide/offline-ansible-deployment.md b/v2.0/op-guide/offline-ansible-deployment.md new file mode 100755 index 0000000000000..4657528828c81 --- /dev/null +++ b/v2.0/op-guide/offline-ansible-deployment.md @@ -0,0 +1,157 @@ +--- +title: Deploy TiDB Offline Using Ansible +summary: Use Ansible to deploy a TiDB cluster offline. +category: operations +--- + +# Deploy TiDB Offline Using Ansible + +This guide describes how to deploy a TiDB cluster offline using Ansible. + +## Prepare + +Before you start, make sure that you have: + +1. A download machine + + - The machine must have access to the Internet in order to download TiDB-Ansible, TiDB and related packages. + - For the Linux operating system, it is recommended to install CentOS 7.3 or later. + +2. Several target machines and one Control Machine + + - For system requirements and configuration, see [Prepare the environment](ansible-deployment.md#prerequisites). + - These machines do not need access to the Internet. + +## Step 1: Install system dependencies on the Control Machine + +Take the following steps to install system dependencies on the Control Machine installed with the CentOS 7 system. + +1. Download the [`pip`](https://download.pingcap.org/ansible-system-rpms.el7.tar.gz) offline installation package to the Control Machine.
+ + ``` + # tar -xzvf ansible-system-rpms.el7.tar.gz + # cd ansible-system-rpms.el7 + # chmod u+x install_ansible_system_rpms.sh + # ./install_ansible_system_rpms.sh + ``` + + > **Note:** This offline installation package includes `pip` and `sshpass`, and only supports the CentOS 7 system. + +2. After the installation is finished, you can use `pip -V` to check whether it is successfully installed. + + ```bash + # pip -V + pip 8.1.2 from /usr/lib/python2.7/site-packages (python 2.7) + ``` + + > **Note:** If `pip` is already installed on your system, make sure that the version is 8.1.2 or later. Otherwise, a compatibility error occurs when you install Ansible and its dependencies offline. + +## Step 2: Create the `tidb` user on the Control Machine and generate the SSH key + +See [Create the `tidb` user on the Control Machine and generate the SSH key](ansible-deployment.md#step-2-create-the-tidb-user-on-the-control-machine-and-generate-the-ssh-key). + +## Step 3: Install Ansible and its dependencies offline on the Control Machine + +Currently, the TiDB 2.0 GA version and the master version are compatible with Ansible 2.5. Ansible and the related dependencies are listed in the `tidb-ansible/requirements.txt` file. + +1. Download the [Ansible 2.5 offline installation package](https://download.pingcap.org/ansible-2.5.0-pip.tar.gz). + +2. Install Ansible and its dependencies offline. + + ``` + # tar -xzvf ansible-2.5.0-pip.tar.gz + # cd ansible-2.5.0-pip/ + # chmod u+x install_ansible.sh + # ./install_ansible.sh + ``` + +3. View the version of Ansible. + + After Ansible is installed, you can view the version using `ansible --version`. + + ``` + # ansible --version + ansible 2.5.0 + ``` + +## Step 4: Download TiDB-Ansible and TiDB packages on the download machine + +1. Install Ansible on the download machine. + + Use the following method to install Ansible online on the download machine installed with the CentOS 7 system.
After Ansible is installed, you can view the version using `ansible --version`. + + ```bash + # yum install epel-release + # yum install ansible curl + # ansible --version + ansible 2.5.0 + ``` + > **Note:** Make sure that the version of Ansible is 2.5, otherwise a compatibility issue occurs. + +2. Download TiDB-Ansible. + + Use the following command to download the corresponding version of TiDB-Ansible from the GitHub [TiDB-Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`. The following are examples of downloading various versions, and you can turn to the official team for advice on which version to choose. + + Download the 2.0 version: + + ``` + git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git + ``` + + or + + Download the master version: + + ``` + git clone https://github.com/pingcap/tidb-ansible.git + ``` + +3. Run the `local_prepare.yml` playbook, and download the TiDB binary to the download machine. + + ``` + cd tidb-ansible + ansible-playbook local_prepare.yml + ``` + +4. After running the above command, copy the `tidb-ansible` folder to the `/home/tidb` directory of the Control Machine. The files must be owned by the `tidb` user. + +## Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine + +See [Configure the SSH mutual trust and sudo rules on the Control Machine](ansible-deployment.md#configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine). + +## Step 6: Install the NTP service on the target machines + +See [Install the NTP service on the target machines](ansible-deployment.md#install-the-ntp-service-on-the-target-machines). + +> **Note:** If the time and time zone of all your target machines are the same, the NTP service is on, and it is synchronizing time normally, you can ignore this step. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal).
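A quick way to eyeball the "same time and time zone" condition above is to print both on each target machine and compare the output. This is only a sketch: the `timedatectl` tool may not be present on every system, so a plain `date` fallback is included:

```shell
# Print the current time in UTC (comparable across machines regardless of zone).
date -u +"%Y-%m-%dT%H:%M:%SZ"
# Print the configured time zone; fall back to the abbreviation from date
# if timedatectl is unavailable.
timedatectl 2>/dev/null | grep -i 'time zone' || date +%Z
```

Run it on every target machine; the UTC timestamps should be within a second or two of each other, and the zone line should match everywhere.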
+ +## Step 7: Configure the CPUfreq governor mode on the target machine + +See [Configure the CPUfreq governor mode on the target machine](ansible-deployment.md#configure-the-cpufreq-governor-mode-on-the-target-machine). + +## Step 8: Mount the data disk ext4 filesystem with options on the target machines + +See [Mount the data disk ext4 filesystem with options on the target machines](ansible-deployment.md#mount-the-data-disk-ext4-filesystem-with-options-on-the-target-machines). + +## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster + +See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](ansible-deployment.md#edit-the-inventory.ini-file-to-orchestrate-the-tidb-cluster). + +## Step 10: Deploy the TiDB cluster + +1. You do not need to run the playbook in `ansible-playbook local_prepare.yml`. + +2. You can use the `Report` button on the Grafana Dashboard to generate the PDF file. This function depends on the `fontconfig` package and English fonts. To use this function, download the offline installation package, upload it to the `grafana_servers` machine, and install it. This package includes `fontconfig` and `open-sans-fonts`, and only supports the CentOS 7 system. + + ``` + $ tar -xzvf grafana-font-rpms.el7.tar.gz + $ cd grafana-font-rpms.el7 + $ chmod u+x install_grafana_font_rpms.sh + $ ./install_grafana_font_rpms.sh + ``` + +3. See [Deploy the TiDB cluster](ansible-deployment.md#step-10-deploy-the-tidb-cluster). + +## Test the TiDB cluster + +See [Test the TiDB cluster](ansible-deployment.md#test-the-tidb-cluster). \ No newline at end of file diff --git a/v2.0/op-guide/pd-api-v1.html b/v2.0/op-guide/pd-api-v1.html new file mode 100755 index 0000000000000..4e67e907278d9 --- /dev/null +++ b/v2.0/op-guide/pd-api-v1.html @@ -0,0 +1,6704 @@ + + + + + Placement Driver API + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+
+
+

Placement Driver API

+
+
+
+ +
+ + +
+

Default

+ + + + + + + +
+ +
+
+

pdApiV1BalancersGet

+
+
+ +
+
+ +

+

Get all PD balancers.

+

+
+ +
/pd/api/v1/balancers
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X get -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/balancers"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Balancers result = apiInstance.pdApiV1BalancersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1BalancersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Balancers result = apiInstance.pdApiV1BalancersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1BalancersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1BalancersGetWithCompletionHandler: 
+              ^(Balancers output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1BalancersGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1BalancersGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Balancers result = apiInstance.pdApiV1BalancersGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1BalancersGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1BalancersGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1BalancersGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A balancers object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1ConfigGet

+
+
+ +
+
+ +

+

Get the PD config.

+

+
+ +
/pd/api/v1/config
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/config"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Config result = apiInstance.pdApiV1ConfigGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1ConfigGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Config result = apiInstance.pdApiV1ConfigGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1ConfigGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1ConfigGetWithCompletionHandler: 
+              ^(Config output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1ConfigGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1ConfigGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Config result = apiInstance.pdApiV1ConfigGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1ConfigGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1ConfigGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1ConfigGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A config object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1EventsGet

+
+
+ +
+
+ +

+

Get all PD events.

+

+
+ +
/pd/api/v1/events
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/events"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<LogEvent> result = apiInstance.pdApiV1EventsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1EventsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<LogEvent> result = apiInstance.pdApiV1EventsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1EventsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1EventsGetWithCompletionHandler: 
+              ^(NSArray* output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1EventsGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1EventsGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                var result = apiInstance.pdApiV1EventsGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1EventsGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1EventsGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1EventsGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - An array of event objects.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1LeaderGet

+
+
+ +
+
+ +

+

Get the PD leader.

+

+
+ +
/pd/api/v1/leader
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/leader"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Leader result = apiInstance.pdApiV1LeaderGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1LeaderGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Leader result = apiInstance.pdApiV1LeaderGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1LeaderGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1LeaderGetWithCompletionHandler: 
+              ^(Leader output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1LeaderGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1LeaderGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Leader result = apiInstance.pdApiV1LeaderGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1LeaderGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1LeaderGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1LeaderGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A leader object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1MembersGet

+
+
+ +
+
+ +

+

Get all PD members.

+

+
+ +
/pd/api/v1/members
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/members"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<Member> result = apiInstance.pdApiV1MembersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            List<Member> result = apiInstance.pdApiV1MembersGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1MembersGetWithCompletionHandler: 
+              ^(NSArray* output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1MembersGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1MembersGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                var result = apiInstance.pdApiV1MembersGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1MembersGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1MembersGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1MembersGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - An array of member objects.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1MembersNameDelete

+
+
+ +
+
+ +

+

Delete a PD member.

+

+
+ +
/pd/api/v1/members/{name}
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X DELETE -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/members/{name}"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        String name = "name_example"; // String | The name of the member to delete.
+        try {
+            apiInstance.pdApiV1MembersNameDelete(name);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersNameDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        String name = "name_example"; // String | The name of the member to delete.
+        try {
+            apiInstance.pdApiV1MembersNameDelete(name);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1MembersNameDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+NSString *name = @"name_example"; // The name of the member to delete.
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1MembersNameDeleteWith:name
+              completionHandler: ^(NSError* error) {
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var name = "name_example"; // {String} The name of the member to delete.
+
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully.');
+  }
+};
+api.pdApiV1MembersNameDelete(name, callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1MembersNameDeleteExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+            var name = "name_example";  // String | The name of the member to delete.
+
+            try
+            {
+                apiInstance.pdApiV1MembersNameDelete(name);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1MembersNameDelete: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+$name = "name_example"; // The name of the member to delete.
+
+try {
+    $api_instance->pdApiV1MembersNameDelete($name);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1MembersNameDelete: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + +
Path parameters
+ + + + + + + + + + +
NameDescription
name* + + + +
+
+ + + + + +

Responses

+ +

Status: 200 - Member deleted

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 404 - Member not found

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1RegionIdGet

+
+
+ +
+
+ +

+

Get a TiKV region.

+

+
+ +
/pd/api/v1/region/{id}
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/region/{id}"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the region to get.
+        try {
+            Region result = apiInstance.pdApiV1RegionIdGet(id);
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionIdGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the region to get.
+        try {
+            Region result = apiInstance.pdApiV1RegionIdGet(id);
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionIdGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+Integer *id = 56; // The id of the region to get.
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1RegionIdGetWith:id
+              completionHandler: ^(Region output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var id = 56; // {Integer} The id of the region to get.
+
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1RegionIdGet(id, callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1RegionIdGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+            var id = 56;  // Integer | The id of the region to get.
+
+            try
+            {
+                Region result = apiInstance.pdApiV1RegionIdGet(id);
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1RegionIdGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+$id = 56; // The id of the region to get.
+
+try {
+    $result = $api_instance->pdApiV1RegionIdGet($id);
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1RegionIdGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + +
Path parameters
+ + + + + + + + + + +
NameDescription
id* + + + +
+
+ + + + + +

Responses

+ +

Status: 200 - A region object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1RegionsGet

+
+
+ +
+
+ +

+

Get all TiKV regions.

+

+
+ +
/pd/api/v1/regions
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/regions"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Regions result = apiInstance.pdApiV1RegionsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Regions result = apiInstance.pdApiV1RegionsGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1RegionsGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1RegionsGetWithCompletionHandler: 
+              ^(Regions output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1RegionsGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1RegionsGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Regions result = apiInstance.pdApiV1RegionsGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1RegionsGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+
+try {
+    $result = $api_instance->pdApiV1RegionsGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1RegionsGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + + + + + + +

Responses

+ +

Status: 200 - A regions object.

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + +

Status: 500 - Unexpected error

+ + + + + + + + + +
+ + + +
+ + + + +
+ + +
+ + + + + + + +
+ + + + +
+ + + + + + + + + +
+ +
+ +
+ + + + + + + + +
+ +
+
+

pdApiV1StoreIdDelete

+
+
+ +
+
+ +

+

Delete a TiKV store.

+

+
+ +
/pd/api/v1/store/{id}
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X DELETE -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/store/{id}"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import .DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the store to delete.
+        try {
+            apiInstance.pdApiV1StoreIdDelete(id);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import .DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the store to delete.
+        try {
+            apiInstance.pdApiV1StoreIdDelete(id);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdDelete");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+Integer *id = 56; // The id of the store to delete.
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1StoreIdDeleteWith:id
+              completionHandler: ^(NSError* error) {
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var  = require('');
+
+var api = new .DefaultApi()
+
+var id = 56; // {Integer} The id of the store to delete.
+
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully.');
+  }
+};
+api.pdApiV1StoreIdDelete(id, callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using .Api;
+using .Client;
+using ;
+
+namespace Example
+{
+    public class pdApiV1StoreIdDeleteExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+            var id = 56;  // Integer | The id of the store to delete.
+
+            try
+            {
+                apiInstance.pdApiV1StoreIdDelete(id);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1StoreIdDelete: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+$api_instance = new DefaultApi();
+$id = 56; // The id of the store to delete.
+
+try {
+    $api_instance->pdApiV1StoreIdDelete($id);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1StoreIdDelete: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +

Parameters

+ + + +
Path parameters
+| Name | Description |
+| :--- | :--- |
+| id* (required) | Integer. The id of the store to delete. |
+
+ + + + + +

Responses

+ +

Status: 200 - Store deleted


Status: 500 - unexpected error

+
+

pdApiV1StoreIdGet

+
+
+ +
+
+ +

+

Get a TiKV store.

+

+
+ +
/pd/api/v1/store/{id}
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/store/{id}"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import io.swagger.client.api.DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the store to get.
+        try {
+            Store result = apiInstance.pdApiV1StoreIdGet(id);
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import io.swagger.client.api.DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        Integer id = 56; // Integer | The id of the store to get.
+        try {
+            Store result = apiInstance.pdApiV1StoreIdGet(id);
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoreIdGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+Integer *id = 56; // The id of the store to get.
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1StoreIdGetWith:id
+              completionHandler: ^(Store output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var PdApi = require('pd_api'); // placeholder: use the module name of your generated client
+
+var api = new PdApi.DefaultApi()
+
+var id = 56; // {Integer} The id of the store to get.
+
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1StoreIdGet(id, callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using IO.Swagger.Api;    // IO.Swagger is the swagger-codegen default namespace; adjust to your generated client
+using IO.Swagger.Client;
+using IO.Swagger.Model;
+
+namespace Example
+{
+    public class pdApiV1StoreIdGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+            var id = 56;  // Integer | The id of the store to get.
+
+            try
+            {
+                Store result = apiInstance.pdApiV1StoreIdGet(id);
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1StoreIdGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+$api_instance = new Swagger\Client\Api\DefaultApi(); // swagger-codegen default namespace
+$id = 56; // Integer | The id of the store to get.
+try {
+    $result = $api_instance->pdApiV1StoreIdGet($id);
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1StoreIdGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +
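The same lookup can be done without a generated client; a minimal Python sketch over plain HTTP (the PD address and the response layout are assumptions based on the "A store object" 200 response):

```python
import json
import urllib.request

def get_store(base_url, store_id):
    """Fetch GET /pd/api/v1/store/{id} and return the decoded JSON object."""
    url = "%s/pd/api/v1/store/%d" % (base_url, store_id)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (address is an assumption):
# store = get_store("http://127.0.0.1:2379", 56)
```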

Parameters

+ + + +
Path parameters
+| Name | Description |
+| :--- | :--- |
+| id* (required) | Integer. The id of the store to get. |
+
+ + + + + +

Responses

+ +

Status: 200 - A store object.


Status: 500 - unexpected error

+
+

pdApiV1StoresGet

+
+
+ +
+
+ +

+

Get all TiKV stores.

+

+
+ +
/pd/api/v1/stores
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/stores"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import io.swagger.client.api.DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Stores result = apiInstance.pdApiV1StoresGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoresGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import io.swagger.client.api.DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Stores result = apiInstance.pdApiV1StoresGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1StoresGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1StoresGetWithCompletionHandler: 
+              ^(Stores output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var PdApi = require('pd_api'); // placeholder: use the module name of your generated client
+
+var api = new PdApi.DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1StoresGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using IO.Swagger.Api;    // IO.Swagger is the swagger-codegen default namespace; adjust to your generated client
+using IO.Swagger.Client;
+using IO.Swagger.Model;
+
+namespace Example
+{
+    public class pdApiV1StoresGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Stores result = apiInstance.pdApiV1StoresGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1StoresGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+$api_instance = new Swagger\Client\Api\DefaultApi(); // swagger-codegen default namespace
+try {
+    $result = $api_instance->pdApiV1StoresGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1StoresGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +
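A plain-HTTP alternative to the generated samples above; this Python sketch lists store ids, with the PD address and the exact shape of the "stores object" response treated as assumptions (adjust the field names to your PD version):

```python
import json
import urllib.request

def list_store_ids(base_url):
    """Fetch GET /pd/api/v1/stores and return the ids of all stores."""
    with urllib.request.urlopen(base_url + "/pd/api/v1/stores") as resp:
        body = json.loads(resp.read().decode("utf-8"))
    # "stores" / "store" / "id" field names are assumptions about the payload.
    return [s.get("store", {}).get("id") for s in body.get("stores", [])]

# Example (address is an assumption):
# ids = list_store_ids("http://127.0.0.1:2379")
```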

Parameters

+This endpoint takes no parameters.

Responses

+ +

Status: 200 - A stores object.


Status: 500 - unexpected error

+
+

pdApiV1VersionGet

+
+
+ +
+
+ +

+

Get the PD version.

+

+
+ +
/pd/api/v1/version
+ +

+

Usage and SDK Samples

+

+ + + +
+
+

+curl -X GET -H "apiKey: [[apiKey]]" -H "apiSecret: [[apiSecret]]" "http://localhost/pd/api/v1/version"
+
+
+
+
+ +
+

+import io.swagger.client.*;
+import io.swagger.client.auth.*;
+import io.swagger.client.model.*;
+import io.swagger.client.api.DefaultApi;
+
+import java.io.File;
+import java.util.*;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Version result = apiInstance.pdApiV1VersionGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1VersionGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + +
+

+import io.swagger.client.api.DefaultApi;
+
+public class DefaultApiExample {
+
+    public static void main(String[] args) {
+        DefaultApi apiInstance = new DefaultApi();
+        try {
+            Version result = apiInstance.pdApiV1VersionGet();
+            System.out.println(result);
+        } catch (ApiException e) {
+            System.err.println("Exception when calling DefaultApi#pdApiV1VersionGet");
+            e.printStackTrace();
+        }
+    }
+}
+
+                                                  
+
+ + + + +
+

+
+
+DefaultApi *apiInstance = [[DefaultApi alloc] init];
+
+[apiInstance pdApiV1VersionGetWithCompletionHandler: 
+              ^(Version output, NSError* error) {
+                            if (output) {
+                                NSLog(@"%@", output);
+                            }
+                            if (error) {
+                                NSLog(@"Error: %@", error);
+                            }
+                        }];
+
+                                                    
+
+
+

+var PdApi = require('pd_api'); // placeholder: use the module name of your generated client
+
+var api = new PdApi.DefaultApi()
+
+var callback = function(error, data, response) {
+  if (error) {
+    console.error(error);
+  } else {
+    console.log('API called successfully. Returned data: ' + data);
+  }
+};
+api.pdApiV1VersionGet(callback);
+
+                                                    
+
+ + + +
+

+using System;
+using System.Diagnostics;
+using IO.Swagger.Api;    // IO.Swagger is the swagger-codegen default namespace; adjust to your generated client
+using IO.Swagger.Client;
+using IO.Swagger.Model;
+
+namespace Example
+{
+    public class pdApiV1VersionGetExample
+    {
+        public void main()
+        {
+            
+            var apiInstance = new DefaultApi();
+
+            try
+            {
+                Version result = apiInstance.pdApiV1VersionGet();
+                Debug.WriteLine(result);
+            }
+            catch (Exception e)
+            {
+                Debug.Print("Exception when calling DefaultApi.pdApiV1VersionGet: " + e.Message );
+            }
+        }
+    }
+}
+
+                                                    
+
+ + +
+

+<?php
+$api_instance = new Swagger\Client\Api\DefaultApi(); // swagger-codegen default namespace
+try {
+    $result = $api_instance->pdApiV1VersionGet();
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->pdApiV1VersionGet: ', $e->getMessage(), PHP_EOL;
+}
+
+                                                  
+
+ +
+ + + + + +
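As with the other endpoints, the version can be fetched over plain HTTP; in this Python sketch the PD address and the `"version"` field name are assumptions about the "version object" response:

```python
import json
import urllib.request

def get_pd_version(base_url):
    """Fetch GET /pd/api/v1/version and return the version string."""
    with urllib.request.urlopen(base_url + "/pd/api/v1/version") as resp:
        return json.loads(resp.read().decode("utf-8")).get("version")

# Example (address is an assumption):
# print(get_pd_version("http://127.0.0.1:2379"))
```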

Parameters

+This endpoint takes no parameters.

Responses

+ +

Status: 200 - A version object.

+
+ Generated 2016-09-14T04:08:53.357Z +
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + diff --git a/v2.0/op-guide/recommendation.md b/v2.0/op-guide/recommendation.md new file mode 100755 index 0000000000000..eaa84ee3e4a00 --- /dev/null +++ b/v2.0/op-guide/recommendation.md @@ -0,0 +1,85 @@ +--- +title: Software and Hardware Requirements +summary: Learn the software and hardware requirements for deploying and running TiDB. +category: operations +--- + +# Software and Hardware Requirements + +## About + +As an open source distributed NewSQL database with high performance, TiDB can be deployed in the Intel architecture server and major virtualization environments and runs well. TiDB supports most of the major hardware networks and Linux operating systems. + +## Linux OS version requirements + +| Linux OS Platform | Version | +| :-----------------------:| :----------: | +| Red Hat Enterprise Linux | 7.3 or later | +| CentOS | 7.3 or later | +| Oracle Enterprise Linux | 7.3 or later | +| Ubuntu LTS | 16.04 or later | + +> **Note:** +> +> - For Oracle Enterprise Linux, TiDB supports the Red Hat Compatible Kernel (RHCK) and does not support the Unbreakable Enterprise Kernel provided by Oracle Enterprise Linux. +> - A large number of TiDB tests have been run on the CentOS 7.3 system, and in our community there are a lot of best practices in which TiDB is deployed on the Linux operating system. Therefore, it is recommended to deploy TiDB on CentOS 7.3 or later. +> - The support for the Linux operating systems above includes the deployment and operation in physical servers as well as in major virtualized environments like VMware, KVM and XEM. + +## Server requirements + +You can deploy and run TiDB on the 64-bit generic hardware server platform in the Intel x86-64 architecture. 
The requirements and recommendations about server hardware configuration for development, test and production environments are as follows: + +### Development and test environments + +| Component | CPU | Memory | Local Storage | Network | Instance Number (Minimum Requirement) | +| :------: | :-----: | :-----: | :----------: | :------: | :----------------: | +| TiDB | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with PD) | +| PD | 4 core+ | 8 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB) | +| TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3 | +| | | | | Total Server Number | 4 | + +> **Note**: +> +> - In the test environment, the TiDB and PD can be deployed on the same server. +> - For performance-related test, do not use low-performance storage and network hardware configuration, in order to guarantee the correctness of the test result. + +### Production environment + +| Component | CPU | Memory | Hard Disk Type | Network | Instance Number (Minimum Requirement) | +| :-----: | :------: | :------: | :------: | :------: | :-----: | +| TiDB | 16 core+ | 32 GB+ | SAS | 10 Gigabit network card (2 preferred) | 2 | +| PD | 4 core+ | 8 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 | +| TiKV | 16 core+ | 32 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 | +| Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1 | +| | | | | Total Server Number | 9 | + +> **Note**: +> +> - In the production environment, you can deploy and run TiDB and PD on the same server. If you have a higher requirement for performance and reliability, try to deploy them separately. +> - It is strongly recommended to use higher configuration in the production environment. +> - It is recommended to keep the size of TiKV hard disk within 2 TB if you are using PCI-E SSD disks or within 1.5 TB if you are using regular SSD disks. 
+ +## Network requirements + +As an open source distributed NewSQL database, TiDB requires the following network port configuration to run. Based on the TiDB deployment in actual environments, the administrator can open relevant ports in the network side and host side. + +| Component | Default Port | Description | +| :--:| :--: | :-- | +| TiDB | 4000 | the communication port for the application and DBA tools | +| TiDB | 10080 | the communication port to report TiDB status | +| TiKV | 20160 | the TiKV communication port | +| PD | 2379 | the communication port between TiDB and PD | +| PD | 2380 | the inter-node communication port within the PD cluster | +| Pump | 8250 | the Pump communication port | +| Drainer | 8249 | the Drainer communication port | +| Prometheus | 9090 | the communication port for the Prometheus service| +| Pushgateway | 9091 | the aggregation and report port for TiDB, TiKV, and PD monitor | +| Node_exporter | 9100 | the communication port to report the system information of every TiDB cluster node | +| Blackbox_exporter | 9115 | the Blackbox_exporter communication port, used to monitor the ports in the TiDB cluster | +| Grafana | 3000 | the port for the external Web monitoring service and client (Browser) access| +| Grafana | 8686 | the grafana_collector communication port, used to export the Dashboard as the PDF format | +| Kafka_exporter | 9308 | the Kafka_exporter communication port, used to monitor the binlog Kafka cluster | + +## Web browser requirements + +Based on the Prometheus and Grafana platform, TiDB provides a visual data monitoring solution to monitor the TiDB cluster status. To access the Grafana monitor interface, it is recommended to use a higher version of Microsoft IE, Google Chrome or Mozilla Firefox. 
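The port table above can be sanity-checked from any host before opening firewall rules; this small Python sketch (a subset of the components, with the target host as an assumption) reports which default ports currently accept TCP connections:

```python
import socket

# A subset of the default component ports from the table above.
PORTS = {
    "TiDB": 4000,
    "TiDB status": 10080,
    "TiKV": 20160,
    "PD client": 2379,
    "PD peer": 2380,
    "Prometheus": 9090,
    "Grafana": 3000,
}

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_host(host):
    """Map each component name to whether its default port accepts connections."""
    return {name: port_open(host, port) for name, port in PORTS.items()}
```

A closed port shows up as `False`, which may mean either that the component is not running there or that a firewall drops the connection.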
diff --git a/v2.0/op-guide/security.md b/v2.0/op-guide/security.md new file mode 100755 index 0000000000000..744c09e62c3b3 --- /dev/null +++ b/v2.0/op-guide/security.md @@ -0,0 +1,128 @@ +--- +title: Enable TLS Authentication +summary: Learn how to enable TLS authentication in a TiDB cluster. +category: deployment +--- + +# Enable TLS Authentication + +## Overview + +This document describes how to enable TLS authentication in the TiDB cluster. The TLS authentication includes the following two conditions: + +- The mutual authentication between TiDB components, including the authentication among TiDB, TiKV and PD, between TiKV Control and TiKV, between PD Control and PD, between TiKV peers, and between PD peers. Once enabled, the mutual authentication applies to all components, and it does not support applying to only part of the components. +- The one-way and mutual authentication between the TiDB server and the MySQL Client. + +> **Note:** The authentication between the MySQL Client and the TiDB server uses one set of certificates, while the authentication among TiDB components uses another set of certificates. + +## Enable mutual TLS authentication among TiDB components + +### Prepare certificates + +It is recommended to prepare a separate server certificate for TiDB, TiKV and PD, and make sure that they can authenticate each other. The clients of TiDB, TiKV and PD share one client certificate. + +You can use multiple tools to generate self-signed certificates, such as `openssl`, `easy-rsa ` and `cfssl`. + +See an example of [generating self-signed certificates](generate-self-signed-certificates.md) using `cfssl`. + +### Configure certificates + +To enable mutual authentication among TiDB components, configure the certificates of TiDB, TiKV and PD as follows. + +#### TiDB + +Configure in the configuration file or command line arguments: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs for connection with cluster components. 
+cluster-ssl-ca = "/path/to/ca.pem" +# Path of file that contains X509 certificate in PEM format for connection with cluster components. +cluster-ssl-cert = "/path/to/tidb-server.pem" +# Path of file that contains X509 key in PEM format for connection with cluster components. +cluster-ssl-key = "/path/to/tidb-server-key.pem" +``` + +#### TiKV + +Configure in the configuration file or command line arguments, and set the corresponding URL to https: + +```toml +[security] +# set the path for certificates. Empty string means disabling secure connections. +ca-path = "/path/to/ca.pem" +cert-path = "/path/to/client.pem" +key-path = "/path/to/client-key.pem" +``` + +#### PD + +Configure in the configuration file or command line arguments, and set the corresponding URL to https: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs. If set, following four settings shouldn't be empty +cacert-path = "/path/to/ca.pem" +# Path of file that contains X509 certificate in PEM format. +cert-path = "/path/to/server.pem" +# Path of file that contains X509 key in PEM format. +key-path = "/path/to/server-key.pem" +``` + +Now mutual authentication among TiDB components is enabled. + +When you connect the server using the client, it is required to specify the client certificate. For example: + +```bash +./pd-ctl -u https://127.0.0.1:2379 --cacert /path/to/ca.pem --cert /path/to/pd-client.pem --key /path/to/pd-client-key.pem + +./tikv-ctl --host="127.0.0.1:20160" --ca-path="/path/to/ca.pem" --cert-path="/path/to/client.pem" --key-path="/path/to/clinet-key.pem" +``` + +## Enable TLS authentication between the MySQL client and TiDB server + +### Prepare certificates + +```bash +mysql_ssl_rsa_setup --datadir=certs +``` + +### Configure one-way authentication + +Configure in the configuration file or command line arguments of TiDB: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs. 
+ssl-ca = "" +# Path of file that contains X509 certificate in PEM format. +ssl-cert = "/path/to/certs/server.pem" +# Path of file that contains X509 key in PEM format. +ssl-key = "/path/to/certs/server-key.pem" +``` + +Configure in the MySQL client: + +```bash +mysql -u root --host 127.0.0.1 --port 4000 --ssl-mode=REQUIRED +``` + +### Configure mutual authentication + +Configure in the configuration file or command line arguments of TiDB: + +```toml +[security] +# Path of file that contains list of trusted SSL CAs for connection with mysql client. +ssl-ca = "/path/to/certs/ca.pem" +# Path of file that contains X509 certificate in PEM format for connection with mysql client. +ssl-cert = "/path/to/certs/server.pem" +# Path of file that contains X509 key in PEM format for connection with mysql client. +ssl-key = "/path/to/certs/server-key.pem" +``` + +Specify the client certificate in the client: + +```bash +mysql -u root --host 127.0.0.1 --port 4000 --ssl-cert=/path/to/certs/client-cert.pem --ssl-key=/path/to/certs/client-key.pem --ssl-ca=/path/to/certs/ca.pem --ssl-mode=VERIFY_IDENTITY +``` diff --git a/v2.0/op-guide/tidb-config-file.md b/v2.0/op-guide/tidb-config-file.md new file mode 100755 index 0000000000000..2a1eaaa5340fb --- /dev/null +++ b/v2.0/op-guide/tidb-config-file.md @@ -0,0 +1,224 @@ +--- +title: TiDB Configuration File Description +summary: Learn the TiDB configuration file options that are not involved in command line options. +category: deployment +--- + +# TiDB Configuration File Description + +The TiDB configuration file supports more options than command line options. You can find the default configuration file in [config/config.toml.example](https://github.com/pingcap/tidb/blob/master/config/config.toml.example) and rename it to `config.toml`. + +This document describes the options that are not involved in command line options. For command line options, see [here](configuration.md). 
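Before the per-option sections, here is an illustrative top-level fragment of `config.toml`; the values simply restate the defaults described below and are not recommendations:

```toml
# Illustrative values only; see the option descriptions below.
split-table = true
oom-action = "log"
enable-streaming = false
lower-case-table-names = 2
```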
+ +### `split-table` + +- To create a separate Region for each table +- Default: true +- It is recommended to set it to false if you need to create a large number of tables + +### `oom-action` + +- To specify the operation when out-of-memory occurs in TiDB +- Default: "log" +- The valid options are "log" and "cancel"; "log" only prints the log, without actual processing; "cancel" cancels the operation and outputs the log + +### `enable-streaming` + +- To enable the data fetch mode of streaming in Coprocessor +- Default: false + +### `lower-case-table-names` + +- To configure the value of the `lower_case_table_names` system variable +- Default: 2 +- For details, you can see the [MySQL description](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_lower_case_table_names) of this variable +- Currently, TiDB only supports setting the value of this option to 2. This means it is case-sensitive when you save a table name, but case-insensitive when you compare table names. The comparison is based on the lower case. + +## Log + +Configuration about log. 
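An illustrative `[log]` fragment pulling together the options described below; the file names and values are assumptions for the example, not recommendations:

```toml
[log]
format = "text"
disable-timestamp = false
slow-query-file = "tidb-slow.log"
slow-threshold = 300
expensive-threshold = 10000
query-log-max-len = 2048

[log.file]
filename = "tidb.log"
max-size = 300
max-days = 7
max-backups = 7
log-rotate = true
```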
+ +### `format` + +- To specify the log output format +- The valid options are "json", "text" and "console" +- Default: "text" + +### `disable-timestamp` + +- Whether to disable outputting timestamp in the log +- Default: false +- If you set the value to true, the log does not output timestamp + +### `slow-query-file` + +- The file name of the slow query log +- Default: "" +- After you set it, the slow query log is output to this file separately + +### `slow-threshold` + +- To output the threshold value of consumed time in the slow log +- Default: 300ms +- If the value in a query is larger than the default value, it is a slow query and is output to the slow log + +### `expensive-threshold` + +- To output the threshold value of the number of rows for the `expensive` operation +- Default: 10000 +- When the number of query rows (including the intermediate results based on statistics) is larger than this value, it is an `expensive` operation and outputs log with the `[EXPENSIVE_QUERY]` prefix. 
+ +### `query-log-max-len` + +- The maximum length of SQL output +- Default: 2048 +- When the length of the statement is longer than `query-log-max-len`, the statement is truncated to output + +### log.file + +#### `filename` + +- The file name of the general log file +- Default: "" +- If you set it, the log is output to this file + +#### `max-size` + +- The size limit of the log file +- Default: 300MB +- The maximum size is 4GB + +#### `max-days` + +- The maximum number of days that the log is retained +- Default: 0 +- The log is retained by default; if you set the value, the expired log is cleaned up after `max-days` + +#### `max-backups` + +- The maximum number of retained logs +- Default: 0 +- All the log files are retained by default; if you set it to 7, 7 log files are retained at maximum + +#### `log-rotate` + +- Whether to create a new log file every day +- Default: true +- If you set it to true, a new log file is created every day; if you set it to false, the log is output to a single log file + +## Security + +Configuration about security. + +### `ssl-ca` + +- The file path of the trusted CA certificate in the PEM format +- Default: "" +- If you set this option and `--ssl-cert`, `--ssl-key` at the same time, TiDB authenticates the client certificate based on the list of trusted CAs specified by this option when the client presents the certificate. If the authentication fails, the connection is terminated. +- If you set this option but the client does not present the certificate, the secure connection continues without client certificate authentication. 
+ +### `ssl-cert` + +- The file path of the SSL certificate in the PEM format +- Default: "" +- If you set this option and `--ssl-key` at the same time, TiDB allows (but not forces) the client to securely connect to TiDB using TLS +- If the specified certificate or private key is invalid, TiDB starts as usual but cannot receive secure connection + +### `ssl-key` + +- The file path of the SSL certificate key in the PEM format, that is the private key of the certificate specified by `--ssl-cert` +- Default: "" +- Currently, TiDB does not support loading the private keys protected by passwords + +## Performance + +Configuration about performance. + +### `max-procs` + +- The number of CPUs used by TiDB +- Default: 0 +- The default "0" indicates using all CPUs in the machine; you can also set it to `max-procs`, and then TiDB uses `max-procs` CPUs + +### `stmt-count-limit` + +- The maximum number of statements allowed in a single TiDB transaction +- Default: 5000 +- If a transaction does not roll back or commit after the number of statements exceeds `stmt-count-limit`, TiDB returns the `statement count 5001 exceeds the transaction limitation, autocommit = false` error + +### `tcp-keep-alive` + +- To enable `keepalive` in the TCP layer +- Default: false + +### `retry-limit` + +- The number of retries that TiDB makes when it encounters a `key` conflict or other errors while committing a transaction +- Default: 10 +- If the number of retries exceeds `retry-limit` but the transaction still fails, TiDB returns an error + +### `cross-join` + +- Default: true +- TiDB supports executing the `join` statement without any condition (the `where` field) of both sides tables by default; if you set the value to false, the server refuses to execute when such a `join` statement appears + +### `stats-lease` + +- The time interval between analyzing TiDB statistics and reloading statistics +- Default: 3s + - At intervals of `stats-lease` time, TiDB checks the statistics for updates and 
updates them to the memory if updates exist + - At intervals of `5 * stats-lease` time, TiDB persists the total number of rows generated by DML and the number of modified rows + - At intervals of `stats-lease`, TiDB checks for tables and indexes that need to be automatically analyzed + - At intervals of `stats-lease`, TiDB checks for column statistics that need to be loaded to the memory + +### `run-auto-analyze` + +- Whether TiDB executes automatic analysis +- Default: true + +### `feedback-probability` + +- The probability that TiDB collects the feedback statistics of each query +- Default: 0 +- TiDB collects the feedback of each query at the probability of `feedback-probability`, to update statistics + +## prepared-plan-cache + +The Plan Cache configuration of the `prepare` statement. + +### `enabled` + +- To enable Plan Cache of the `prepare` statement +- Default: false + +### `capacity` + +- The number of cached statements +- Default: 100 + +## tikv-client + +### `grpc-connection-count` + +- The maximum number of connections established with each TiKV +- Default: 16 + +### `commit-timeout` + +- The maximum timeout time when executing a transaction commit +- Default: 41s +- It is required to set this value larger than twice of the Raft election timeout time + +### txn-local-latches + +Configuration about the transaction latch. It is recommended to enable it when many local transaction conflicts occur. + +### `enable` + +- To enable +- Default: false + +### `capacity` + +- The number of slots corresponding to Hash, which automatically adjusts upward to an exponential multiple of 2. Each slot occupies 32 Bytes of memory. If set too small, it might result in slower running speed and poor performance in the scenario where data writing covers a relatively large range (such as importing data). 
+- Default: 1024000 diff --git a/v2.0/op-guide/tidb-v2-upgrade-guide.md b/v2.0/op-guide/tidb-v2-upgrade-guide.md new file mode 100755 index 0000000000000..b60c30985245e --- /dev/null +++ b/v2.0/op-guide/tidb-v2-upgrade-guide.md @@ -0,0 +1,136 @@ +--- +title: TiDB 2.0 Upgrade Guide +summary: Learn how to upgrade from TiDB 1.0/TiDB 2.0 RC version to TiDB 2.0 GA version. +category: deployment +--- + +# TiDB 2.0 Upgrade Guide + +This document describes how to upgrade from TiDB 1.0 or TiDB 2.0 RC version to TiDB 2.0 GA version. + +## Install Ansible and dependencies in the Control Machine + +TiDB-Ansible release-2.0 depends on Ansible 2.4.2 or later, and is compatible with the latest Ansible 2.5. In addition, TiDB-Ansible release-2.0 depends on the Python module: `jinja2>=2.9.6` and `jmespath>=0.9.0`. + +To make it easy to manage dependencies, use `pip` to install Ansible and its dependencies. For details, see [Install Ansible and its dependencies on the Control Machine](ansible-deployment.md#step-4-install-ansible-and-its-dependencies-on-the-control-machine). For offline environment, see [Install Ansible and its dependencies offline on the Control Machine](offline-ansible-deployment.md#step-3-install-ansible-and-its-dependencies-offline-on-the-control-machine). + +After the installation is finished, you can view the version information using the following command: + +```bash +$ ansible --version +ansible 2.5.2 +$ pip show jinja2 +Name: Jinja2 +Version: 2.9.6 +$ pip show jmespath +Name: jmespath +Version: 0.9.0 +``` + +> **Note:** +> +> - You must install Ansible and its dependencies following the above procedures. +> - Make sure that the Jinja2 version is correct, otherwise an error occurs when you start Grafana. +> - Make sure that the jmespath is correct, otherwise an error occurs when you perform a rolling update for TiKV. + +## Download TiDB-Ansible to the Control Machine + +1. 
Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory. + +2. Back up the `tidb-ansible` folder of the TiDB 1.0 or TiDB 2.0 RC version using the following command: + + ``` + $ mv tidb-ansible tidb-ansible-bak + ``` + +3. Download the latest tidb-ansible `release-2.0` branch using the following command. The default folder name is `tidb-ansible`. + + ``` + $ git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git + ``` + +## Edit the `inventory.ini` file and the configuration file + +Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb/tidb-ansible` directory. + +### Edit the `inventory.ini` file + +Edit the `inventory.ini` file. For IP information, see the `/home/tidb/tidb-ansible-bak/inventory.ini` backup file. + +Pay special attention to the following variable configurations. For the meaning of the variables, see [Description of other variables](ansible-deployment.md#edit-other-variables-optional). + +1. Make sure that `ansible_user` is a normal user. For unified privilege management, remote installation using the root user is no longer supported. The default configuration uses the `tidb` user as the SSH remote user and the program running user. + + ``` + ## Connection + # ssh via normal user + ansible_user = tidb + ``` + + You can refer to [How to configure SSH mutual trust and sudo rules on the Control Machine](ansible-deployment.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine) to automatically configure the mutual trust among hosts. + +2. Keep the `process_supervision` variable consistent with that in the previous version. It is recommended to use `systemd` by default.
+ + ``` + # process supervision, [systemd, supervise] + process_supervision = systemd + ``` + + If you need to modify this variable, see [How to modify the supervision method of a process from `supervise` to `systemd`](ansible-deployment.md#how-to-modify-the-supervision-method-of-a-process-from-supervise-to-systemd). Before you upgrade, first use the `/home/tidb/tidb-ansible-bak/` backup directory to modify the supervision method of a process. + +### Edit the configuration file of TiDB cluster components + +If you have previously customized the configuration file of TiDB cluster components, refer to the backup file to modify the corresponding configuration file in `/home/tidb/tidb-ansible/conf`. + +In the TiKV configuration, `end-point-concurrency` is changed to three parameters: `high-concurrency`, `normal-concurrency` and `low-concurrency`. + +``` +readpool: + coprocessor: + # Notice: if CPU_NUM > 8, default thread pool size for coprocessors + # will be set to CPU_NUM * 0.8. + # high-concurrency: 8 + # normal-concurrency: 8 + # low-concurrency: 8 +``` + +For the cluster topology of multiple TiKV instances on a single machine, you need to modify the three parameters above. Recommended configuration: `number of instances * parameter value = number of CPU cores * 0.8`. + +## Download TiDB 2.0 binary to the Control Machine + +Make sure that `tidb_version = v2.0.4` in the `tidb-ansible/inventory.ini` file, and then run the following command to download the TiDB 2.0 binary to the Control Machine: + +``` +$ ansible-playbook local_prepare.yml +``` + +## Perform a rolling update to TiDB cluster components + +``` +$ ansible-playbook rolling_update.yml +``` + +## Perform a rolling update to TiDB monitoring component + +To meet users' demands for mixed deployment, the systemd services of the monitoring components are distinguished by port. + +1. Check the `process_supervision` variable in the `inventory.ini` file.
+ + ``` + # process supervision, [systemd, supervise] + process_supervision = systemd + ``` + + - If `process_supervision = systemd`, to make it compatible with versions earlier than `v2.0.0-rc.6`, you need to run the `migrate_monitor.yml` playbook. + + ``` + $ ansible-playbook migrate_monitor.yml + ``` + + - If `process_supervision = supervise`, you do not need to run the above command. + +2. Perform a rolling update to the TiDB monitoring component using the following command: + + ``` + $ ansible-playbook rolling_update_monitor.yml + ``` \ No newline at end of file diff --git a/v2.0/op-guide/tune-tikv.md b/v2.0/op-guide/tune-tikv.md new file mode 100755 index 0000000000000..f4ba72b72646b --- /dev/null +++ b/v2.0/op-guide/tune-tikv.md @@ -0,0 +1,260 @@ +--- +title: Tune TiKV Performance +summary: Learn how to tune the TiKV parameters for optimal performance. +category: tuning +--- + +# Tune TiKV Performance + +This document describes how to tune the TiKV parameters for optimal performance. + +TiKV uses RocksDB for persistent storage at the bottom level of the TiKV architecture. Therefore, many of the performance parameters are related to RocksDB. +TiKV uses two RocksDB instances: the default RocksDB instance stores KV data, while the Raft RocksDB instance (RaftDB) stores Raft logs. + +TiKV uses the `Column Families` (CF) feature of RocksDB. + +- The default RocksDB instance stores KV data in the `default`, `write` and `lock` CFs. + + - The `default` CF stores the actual data. The corresponding parameters are in `[rocksdb.defaultcf]`. + - The `write` CF stores the version information of Multi-Version Concurrency Control (MVCC) and index-related data. The corresponding parameters are in `[rocksdb.writecf]`. + - The `lock` CF stores the lock information. The system uses the default parameters. + +- The Raft RocksDB (RaftDB) instance stores Raft logs. + + - The `default` CF stores the Raft log. The corresponding parameters are in `[raftdb.defaultcf]`.
Each CF has a separate `block cache` to cache data blocks and accelerate the data reading speed in RocksDB. You can configure the size of the `block cache` by setting the `block-cache-size` parameter. The bigger the `block-cache-size`, the more hot data can be cached and the easier it is to read data, but the more system memory is occupied. + +Each CF also has a separate `write buffer`. You can configure the size by setting the `write-buffer-size` parameter. + +## Parameter specification + +``` +# Log level: trace, debug, info, warn, error, off. +log-level = "info" + +[server] +# Set listening address +# addr = "127.0.0.1:20160" + +# It is recommended to use the default value. +# notify-capacity = 40960 +# messages-per-tick = 4096 + +# Size of thread pool for gRPC +# grpc-concurrency = 4 +# The number of gRPC connections between each TiKV instance +# grpc-raft-conn-num = 10 + +# Most read requests from TiDB are sent to the coprocessor of TiKV. This parameter is used to set the number of threads +# of the coprocessor. If many read requests exist, increase the number of threads and keep the number within that of the +# system CPU cores. For example, for a 32-core machine deployed with TiKV, you can even set this parameter to 30 in +# repeatable read scenarios. If this parameter is not set, TiKV automatically sets it to CPU cores * 0.8. +# end-point-concurrency = 8 + +# Tag the TiKV instances to schedule replicas. +# labels = {zone = "cn-east-1", host = "118", disk = "ssd"} + +[storage] +# The data directory +# data-dir = "/tmp/tikv/store" + +# In most cases, you can use the default value. When importing data, it is recommended to set the parameter to 1024000. +# scheduler-concurrency = 102400 +# This parameter controls the number of write threads. When write operations occur frequently, set this parameter value +# higher.
Run `top -H -p tikv-pid` and if the threads named `sched-worker-pool` are busy, set the value of the +# `scheduler-worker-pool-size` parameter higher to increase the number of write threads. +# scheduler-worker-pool-size = 4 + +[pd] +# PD address +# endpoints = ["127.0.0.1:2379","127.0.0.2:2379","127.0.0.3:2379"] + +[metric] +# The interval of pushing metrics to Prometheus pushgateway +interval = "15s" +# Prometheus pushgateway address +address = "" +job = "tikv" + +[raftstore] +# The default value is true, which means the data is forcibly synced to disk. If the business scenario does not require +# financial-level security, it is recommended to set the value to false to achieve better performance. +sync-log = true + +# Raft RocksDB directory. The default value is the Raft subdirectory of [storage.data-dir]. +# If there are multiple disks on the machine, store the data of Raft RocksDB on a different disk to improve TiKV performance. +# raftdb-dir = "/tmp/tikv/store/raft" + +region-max-size = "384MB" +# The threshold value of Region split +region-split-size = "256MB" +# When the data size in a Region is larger than the threshold value, TiKV checks whether this Region needs a split. +# To reduce the costs of scanning data in the checking process, set the value to 32MB during checking and set it to +# the default value in normal operation. +region-split-check-diff = "32MB" + +[rocksdb] +# The maximum number of threads for RocksDB background tasks. The background tasks include compaction and flush. +# For detailed information about why RocksDB needs compaction, see RocksDB-related materials. When write +# traffic (like the importing data size) is big, it is recommended to enable more threads, but keep the number of +# enabled threads smaller than that of CPU cores. For example, when importing data, for a machine with a 32-core CPU, +# set the value to 28.
+# max-background-jobs = 8 + +# The maximum number of file handles RocksDB can open +# max-open-files = 40960 + +# The file size limit of RocksDB MANIFEST. For more details, see https://github.com/facebook/rocksdb/wiki/MANIFEST +max-manifest-file-size = "20MB" + +# The directory of RocksDB write-ahead logs. If there are two disks on the machine, store the RocksDB data and WAL logs +# on different disks to improve TiKV performance. +# wal-dir = "/tmp/tikv/store" + +# Use the following two parameters to deal with RocksDB archiving WAL. +# For more details, see https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database%3F +# wal-ttl-seconds = 0 +# wal-size-limit = 0 + +# In most cases, set the maximum total size of RocksDB WAL logs to the default value. +# max-total-wal-size = "4GB" + +# Use this parameter to enable or disable the statistics of RocksDB. +# enable-statistics = true + +# Use this parameter to enable the readahead feature during RocksDB compaction. If you are using mechanical disks, it is recommended to set the value to at least 2MB. +# compaction-readahead-size = "2MB" + +[rocksdb.defaultcf] +# The data block size. RocksDB compresses data based on the unit of block. +# Similar to page in other databases, block is the smallest unit cached in block-cache. +block-size = "64KB" + +# The compression mode of each level of RocksDB data. The optional values include no, snappy, zlib, +# bzip2, lz4, lz4hc, and zstd. +# "no:no:lz4:lz4:lz4:zstd:zstd" indicates there is no compression for level0 and level1; the lz4 compression algorithm is +# used from level2 to level4; the zstd compression algorithm is used for level5 and level6. +# "no" means no compression. "lz4" is a compression algorithm with moderate speed and compression ratio. The +# compression ratio of zlib is high. It is friendly to the storage space, but its compression speed is slow. This +# compression occupies many CPU resources.
Choose a compression mode for each machine according to its CPU and I/O resources. +# For example, if you use the compression mode of "no:no:lz4:lz4:lz4:zstd:zstd" and find much I/O pressure on the +# system (run the iostat command and %util lasts 100%, or run the top command and iowait is high) when writing +# (importing) a lot of data while the CPU resources are adequate, you can compress level0 and level1 and exchange CPU +# resources for I/O resources. If you use the compression mode of "no:no:lz4:lz4:lz4:zstd:zstd" and find that the I/O +# pressure of the system is not big when writing a lot of data but CPU resources are inadequate, run the top +# command with the -H option. If you find a lot of bg threads (namely the compaction threads of RocksDB, which perform +# the compression) are running, you can exchange I/O resources for CPU resources and change the compression mode to "no:no:no:lz4:lz4:zstd:zstd". +# In a word, the goal is to make full use of the existing resources of the system and improve TiKV performance +# with the current resources. +compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] + +# The RocksDB memtable size +write-buffer-size = "128MB" + +# The maximum number of memtables. The data written into RocksDB is first recorded in the WAL log, and then inserted +# into memtables. When a memtable reaches the size limit of `write-buffer-size`, it becomes read-only and a new +# memtable is generated to receive new write operations. The flush threads of RocksDB flush the read-only memtable to +# disk as an sst file of level0. `max-background-flushes` controls the maximum number of flush threads. When the +# flush threads are busy and the number of memtables waiting to be flushed to disk reaches the limit +# of `max-write-buffer-number`, RocksDB stalls the new write operation. +# "Stall" is a flow control mechanism of RocksDB.
When importing data, you can set the `max-write-buffer-number` value +# higher, like 10. +max-write-buffer-number = 5 + +# When the number of sst files of level0 reaches the limit of `level0-slowdown-writes-trigger`, RocksDB +# tries to slow down the write operation, because too many sst files of level0 can cause higher read pressure on +# RocksDB. `level0-slowdown-writes-trigger` and `level0-stop-writes-trigger` are for the flow control of RocksDB. +# When the number of sst files of level0 reaches 4 (the default value), the sst files of level0 and the sst files +# of level1 which overlap those of level0 are compacted to relieve the read pressure. +level0-slowdown-writes-trigger = 20 + +# When the number of sst files of level0 reaches the limit of `level0-stop-writes-trigger`, RocksDB stalls the new +# write operation. +level0-stop-writes-trigger = 36 + +# When the level1 data size reaches the limit value of `max-bytes-for-level-base`, the sst files of level1 +# and their overlapping sst files of level2 are compacted. The golden rule for setting `max-bytes-for-level-base` +# is to keep its value roughly equal to the data volume of level0, which reduces unnecessary compaction. For +# example, if the compression mode is +# "no:no:lz4:lz4:lz4:lz4:lz4", the `max-bytes-for-level-base` value is write-buffer-size * 4, because there is no +# compression for level0 and level1 and the trigger condition of compaction for level0 is that the number of the +# sst files reaches 4 (the default value). When both level0 and level1 adopt compression, it is necessary to analyze +# RocksDB logs to know the size of an sst file compressed from a memtable. For example, if the file size is 32MB, +# the proposed value of `max-bytes-for-level-base` is 32MB * 4 = 128MB. +max-bytes-for-level-base = "512MB" + +# The sst file size.
The sst file size of level0 is influenced by `write-buffer-size` and the level0 +# compression algorithm. `target-file-size-base` is used to control the size of a single sst file of level1-level6. +target-file-size-base = "32MB" + +# When the parameter is not configured, TiKV sets the value to 40% of the system memory size. To deploy multiple +# TiKV nodes on one physical machine, configure this parameter explicitly. Otherwise, the OOM problem might occur +# in TiKV. +# block-cache-size = "1GB" + +[rocksdb.writecf] +# Set it the same as `rocksdb.defaultcf.compression-per-level`. +compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] + +# Set it the same as `rocksdb.defaultcf.write-buffer-size`. +write-buffer-size = "128MB" +max-write-buffer-number = 5 +min-write-buffer-number-to-merge = 1 + +# Set it the same as `rocksdb.defaultcf.max-bytes-for-level-base`. +max-bytes-for-level-base = "512MB" +target-file-size-base = "32MB" + +# When this parameter is not configured, TiKV sets this parameter value to 15% of the system memory size. To +# deploy multiple TiKV nodes on a single physical machine, configure this parameter explicitly. The version +# information (MVCC) and the index-related data are recorded in the write CF. In scenarios with many +# single-table indexes, set this parameter value higher. +# block-cache-size = "256MB" + +[raftdb] +# The maximum number of file handles RaftDB can open +# max-open-files = 40960 + +# Configure this parameter to enable or disable the RaftDB statistics information. +# enable-statistics = true + +# Enable the readahead feature in RaftDB compaction. If you are using mechanical disks, it is recommended to set +# this value to at least 2MB. +# compaction-readahead-size = "2MB" + +[raftdb.defaultcf] +# Set it the same as `rocksdb.defaultcf.compression-per-level`.
+compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] + +# Set it the same as `rocksdb.defaultcf.write-buffer-size`. +write-buffer-size = "128MB" +max-write-buffer-number = 5 +min-write-buffer-number-to-merge = 1 + +# Set it the same as `rocksdb.defaultcf.max-bytes-for-level-base`. +max-bytes-for-level-base = "512MB" +target-file-size-base = "32MB" + +# Generally, you can set it from 256MB to 2GB. In most cases, you can use the default value. But if the system +# resources are adequate, you can set it higher. +block-cache-size = "256MB" +``` + + +## TiKV memory usage + +Besides the `block cache` and `write buffer` which occupy the system memory, the system memory is also occupied in the +following scenarios: + ++ Some of the memory is reserved as the system's page cache. + ++ When TiKV processes large queries such as `select * from ...`, it reads data, generates the corresponding data structure in memory, and returns this structure to TiDB. During this process, TiKV occupies some of the memory. + +## Recommended configuration of TiKV + ++ In production environments, it is not recommended to deploy TiKV on a machine with fewer than 8 CPU cores or less than 32GB of memory. + ++ If you demand high write throughput, it is recommended to use a disk with good throughput capacity. + ++ If you demand very low read-write latency, it is recommended to use an SSD with high IOPS. \ No newline at end of file diff --git a/v2.0/overview.md b/v2.0/overview.md new file mode 100755 index 0000000000000..6cc4d2063e5d4 --- /dev/null +++ b/v2.0/overview.md @@ -0,0 +1,118 @@ +--- +title: About TiDB +summary: Learn about what TiDB is, and the key features, architecture and roadmap of TiDB. +category: introduction +--- + +# About TiDB + +## TiDB introduction + +TiDB (pronounced /'taɪdiːbi:/, "tai-D-B"; etymology: titanium) is an open-source distributed scalable Hybrid Transactional and Analytical Processing (HTAP) database.
It features infinite horizontal scalability, strong consistency, and high availability. TiDB is MySQL compatible and serves as a one-stop data warehouse for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads. + +- __Horizontal scalability__ + + TiDB provides horizontal scalability simply by adding new nodes. Never worry about infrastructure capacity ever again. + +- __MySQL compatibility__ + + Easily replace MySQL with TiDB to power your applications without changing a single line of code in most cases and still benefit from the MySQL ecosystem. + +- __Distributed transaction__ + + TiDB is your source of truth, guaranteeing ACID compliance, so your data is accurate and reliable anytime, anywhere. + +- __Cloud Native__ + + TiDB is designed to work in the cloud -- public, private, or hybrid -- making deployment, provisioning, and maintenance drop-dead simple. + +- __No more ETL__ + + ETL (Extract, Transform and Load) is no longer necessary with TiDB's hybrid OLTP/OLAP architecture, enabling you to create new value for your users more easily and quickly. + +- __High availability__ + + With TiDB, your data and applications are always on and continuously available, so your users are never disappointed. + +TiDB is designed to support both OLTP and OLAP scenarios. For complex OLAP scenarios, use [TiSpark](tispark/tispark-user-guide.md). + +Read the following three articles to understand the core technologies of TiDB: + +- [Data Storage](https://pingcap.github.io/blog/2017/07/11/tidbinternal1/) +- [Computing](https://pingcap.github.io/blog/2017/07/11/tidbinternal2/) +- [Scheduling](https://pingcap.github.io/blog/2017/07/20/tidbinternal3/) + +## Roadmap + +Read the [Roadmap](https://github.com/pingcap/docs/blob/master/ROADMAP.md).
+ +## Connect with us + +- **Twitter**: [@PingCAP](https://twitter.com/PingCAP) +- **Reddit**: https://www.reddit.com/r/TiDB/ +- **Stack Overflow**: https://stackoverflow.com/questions/tagged/tidb +- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user) + +## TiDB architecture + +To better understand TiDB's features, you need to understand the TiDB architecture. The TiDB cluster includes three key components: the TiDB server, the PD server, and the TiKV server. In addition, TiDB also provides the [TiSpark](https://github.com/pingcap/tispark/) component for complex OLAP requirements. + +![image alt text](media/tidb-architecture.png) + +### TiDB server + +The TiDB server is in charge of the following operations: + +1. Receiving the SQL requests + +2. Processing SQL-related logic + +3. Locating the TiKV address for storing and computing data through Placement Driver (PD) + +4. Exchanging data with TiKV + +5. Returning the result + +The TiDB server is stateless. It does not store data and it is for computing only. The TiDB server is horizontally scalable and provides a unified interface to the outside through load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5. + +### Placement Driver server + +The Placement Driver (PD) server is the managing component of the entire cluster and is in charge of the following three operations: + +1. Storing the metadata of the cluster, such as the Region location of a specific key. + +2. Scheduling and load balancing Regions in the TiKV cluster, including but not limited to data migration and Raft group leader transfer. + +3. Allocating the transaction ID that is globally unique and monotonically increasing. + +As a cluster, PD needs to be deployed to an odd number of nodes; it is usually recommended to deploy at least 3 online nodes. + +### TiKV server + +The TiKV server is responsible for storing data.
From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range, which is a left-closed, right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes is scheduled by PD. Region is also the basic unit for scheduling load balancing. + +### TiSpark + +TiSpark deals with complex OLAP requirements. TiSpark makes Spark SQL run directly on the storage layer of the TiDB cluster, combines the advantages of the distributed TiKV cluster, and integrates into the big data ecosystem. With TiSpark, TiDB can support both OLTP and OLAP scenarios in one cluster, so the users never need to worry about data synchronization. + +## Features + +### Horizontal scalability + +Horizontal scalability is the most important feature of TiDB. The scalability includes two aspects: the computing capability and the storage capacity. The TiDB server processes the SQL requests. As the business grows, the overall processing capability and throughput can be increased by simply adding more TiDB server nodes. Data is stored in TiKV. As the size of the data grows, the storage can be scaled by adding more TiKV server nodes. PD schedules data in Regions among the TiKV nodes and migrates part of the data to a newly added node. So in the early stage, you can deploy only a few service instances. For example, it is recommended to deploy at least 3 TiKV nodes, 3 PD nodes and 2 TiDB nodes. As the business grows, more TiDB and TiKV instances can be added on demand. + +### High availability + +High availability is another important feature of TiDB.
All of the three components, TiDB, TiKV and PD, can tolerate the failure of some instances without impacting the availability of the entire cluster. For each component, see the following for details about the availability, the consequence of a single instance failure, and how to recover. + +#### TiDB + +TiDB is stateless and it is recommended to deploy at least two instances. The front end provides services to the outside through load balancing components. If one of the instances is down, the sessions on that instance are impacted. From the application's point of view, it is a single request failure, but the service can be regained by reconnecting to the TiDB server. If a single instance is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### PD + +PD is a cluster and the data consistency is ensured using the Raft protocol. If an instance is down but the instance is not a Raft Leader, there is no impact on the service at all. If the instance is a Raft Leader, a new Leader is elected to recover the service. During the election, which takes approximately 3 seconds, PD cannot provide services. It is recommended to deploy three instances. If one of the instances is down, the service can be recovered by restarting the instance or by deploying a new one. + +#### TiKV + +TiKV is a cluster and the data consistency is ensured using the Raft protocol. The number of replicas is configurable and the default is 3 replicas. The load of TiKV servers is balanced through PD. If one of the nodes is down, all the Regions on the node are impacted. If the failed node is the Leader of a Region, the service is interrupted and a new election is initiated. If the failed node is a Follower of a Region, the service is not impacted. If a TiKV node is down for a period of time (the default is 30 minutes), PD moves its data to other TiKV nodes.
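The failure tolerance described above is Raft majority arithmetic: a group of n replicas stays available as long as a majority (floor(n/2) + 1) of them survives, so it tolerates floor((n - 1) / 2) failed replicas. A minimal sketch of this calculation (the function name is illustrative, not part of TiDB):

```shell
# A Raft group of n replicas needs a surviving majority, floor(n/2) + 1,
# so it tolerates floor((n - 1) / 2) failed replicas.
tolerated_failures() { echo $(( ($1 - 1) / 2 )); }

tolerated_failures 3   # -> 1: a 3-instance PD cluster or the default 3 TiKV replicas
tolerated_failures 5   # -> 2: five replicas survive two simultaneous node failures
```

This is why a 3-instance PD cluster and the default 3 TiKV replicas keep serving after a single node failure, while losing a majority makes the Raft group unavailable until replicas are restored.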
diff --git a/v2.0/releases/101.md b/v2.0/releases/101.md new file mode 100755 index 0000000000000..e9a8fba415763 --- /dev/null +++ b/v2.0/releases/101.md @@ -0,0 +1,23 @@ +--- +title: TiDB 1.0.1 Release Notes +category: Releases +--- + +# TiDB 1.0.1 Release Notes + +On November 1, 2017, TiDB 1.0.1 is released with the following updates: + +## TiDB: + + - Support canceling DDL Job. + - Optimize the `IN` expression. + - Correct the result type of the `Show` statement. + - Support logging slow queries into a separate log file. + - Fix bugs. + +## TiKV: + + - Support flow control with write bytes. + - Reduce Raft allocation. + - Increase coprocessor stack size to 10MB. + - Remove the useless log from the coprocessor. diff --git a/v2.0/releases/102.md b/v2.0/releases/102.md new file mode 100755 index 0000000000000..2a0a4865d2c69 --- /dev/null +++ b/v2.0/releases/102.md @@ -0,0 +1,29 @@ +--- +title: TiDB 1.0.2 Release Notes +category: Releases +--- + +# TiDB 1.0.2 Release Notes + +On November 13, 2017, TiDB 1.0.2 is released with the following updates: + +## TiDB: + + - Optimize the cost estimation of index point query + - Support the `Alter Table Add Column (ColumnDef ColumnPosition)` syntax + - Optimize the queries whose `where` conditions are contradictory + - Optimize the `Add Index` operation to rectify the progress and reduce repetitive operations + - Optimize the `Index Lookup Join` operator to accelerate the query speed for small data sizes + - Fix the issue with prefix index judgment + +## Placement Driver (PD): + + - Improve the stability of scheduling under exceptional situations + +## TiKV: + + - Support splitting table to ensure one region does not contain data from multiple tables + - Limit the length of a key to be no more than 4 KB + - More accurate read traffic statistics + - Implement deep protection on the coprocessor stack + - Fix the `LIKE` behavior and the `do_div_mod` bug diff --git a/v2.0/releases/103.md b/v2.0/releases/103.md new file mode 100755 index
0000000000000..c1924e388bd98 --- /dev/null +++ b/v2.0/releases/103.md @@ -0,0 +1,33 @@ +--- +title: TiDB 1.0.3 Release Notes +category: Releases +--- + +# TiDB 1.0.3 Release Notes + +On November 28, 2017, TiDB 1.0.3 is released with the following updates: + +## TiDB + +- [Optimize the performance in transaction conflicts scenario](https://github.com/pingcap/tidb/pull/5051) +- [Add the `TokenLimit` option in the config file](https://github.com/pingcap/tidb/pull/5107) +- [Output the default database in slow query logs](https://github.com/pingcap/tidb/pull/5107) +- [Remove the DDL statement from query duration metrics](https://github.com/pingcap/tidb/pull/5107) +- [Optimize the query cost estimation](https://github.com/pingcap/tidb/pull/5140) +- [Fix the index prefix issue when creating tables](https://github.com/pingcap/tidb/pull/5149) +- [Support pushing down the expressions for the Float type to TiKV](https://github.com/pingcap/tidb/pull/5153) +- [Fix the issue that it is slow to add index for tables with discrete integer primary index](https://github.com/pingcap/tidb/pull/5155) +- [Reduce the unnecessary statistics updates](https://github.com/pingcap/tidb/pull/5164) +- [Fix a potential issue during the transaction retry](https://github.com/pingcap/tidb/pull/5219) + +## PD + +- Support adding more types of schedulers using API + +## TiKV + +- Fix the deadlock issue with the PD client +- Fix the issue that the wrong leader value is prompted for `NotLeader` +- Fix the issue that the chunk size is too large in the coprocessor + +To upgrade from 1.0.2 to 1.0.3, follow the rolling upgrade order of PD -> TiKV -> TiDB. 
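The PD -> TiKV -> TiDB order can be scripted on the Control Machine. A minimal sketch, assuming your tidb-ansible version supports per-component `--tags` filters on `rolling_update.yml`; the `echo` keeps this a dry run, so remove it to actually execute the playbooks:

```shell
#!/bin/sh
# Roll the components in the documented order: PD first, then TiKV, then TiDB.
for component in pd tikv tidb; do
  echo ansible-playbook rolling_update.yml --tags="$component"
done
```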
diff --git a/v2.0/releases/104.md b/v2.0/releases/104.md new file mode 100755 index 0000000000000..ed509c9194ed5 --- /dev/null +++ b/v2.0/releases/104.md @@ -0,0 +1,24 @@ +--- +title: TiDB 1.0.4 Release Notes +category: Releases +--- + +# TiDB 1.0.4 Release Notes + +On December 11, 2017, TiDB 1.0.4 is released with the following updates: + +## TiDB + +- [Speed up the loading of the statistics when starting the `tidb-server`](https://github.com/pingcap/tidb/pull/5362) +- [Improve the performance of the `show variables` statement](https://github.com/pingcap/tidb/pull/5363) +- [Fix a potential issue when using the `Add Index` statement to handle the combined indexes](https://github.com/pingcap/tidb/pull/5323) +- [Fix a potential issue when using the `Rename Table` statement to move a table to another database](https://github.com/pingcap/tidb/pull/5314) +- [Make the `Alter/Drop User` statement take effect faster](https://github.com/pingcap/tidb/pull/5226) + +## TiKV + +- [Fix a possible performance issue when a snapshot is applied](https://github.com/pingcap/tikv/pull/2559) +- [Fix the performance issue for reverse scan after removing a lot of data](https://github.com/pingcap/tikv/pull/2559) +- [Fix the wrong encoded result for the Decimal type under special circumstances](https://github.com/pingcap/tikv/pull/2571) + +To upgrade from 1.0.3 to 1.0.4, follow the rolling upgrade order of PD -> TiKV -> TiDB.
diff --git a/v2.0/releases/105.md b/v2.0/releases/105.md new file mode 100755 index 0000000000000..dc2a7917f7400 --- /dev/null +++ b/v2.0/releases/105.md @@ -0,0 +1,33 @@ +--- +title: TiDB 1.0.5 Release Notes +category: Releases +--- + +# TiDB 1.0.5 Release Notes + +On December 26, 2017, TiDB 1.0.5 is released with the following updates: + +## TiDB + +- [Add the max value for the current Auto_Increment ID in the `Show Create Table` statement.](https://github.com/pingcap/tidb/pull/5489) +- [Fix a potential goroutine leak.](https://github.com/pingcap/tidb/pull/5486) +- [Support outputting slow queries into a separate file.](https://github.com/pingcap/tidb/pull/5484) +- [Load the `TimeZone` variable from TiKV when creating a new session.](https://github.com/pingcap/tidb/pull/5479) +- [Support the schema state check so that the `Show Create Table` and `Analyze` statements process the public table/index only.](https://github.com/pingcap/tidb/pull/5474) +- [Make the `set transaction read only` statement affect the `tx_read_only` variable.](https://github.com/pingcap/tidb/pull/5491) +- [Clean up incremental statistic data when rolling back.](https://github.com/pingcap/tidb/pull/5391) +- [Fix the issue of missing index length in the `Show Create Table` statement.](https://github.com/pingcap/tidb/pull/5421) + +## PD + +- Fix the issue that the leaders stop balancing under some circumstances. + - [869](https://github.com/pingcap/pd/pull/869) + - [874](https://github.com/pingcap/pd/pull/874) +- [Fix potential panic during bootstrapping.](https://github.com/pingcap/pd/pull/889) + +## TiKV + +- Fix the issue that it is slow to get the CPU ID using the [`get_cpuid`](https://github.com/pingcap/tikv/pull/2611) function. +- Support the [`dynamic-level-bytes`](https://github.com/pingcap/tikv/pull/2605) parameter to improve the space reclamation situation. + +To upgrade from 1.0.4 to 1.0.5, follow the rolling upgrade order of PD -> TiKV -> TiDB.
diff --git a/v2.0/releases/106.md b/v2.0/releases/106.md new file mode 100755 index 0000000000000..04f15323f341b --- /dev/null +++ b/v2.0/releases/106.md @@ -0,0 +1,27 @@ +--- +title: TiDB 1.0.6 Release Notes +category: Releases +--- + +# TiDB 1.0.6 Release Notes + +On January 08, 2018, TiDB 1.0.6 is released with the following updates: + +## TiDB: + +- [Support the `Alter Table Auto_Increment` syntax](https://github.com/pingcap/tidb/pull/5511) +- [Fix the bug in Cost Based computation and the `Null Json` issue in statistics](https://github.com/pingcap/tidb/pull/5556) +- [Support the extension syntax to shard the implicit row ID to avoid write hot spot for a single table](https://github.com/pingcap/tidb/pull/5559) +- [Fix a potential DDL issue](https://github.com/pingcap/tidb/pull/5562) +- [Consider the timezone setting in the `curtime`, `sysdate` and `curdate` functions](https://github.com/pingcap/tidb/pull/5564) +- [Support the `SEPARATOR` syntax in the `GROUP_CONCAT` function](https://github.com/pingcap/tidb/pull/5569) +- [Fix the wrong return type issue of the `GROUP_CONCAT` function.](https://github.com/pingcap/tidb/pull/5582) + +## PD: +- [Fix store selection problem of hot-region scheduler](https://github.com/pingcap/pd/pull/898) + +## TiKV: + +None. + +To upgrade from 1.0.5 to 1.0.6, follow the rolling upgrade order of PD -> TiKV -> TiDB. 
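The "shard the implicit row ID" item above avoids a single-table write hot spot by stamping a shard prefix into the high bits of the auto-allocated row ID, so that sequentially allocated IDs land in different key ranges instead of one hot Region. A hedged sketch of the idea (the function and bit layout here are illustrative, not TiDB's exact encoding):

```python
def shard_row_id(row_id, shard, shard_bits=4):
    """Place a shard prefix in the high bits (just below the sign bit of a
    64-bit integer) of an auto-allocated row ID. Illustrative sketch only;
    not TiDB's exact encoding."""
    shift = 64 - 1 - shard_bits  # keep the sign bit clear
    assert 0 <= shard < (1 << shard_bits)
    assert 0 <= row_id < (1 << shift)
    return (shard << shift) | row_id

# Sequential IDs with rotating shards now differ in their high bits,
# scattering consecutive inserts across four key ranges:
ids = [shard_row_id(i, shard=i % 4, shard_bits=2) for i in range(8)]
```

Within each shard the IDs remain monotonic, so uniqueness is preserved while consecutive writes no longer pile onto the tail of a single range.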
diff --git a/v2.0/releases/107.md b/v2.0/releases/107.md new file mode 100755 index 0000000000000..d0037bc804368 --- /dev/null +++ b/v2.0/releases/107.md @@ -0,0 +1,39 @@ +--- +title: TiDB 1.0.7 Release Notes +category: Releases +--- + +# TiDB 1.0.7 Release Notes + +On January 22, 2018, TiDB 1.0.7 is released with the following updates: + +## TiDB: + +- [Optimize the `FIELD_LIST` command](https://github.com/pingcap/tidb/pull/5679) +- [Fix data race of the information schema](https://github.com/pingcap/tidb/pull/5676) +- [Avoid adding read-only statements to history](https://github.com/pingcap/tidb/pull/5661) +- [Add the `session` variable to control the log query](https://github.com/pingcap/tidb/pull/5659) +- [Fix the resource leak issue in statistics](https://github.com/pingcap/tidb/pull/5657) +- [Fix the goroutine leak issue](https://github.com/pingcap/tidb/pull/5624) +- [Add schema info API for the http status server](https://github.com/pingcap/tidb/pull/5256) +- [Fix an issue about `IndexJoin`](https://github.com/pingcap/tidb/pull/5623) +- [Update the behavior when `RunWorker` is false in DDL](https://github.com/pingcap/tidb/pull/5604) +- [Improve the stability of test results in statistics](https://github.com/pingcap/tidb/pull/5609) +- [Support `PACK_KEYS` syntax for the `CREATE TABLE` statement](https://github.com/pingcap/tidb/pull/5602) +- [Add `row_id` column for the null pushdown schema to optimize performance](https://github.com/pingcap/tidb/pull/5447) + +## PD: + +- [Fix possible scheduling loss issue in abnormal conditions](https://github.com/pingcap/pd/pull/921) +- [Fix the compatibility issue with proto3](https://github.com/pingcap/pd/pull/919) +- [Add the log](https://github.com/pingcap/pd/pull/917) + +## TiKV: + +- [Support `Table Scan`](https://github.com/pingcap/tikv/pull/2657) +- [Support the remote mode in tikv-ctl](https://github.com/pingcap/tikv/pull/2377) +- [Fix the format compatibility issue of tikv-ctl 
proto](https://github.com/pingcap/tikv/pull/2668) +- [Fix the loss of scheduling command from PD](https://github.com/pingcap/tikv/pull/2669) +- [Add timeout in Push metric](https://github.com/pingcap/tikv/pull/2686) + +To upgrade from 1.0.6 to 1.0.7, follow the rolling upgrade order of PD -> TiKV -> TiDB. \ No newline at end of file diff --git a/v2.0/releases/108.md b/v2.0/releases/108.md new file mode 100755 index 0000000000000..c972795f4fbaf --- /dev/null +++ b/v2.0/releases/108.md @@ -0,0 +1,32 @@ +--- +title: TiDB 1.0.8 Release Notes +category: Releases +--- + +# TiDB 1.0.8 Release Notes + +On February 11, 2018, TiDB 1.0.8 is released with the following updates: + +## TiDB: +- [Fix issues in the `Outer Join` result in some scenarios](https://github.com/pingcap/tidb/pull/5712) +- [Optimize the performance of the `InsertIntoIgnore` statement](https://github.com/pingcap/tidb/pull/5738) +- [Fix the issue in the `ShardRowID` option](https://github.com/pingcap/tidb/pull/5751) +- [Add a configurable limit (5000 by default) on the number of DML statements within a transaction](https://github.com/pingcap/tidb/pull/5754) +- [Fix an issue in the Table/Column aliases returned by the `Prepare` statement](https://github.com/pingcap/tidb/pull/5776) +- [Fix an issue in updating statistics delta](https://github.com/pingcap/tidb/pull/5787) +- [Fix a panic error in the `Drop Column` statement](https://github.com/pingcap/tidb/pull/5805) +- [Fix a DML issue when running the `Add Column After` statement](https://github.com/pingcap/tidb/pull/5818) +- [Improve the stability of the GC process by ignoring the regions with GC errors](https://github.com/pingcap/tidb/pull/5815) +- [Run GC concurrently to accelerate the GC process](https://github.com/pingcap/tidb/pull/5850) +- [Provide syntax support for the `CREATE INDEX` statement](https://github.com/pingcap/tidb/pull/5853) + +## PD: +- [Reduce the lock overhead of the region heartbeats](https://github.com/pingcap/pd/pull/932) 
+- [Fix the issue that a hot region scheduler selects the wrong Leader](https://github.com/pingcap/pd/pull/939) + +## TiKV: +- [Use `DeleteFilesInRanges` to clear stale data and improve the TiKV starting speed](https://github.com/pingcap/tikv/pull/2740) +- [Use `Decimal` in Coprocessor sum](https://github.com/pingcap/tikv/pull/2754) +- [Forcibly sync the metadata of the received Snapshot to ensure its safety](https://github.com/pingcap/tikv/pull/2758) + +To upgrade from 1.0.7 to 1.0.8, follow the rolling upgrade order of PD -> TiKV -> TiDB. diff --git a/v2.0/releases/11alpha.md b/v2.0/releases/11alpha.md new file mode 100755 index 0000000000000..dd629ac6fd680 --- /dev/null +++ b/v2.0/releases/11alpha.md @@ -0,0 +1,52 @@ +--- +title: TiDB 1.1 Alpha Release Notes +category: Releases +--- + +# TiDB 1.1 Alpha Release Notes + +On January 19, 2018, TiDB 1.1 Alpha is released. This release has great improvement in MySQL compatibility, SQL optimization, stability, and performance. + +## TiDB: + +- SQL parser + - Support more syntax +- SQL query optimizer + - Use more compact structure to reduce statistics info memory usage + - Speed up loading statistics info when starting tidb-server + - Provide more accurate query cost evaluation + - Use `Count-Min Sketch` to estimate the cost of queries using unique index more accurately + - Support more complex conditions to make full use of index +- SQL executor + - Refactor all executor operators using Chunk architecture, improve the execution performance of analytical statements and reduce memory usage + - Optimize performance of the `INSERT IGNORE` statement + - Push down more types and functions to TiKV + - Support more `SQL_MODE` + - Optimize the `Load Data` performance to increase the speed by 10 times + - Optimize the `Use Database` performance + - Support statistics on the memory usage of physical operators +- Server + - Support the PROXY protocol + +## PD: + +- Add more APIs +- Support TLS +- Add more cases for 
scheduling Simulator +- Schedule to adapt to different Region sizes +- Fix some bugs about scheduling + +## TiKV: + +- Support Raft learner +- Optimize Raft Snapshot and reduce the I/O overhead +- Support TLS +- Optimize the RocksDB configuration to improve performance +- Optimize `count (*)` and query performance of unique index in Coprocessor +- Add more failpoints and stability test cases +- Solve the reconnection issue between PD and TiKV +- Enhance the features of the data recovery tool `tikv-ctl` +- Support splitting according to table in Region +- Support the `Delete Range` feature +- Support setting the I/O limit caused by snapshot +- Improve the flow control mechanism \ No newline at end of file diff --git a/v2.0/releases/11beta.md b/v2.0/releases/11beta.md new file mode 100755 index 0000000000000..e2dc6695e56e5 --- /dev/null +++ b/v2.0/releases/11beta.md @@ -0,0 +1,49 @@ +--- +title: TiDB 1.1 Beta Release Notes +category: Releases +--- + +# TiDB 1.1 Beta Release Notes + +On February 24, 2018, TiDB 1.1 Beta is released. This release has great improvement in MySQL compatibility, SQL optimization, stability, and performance. 
+ +## TiDB: + +- Add more monitoring metrics and refine the log +- Compatible with more MySQL syntax +- Support displaying the table creating time in `information_schema` +- Optimize queries containing the `MaxOneRow` operator +- Configure the size of intermediate result sets generated by Join, to further reduce the memory used by Join +- Add the `tidb_config` session variable to output the current TiDB configuration +- Fix the panic issue in the `Union` and `Index Join` operators +- Fix the wrong result issue of the `Sort Merge Join` operator in some scenarios +- Fix the issue that the `Show Index` statement shows indexes that are in the process of adding +- Fix the failure of the `Drop Stats` statement +- Optimize the query performance of the SQL engine to improve the test result of the Sysbench Select/OLTP by 10% +- Improve the computing speed of subqueries in the optimizer using the new execution engine; compared with TiDB 1.0, TiDB 1.1 Beta has great improvement in tests like TPC-H and TPC-DS + +## PD: + +- Add the Drop Region debug interface +- Support setting priority of the PD leader +- Support configuring stores with a specific label not to schedule Raft leaders +- Add the interfaces to enumerate the health status of each PD +- Add more metrics +- Keep the PD leader and the etcd leader together as much as possible in the same node +- Improve the priority and speed of restoring data when TiKV goes down +- Enhance the validity check of the `data-dir` configuration item +- Optimize the performance of Region heartbeat +- Fix the issue that hot spot scheduling violates label constraint +- Fix other stability issues + +## TiKV: + +- Traverse locks using offset + limit to avoid potential GC problems +- Support resolving locks in batches to improve GC speed +- Support GC concurrency to improve GC speed +- Update the Region size using the RocksDB compaction listener for more accurate PD scheduling +- Delete the outdated data in batches using `DeleteFilesInRanges`, 
to make TiKV start faster +- Configure the Raft snapshot max size to avoid the retained files taking up too much space +- Support more recovery operations in `tikv-ctl` +- Optimize the ordered flow aggregation operation +- Improve metrics and fix bugs \ No newline at end of file diff --git a/v2.0/releases/2.0ga.md b/v2.0/releases/2.0ga.md new file mode 100755 index 0000000000000..417e5da5d66ff --- /dev/null +++ b/v2.0/releases/2.0ga.md @@ -0,0 +1,155 @@ +--- +title: TiDB 2.0 Release Notes +category: Releases +--- + +# TiDB 2.0 Release Notes + +On April 27, 2018, TiDB 2.0 GA is released! Compared with TiDB 1.0, this release has great improvement in MySQL compatibility, SQL optimizer, executor, and stability. + +## TiDB + +- SQL Optimizer + - Use more compact data structure to reduce the memory usage of statistics information + - Speed up loading statistics information when starting a tidb-server process + - Support updating statistics information dynamically [experimental] + - Optimize the cost model to provide more accurate query cost evaluation + - Use `Count-Min Sketch` to estimate the cost of point queries more accurately + - Support analyzing more complex conditions to make full use of indexes + - Support manually specifying the `Join` order using the `STRAIGHT_JOIN` syntax + - Use the Stream Aggregation operator when the `GROUP BY` clause is empty to improve the performance + - Support using indexes for the `MAX/MIN` function + - Optimize the processing algorithms for correlated subqueries to support decorrelating more types of correlated subqueries and transform them to `Left Outer Join` + - Extend `IndexLookupJoin` to be used in matching the index prefix +- SQL Execution Engine + - Refactor all operators using the Chunk architecture, improve the execution performance of analytical queries, and reduce memory usage. There is a significant improvement in the TPC-H benchmark result. 
+ - Support the Streaming Aggregation operators pushdown + - Optimize the `Insert Into Ignore` statement to improve the performance by over 10 times + - Optimize the `Insert On Duplicate Key Update` statement to improve the performance by over 10 times + - Optimize `Load Data` to improve the performance by over 10 times + - Push down more data types and functions to TiKV + - Support computing the memory usage of physical operators, and specifying the processing behavior in the configuration file and system variables when the memory usage exceeds the threshold + - Support limiting the memory usage by a single SQL statement to reduce the risk of OOM + - Support using implicit RowID in CRUD operations + - Improve the performance of point queries +- Server + - Support the Proxy Protocol + - Add more monitoring metrics and refine the log + - Support validating the configuration files + - Support obtaining the information of TiDB parameters through HTTP API + - Resolve Lock in the Batch mode to speed up garbage collection + - Support multi-threaded garbage collection + - Support TLS +- Compatibility + - Support more MySQL syntaxes + - Support modifying the `lower_case_table_names` system variable in the configuration file to support the OGG data synchronization tool + - Improve compatibility with the Navicat management tool + - Support displaying the table creating time in `Information_Schema` + - Fix the issue that the return types of some functions/expressions differ from MySQL + - Improve compatibility with JDBC + - Support more SQL Modes +- DDL + - Optimize the `Add Index` operation to greatly improve the execution speed in some scenarios + - Attach a lower priority to the `Add Index` operation to reduce the impact on online business + - Output more detailed status information of the DDL jobs in `Admin Show DDL Jobs` + - Support querying the original statements of currently running DDL jobs using `Admin Show DDL Job Queries JobID` + - Support recovering the index 
data using `Admin Recover Index` for disaster recovery + - Support modifying Table Options using the `Alter` statement + +## PD + +- Support `Region Merge`, to merge empty Regions after deleting data [experimental] +- Support `Raft Learner` [experimental] +- Optimize the scheduler + - Make the scheduler adapt to different Region sizes + - Improve the priority and speed of restoring data during TiKV outage + - Speed up data transferring when removing a TiKV node + - Optimize the scheduling policies to prevent the disks from becoming full when the space of TiKV nodes is insufficient + - Improve the scheduling efficiency of the balance-leader scheduler + - Reduce the scheduling overhead of the balance-region scheduler + - Optimize the execution efficiency of the hot-region scheduler +- Operations interface and configuration + - Support TLS + - Support prioritizing the PD leaders + - Support configuring the scheduling policies based on labels + - Support configuring stores with a specific label not to schedule the Raft leader + - Support splitting Region manually to handle the hotspot in a single Region + - Support scattering a specified Region to manually adjust Region distribution in some cases + - Add check rules for configuration parameters and improve validity check of the configuration items +- Debugging interface + - Add the `Drop Region` debugging interface + - Add the interfaces to enumerate the health status of each PD +- Statistics + - Add statistics about abnormal Regions + - Add statistics about Region isolation level + - Add scheduling related metrics +- Performance + - Keep the PD leader and the etcd leader together in the same node to improve write performance + - Optimize the performance of Region heartbeat + +## TiKV + +- Features + - Protect critical configuration from incorrect modification + - Support `Region Merge` [experimental] + - Add the `Raw DeleteRange` API + - Add the `GetMetric` API + - Add `Raw Batch Put`, `Raw Batch Get`, `Raw 
Batch Delete` and `Raw Batch Scan` + - Add Column Family options for the RawKV API and support executing operation on a specific Column Family + - Support Streaming and Streaming Aggregation in Coprocessor + - Support configuring the request timeout of Coprocessor + - Carry timestamps with Region heartbeats + - Support modifying some RocksDB parameters online, such as `block-cache-size` + - Support configuring the behavior of Coprocessor when it encounters some warnings or errors + - Support starting in the importing data mode to reduce write amplification during the data importing process + - Support manually splitting Region in halves + - Improve the data recovery tool `tikv-ctl` + - Return more statistics in Coprocessor to guide the behavior of TiDB + - Support the `ImportSST` API to import SST files [experimental] + - Add the TiKV Importer binary to integrate with TiDB Lightning to import data quickly [experimental] +- Performance + - Optimize read performance using `ReadPool` and increase the `raw_get/get/batch_get` by 30% + - Improve metrics performance + - Inform PD immediately once the Raft snapshot process is completed to speed up balancing + - Solve performance jitter caused by RocksDB flushing + - Optimize the space reclaiming mechanism after deleting data + - Speed up garbage cleaning while starting the server + - Reduce the I/O overhead during replica migration using `DeleteFilesInRanges` +- Stability + - Fix the issue that gRPC call does not get returned when the PD leader switches + - Fix the issue that it is slow to offline nodes caused by snapshots + - Limit the temporary space usage consumed by migrating replicas + - Report the Regions that cannot elect a leader for a long time + - Update the Region size information in time according to compaction events + - Limit the size of scan lock to avoid request timeout + - Limit the memory usage when receiving snapshots to avoid OOM + - Increase the speed of CI test + - Fix the OOM issue caused by too many 
snapshots + - Configure `keepalive` of gRPC + - Fix the OOM issue caused by an increase of the Region number + +## TiSpark + +TiSpark uses a separate version number. The current TiSpark version is 1.0 GA. The components of TiSpark 1.0 provide distributed computing of TiDB data using Apache Spark. + +- Provide a gRPC communication framework to read data from TiKV +- Provide encoding and decoding of TiKV component data and communication protocol +- Provide calculation pushdown, which includes: + - Aggregate pushdown + - Predicate pushdown + - TopN pushdown + - Limit pushdown +- Provide index related support + - Transform predicate into Region key range or secondary index + - Optimize `Index Only` queries +    - Adaptively downgrade index scan to table scan per Region +- Provide cost-based optimization + - Support statistics + - Select index + - Estimate broadcast table cost +- Provide support for multiple Spark interfaces + - Support Spark Shell + - Support ThriftServer/JDBC + - Support Spark-SQL interaction + - Support PySpark Shell + - Support SparkR diff --git a/v2.0/releases/201.md b/v2.0/releases/201.md new file mode 100755 index 0000000000000..57d1f5c4d89a4 --- /dev/null +++ b/v2.0/releases/201.md @@ -0,0 +1,49 @@ +--- +title: TiDB 2.0.1 Release Notes +category: Releases +--- + +# TiDB 2.0.1 Release Notes + +On May 16, 2018, TiDB 2.0.1 is released. Compared with TiDB 2.0.0 (GA), this release has great improvement in MySQL compatibility and system stability. 
+ +## TiDB + +- Update the progress of `Add Index` to the DDL job information in real time +- Add the `tidb_auto_analyze_ratio` session variable to control the threshold value of automatic statistics update +- Fix an issue that not all residual states are cleaned up when the transaction commit fails +- Fix a bug about adding indexes in some conditions +- Fix a correctness issue when DDL modifies table operations in some concurrent scenarios +- Fix a bug that the result of `LIMIT` is incorrect in some conditions +- Fix a capitalization issue of the `ADMIN CHECK INDEX` statement to make its index name case insensitive +- Fix a compatibility issue of the `UNION` statement +- Fix a compatibility issue when inserting data of `TIME` type +- Fix a goroutine leak issue caused by `copIteratorTaskSender` in some conditions +- Add an option for TiDB to control the behaviour of Binlog failure +- Refactor the `Coprocessor` slow log to distinguish between the scenario of tasks with long processing time and long waiting time +- Log nothing when meeting a MySQL protocol handshake error, to avoid too many logs caused by the load balancer Keep Alive mechanism +- Refine the “Out of range value for column” error message +- Fix a bug when there is a subquery in an `Update` statement +- Change the behaviour of handling `SIGTERM`, and do not wait for all queries to terminate anymore + +## PD + +- Add the `Scatter Range` scheduler to balance Regions with the specified key range +- Optimize the scheduling of Merge Region to prevent the newly split Region from being merged +- Add Learner related metrics +- Fix the issue that the scheduler is mistakenly deleted after restart +- Fix the error that occurs when parsing the configuration file +- Fix the issue that the etcd leader and the PD leader are not synchronized +- Fix the issue that Learner still appears after it is closed +- Fix the issue that Regions fail to load because the packet size is too large + +## TiKV + +- Fix the 
issue that `SELECT FOR UPDATE` prevents others from reading +- Optimize the slow query log +- Reduce the number of `thread_yield` calls +- Fix the bug that raftstore is accidentally blocked when generating the snapshot +- Fix the issue that Learner cannot be successfully elected in special conditions +- Fix the issue that split might cause dirty read in extreme conditions +- Correct the default value of the read thread pool configuration +- Speed up Delete Range diff --git a/v2.0/releases/202.md b/v2.0/releases/202.md new file mode 100755 index 0000000000000..c5c13cd12e243 --- /dev/null +++ b/v2.0/releases/202.md @@ -0,0 +1,30 @@ +--- +title: TiDB 2.0.2 Release Notes +category: Releases +--- + +# TiDB 2.0.2 Release Notes + +On May 21, 2018, TiDB 2.0.2 is released. Compared with TiDB 2.0.1, this release has great improvement in system stability. + +## TiDB + +- Fix the issue of pushing down the Decimal division expression +- Support using the `USE INDEX` syntax in the `Delete` statement +- Forbid using the `shard_row_id_bits` feature in columns with `Auto-Increment` +- Add the timeout mechanism for writing Binlog + +## PD + +- Make the balance leader scheduler filter the disconnected nodes +- Modify the timeout of the transfer leader operator to 10s +- Fix the issue that the label scheduler does not schedule when the cluster Regions are in an unhealthy state +- Fix the improper scheduling issue of `evict leader scheduler` + +## TiKV + +- Fix the issue that the Raft log is not printed +- Support configuring more gRPC related parameters +- Support configuring the timeout range of leader election +- Fix the issue that the obsolete learner is not deleted +- Fix the issue that the snapshot intermediate file is mistakenly deleted \ No newline at end of file diff --git a/v2.0/releases/203.md b/v2.0/releases/203.md new file mode 100755 index 0000000000000..5fd47b0825588 --- /dev/null +++ b/v2.0/releases/203.md @@ -0,0 +1,37 @@ +--- +title: TiDB 2.0.3 Release Notes 
+category: Releases +--- + +# TiDB 2.0.3 Release Notes + +On June 1, 2018, TiDB 2.0.3 is released. Compared with TiDB 2.0.2, this release has great improvement in system compatibility and stability. + +## TiDB + +- Support modifying the log level online +- Support the `COM_CHANGE_USER` command +- Support using the `TIME` type parameters under the binary protocol +- Optimize the cost estimation of query conditions with the `BETWEEN` expression +- Do not display the `FOREIGN KEY` information in the result of `SHOW CREATE TABLE` +- Optimize the cost estimation for queries with the `LIMIT` clause +- Fix the issue about the `YEAR` type as the unique index +- Fix the issue about `ON DUPLICATE KEY UPDATE` in conditions without the unique index +- Fix the compatibility issue of the `CEIL` function +- Fix the accuracy issue of the `DIV` calculation in the `DECIMAL` type +- Fix the false alarm of `ADMIN CHECK TABLE` +- Fix the panic issue of `MAX`/`MIN` under specific expression parameters +- Fix the issue that the result of `JOIN` is null in special conditions +- Fix the `IN` expression issue when building and querying Range +- Fix a Range calculation issue when using `Prepare` to query and `Plan Cache` is enabled +- Fix the issue that the Schema information is frequently loaded in abnormal conditions + +## PD + +- Fix the panic issue when collecting hot-cache metrics in specific conditions +- Fix the issue about scheduling of the obsolete Regions + +## TiKV + +- Fix the bug that the learner flag mistakenly reports to PD +- Report an error instead of getting a result if `divisor/dividend` is 0 in `do_div_mod` \ No newline at end of file diff --git a/v2.0/releases/204.md b/v2.0/releases/204.md new file mode 100755 index 0000000000000..6e6ce10b8d4f7 --- /dev/null +++ b/v2.0/releases/204.md @@ -0,0 +1,41 @@ +--- +title: TiDB 2.0.4 Release Notes +category: Releases +--- + +# TiDB 2.0.4 Release Notes + +On June 15, 2018, TiDB 2.0.4 is released. 
Compared with TiDB 2.0.3, this release has great improvement in system compatibility and stability. + +## TiDB + +- Support the `ALTER TABLE t DROP COLUMN a CASCADE` syntax +- Support configuring the value of `tidb_snapshot` to TSO +- Refine the display of statement types in monitoring items +- Optimize the accuracy of query cost estimation +- Configure the `backoff max delay` parameter of gRPC +- Support configuring the memory threshold of a single statement in the configuration file +- Refactor the error of Optimizer +- Fix the side effects of the `Cast Decimal` data +- Fix the wrong result issue of the `Merge Join` operator in specific scenarios +- Fix the issue of converting the Null object to String +- Fix the issue of casting the JSON type of data to the JSON type +- Fix the issue that the result order is not consistent with MySQL in the condition of `Union` + `OrderBy` +- Fix the compliance rules issue when the `Union` statement checks the `Limit/OrderBy` clause +- Fix the compatibility issue of the `Union All` result +- Fix a bug in predicate pushdown +- Fix the compatibility issue of the `Union` statement with the `For Update` clause +- Fix the issue that the `concat_ws` function mistakenly truncates the result + +## PD + +- Improve the behavior of the unset scheduling argument `max-pending-peer-count` by changing it to no limit for the maximum number of `PendingPeer`s + +## TiKV + +- Add the RocksDB `PerfContext` interface for debugging +- Remove the `import-mode` parameter +- Add the `region-properties` command for `tikv-ctl` +- Fix the issue that `reverse-seek` is slow when many RocksDB tombstones exist +- Fix the crash issue caused by `do_sub` +- Make GC record the log when GC encounters many versions of data diff --git a/v2.0/releases/205.md b/v2.0/releases/205.md new file mode 100755 index 0000000000000..306b0178eacde --- /dev/null +++ b/v2.0/releases/205.md @@ -0,0 +1,40 @@ +--- +title: TiDB 2.0.5 Release Notes +category: Releases +--- + +# TiDB 
2.0.5 Release Notes + +On July 6, 2018, TiDB 2.0.5 is released. Compared with TiDB 2.0.4, this release has great improvement in system compatibility and stability. + +## TiDB + +- New Features + - Add the `tidb_disable_txn_auto_retry` system variable which is used to disable the automatic retry of transactions [#6877](https://github.com/pingcap/tidb/pull/6877) +- Improvements + - Optimize the cost calculation of `Selection` to make the result more accurate [#6989](https://github.com/pingcap/tidb/pull/6989) + - Select the query condition that completely matches the unique index or the primary key as the query path directly [#6966](https://github.com/pingcap/tidb/pull/6966) + - Execute necessary cleanup when failing to start the service [#6964](https://github.com/pingcap/tidb/pull/6964) + - Handle `\N` as NULL in the `Load Data` statement [#6962](https://github.com/pingcap/tidb/pull/6962) + - Optimize the code structure of CBO [#6953](https://github.com/pingcap/tidb/pull/6953) + - Report the monitoring metrics earlier when starting the service [#6931](https://github.com/pingcap/tidb/pull/6931) + - Optimize the format of slow queries by removing the line breaks in SQL statements and adding user information [#6920](https://github.com/pingcap/tidb/pull/6920) + - Support multiple asterisks in comments [#6858](https://github.com/pingcap/tidb/pull/6858) +- Bug Fixes + - Fix the issue that `KILL QUERY` always requires SUPER privilege [#7003](https://github.com/pingcap/tidb/pull/7003) + - Fix the issue that users might fail to login when the number of users exceeds 1024 [#6986](https://github.com/pingcap/tidb/pull/6986) + - Fix an issue about inserting unsigned `float`/`double` data [#6940](https://github.com/pingcap/tidb/pull/6940) + - Fix the compatibility of the `COM_FIELD_LIST` command to resolve the panic issue in some MariaDB clients [#6929](https://github.com/pingcap/tidb/pull/6929) + - Fix the `CREATE TABLE IF NOT EXISTS LIKE` behavior 
[#6928](https://github.com/pingcap/tidb/pull/6928) + - Fix an issue in the process of TopN pushdown [#6923](https://github.com/pingcap/tidb/pull/6923) + - Fix the ID record issue of the currently processing row when an error occurs in executing `Add Index` [#6903](https://github.com/pingcap/tidb/pull/6903) + +## PD + +- Fix the issue that replicas migration uses up TiKV disks space in some scenarios +- Fix the crash issue caused by `AdjacentRegionScheduler` + +## TiKV + +- Fix the potential overflow issue in decimal operations +- Fix the dirty read issue that might occur in the process of merging \ No newline at end of file diff --git a/v2.0/releases/206.md b/v2.0/releases/206.md new file mode 100755 index 0000000000000..27b9bc9cabb4f --- /dev/null +++ b/v2.0/releases/206.md @@ -0,0 +1,49 @@ +--- +title: TiDB 2.0.6 Release Notes +category: Releases +--- + +# TiDB 2.0.6 Release Notes + +On August 6, 2018, TiDB 2.0.6 is released. Compared with TiDB 2.0.5, this release has great improvement in system compatibility and stability. 
+ +## TiDB + +- Improvements + - Make "set system variable" log shorter to save disk space [#7031](https://github.com/pingcap/tidb/pull/7031) + - Record slow operations during the execution of `ADD INDEX` in the log, to make troubleshooting easier [#7083](https://github.com/pingcap/tidb/pull/7083) + - Reduce transaction conflicts when updating statistics [#7138](https://github.com/pingcap/tidb/pull/7138) + - Improve the accuracy of row count estimation when the values pending to be estimated exceed the statistics range [#7185](https://github.com/pingcap/tidb/pull/7185) + - Choose the table with a smaller estimated row count as the outer table for `Index Join` to improve its execution efficiency [#7277](https://github.com/pingcap/tidb/pull/7277) + - Add a recovery mechanism for panics that occur during the execution of `ANALYZE TABLE`, to prevent the tidb-server from becoming unavailable because of abnormal behavior in the process of collecting statistics [#7228](https://github.com/pingcap/tidb/pull/7228) + - Return `NULL` and the corresponding warning when the results of `RPAD`/`LPAD` exceed the value of the `max_allowed_packet` system variable, compatible with MySQL [#7244](https://github.com/pingcap/tidb/pull/7244) + - Set the upper limit of the placeholder count in the `PREPARE` statement to 65535, compatible with MySQL [#7250](https://github.com/pingcap/tidb/pull/7250) +- Bug Fixes + - Fix the issue that the `DROP USER` statement is incompatible with MySQL behavior in some cases [#7014](https://github.com/pingcap/tidb/pull/7014) + - Fix the issue that statements like `INSERT`/`LOAD DATA` run out of memory after enabling `tidb_batch_insert` [#7092](https://github.com/pingcap/tidb/pull/7092) + - Fix the issue that the statistics fail to automatically update when the data of a table keeps updating [#7093](https://github.com/pingcap/tidb/pull/7093) + - Fix the issue that the firewall breaks inactive gRPC connections [#7099](https://github.com/pingcap/tidb/pull/7099) + - Fix the issue 
that the prefix index returns a wrong result in some scenarios [#7126](https://github.com/pingcap/tidb/pull/7126) + - Fix the panic issue caused by outdated statistics in some scenarios [#7155](https://github.com/pingcap/tidb/pull/7155) + - Fix the issue that one piece of index data is missed after the `ADD INDEX` operation in some scenarios [#7156](https://github.com/pingcap/tidb/pull/7156) + - Fix the wrong result issue when querying `NULL` values using the unique index in some scenarios [#7172](https://github.com/pingcap/tidb/pull/7172) + - Fix the garbled output issue of the `DECIMAL` multiplication result in some scenarios [#7212](https://github.com/pingcap/tidb/pull/7212) + - Fix the wrong result issue of the `DECIMAL` modulo operation in some scenarios [#7245](https://github.com/pingcap/tidb/pull/7245) + - Fix the issue that the `UPDATE`/`DELETE` statement in a transaction returns a wrong result for some special sequences of statements [#7219](https://github.com/pingcap/tidb/pull/7219) + - Fix the panic issue of the `UNION ALL`/`UPDATE` statement during the process of building the execution plan in some scenarios [#7225](https://github.com/pingcap/tidb/pull/7225) + - Fix the issue that the range of the prefix index is calculated incorrectly in some scenarios [#7231](https://github.com/pingcap/tidb/pull/7231) + - Fix the issue that the `LOAD DATA` statement fails to write the binlog in some scenarios [#7242](https://github.com/pingcap/tidb/pull/7242) + - Fix the wrong result issue of `SHOW CREATE TABLE` during the execution of `ADD INDEX` in some scenarios [#7243](https://github.com/pingcap/tidb/pull/7243) + - Fix the issue that a panic occurs when `Index Join` does not initialize timestamps in some scenarios [#7246](https://github.com/pingcap/tidb/pull/7246) + - Fix the false alarm issue that occurs when `ADMIN CHECK TABLE` mistakenly uses the session time zone [#7258](https://github.com/pingcap/tidb/pull/7258) + - Fix the issue that `ADMIN CLEANUP INDEX` does not clean up 
the index in some scenarios [#7265](https://github.com/pingcap/tidb/pull/7265) + - Disable the Read Committed isolation level [#7282](https://github.com/pingcap/tidb/pull/7282) + +## TiKV + +- Improvements + - Enlarge the scheduler's default slots to reduce false conflicts + - Reduce continuous records of rollback transactions, to improve the Read performance when conflicts are extremely severe + - Limit the size and number of RocksDB log files, to reduce unnecessary disk usage in long-running conditions +- Bug Fixes + - Fix the crash issue when converting the data type from string to decimal diff --git a/v2.0/releases/21beta.md b/v2.0/releases/21beta.md new file mode 100755 index 0000000000000..cd7e8d9080f6b --- /dev/null +++ b/v2.0/releases/21beta.md @@ -0,0 +1,85 @@ +--- +title: TiDB 2.1 Beta Release Notes +category: Releases +--- + +# TiDB 2.1 Beta Release Notes + +On June 29, 2018, TiDB 2.1 Beta is released! Compared with TiDB 2.0, this release has great improvements in stability, the SQL optimizer, statistics information, and the execution engine. 
+ +## TiDB + +- SQL Optimizer + - Optimize the selection range of `Index Join` to improve the execution performance + - Optimize correlated subqueries, push down `Filter`, and extend the index range, to improve the efficiency of some queries by orders of magnitude + - Support `Index Hint` and `Join Hint` in the `UPDATE` and `DELETE` statements + - Validate the `TIDB_SMJ` Hint when no available index exists + - Support pushdown of the `ABS`, `CEIL`, `FLOOR`, `IS TRUE`, and `IS FALSE` functions + - Handle the `IF` and `IFNULL` functions specially in the constant folding process +- SQL Execution Engine + - Implement parallel `Hash Aggregate` operators and improve the computing performance of `Hash Aggregate` by 350% in some scenarios + - Implement parallel `Project` operators and improve the performance by 74% in some scenarios + - Read the data of the `Inner` table and `Outer` table of `Hash Join` concurrently to improve the execution performance + - Fix incorrect results of `INSERT … ON DUPLICATE KEY UPDATE …` in some scenarios + - Fix incorrect results of the `CONCAT_WS`, `FLOOR`, `CEIL`, and `DIV` built-in functions +- Server + - Add the HTTP API to scatter the distribution of table Regions in the TiKV cluster + - Add the `auto_analyze_ratio` system variable to control the threshold value of automatic `Analyze` + - Add the HTTP API to control whether to enable the general log + - Add the HTTP API to modify the log level online + - Add the user information in the general log and the slow query log + - Support the server-side cursor +- Compatibility + - Support more MySQL syntax + - Make the `bit` aggregate function support the `ALL` parameter + - Support the `SHOW PRIVILEGES` statement +- DML + - Decrease the memory usage of the `INSERT INTO SELECT` statement + - Fix the performance issue of `PlanCache` + - Add the `tidb_retry_limit` system variable to control the automatic retry count of transactions + - Add the `tidb_disable_txn_auto_retry` system variable to control 
whether the transaction retries automatically + - Fix the precision issue of written data of the `time` type + - Queue locally conflicting transactions to optimize the performance of conflicting transactions + - Fix `Affected Rows` of the `UPDATE` statement + - Optimize the performance of the `insert ignore on duplicate key update` statement +- DDL + - Optimize the execution speed of the `CreateTable` statement + - Optimize the execution speed of `ADD INDEX` and improve it greatly in some scenarios + - Fix the issue that the number of columns added by `Alter table add column` exceeds the limit of the number of table columns + - Fix the issue that DDL job retries lead to increasing pressure on TiKV in abnormal conditions + - Fix the issue that TiDB continuously reloads the schema information in abnormal conditions + - Do not output the `FOREIGN KEY` related information in the result of `SHOW CREATE TABLE` + - Support the `select tidb_is_ddl_owner()` statement to facilitate judging whether TiDB is the `DDL Owner` + - Fix the issue that the index on a `Year` type column is deleted in some scenarios + - Fix the table renaming issue in the concurrent execution scenario + - Support the `AlterTableForce` syntax + - Support the `AlterTableRenameIndex` syntax with `FromKey` and `ToKey` + - Add the table name and database name in the output information of `admin show ddl jobs` + +## PD + +- Enable Raft PreVote between PD nodes to avoid leader re-election when the network recovers after network isolation +- Mitigate the issue that the Balance Scheduler schedules small Regions frequently +- Optimize the hotspot scheduler to improve its adaptability to jitters in traffic statistics +- Skip the Regions with a large number of rows when scheduling `region merge` +- Enable `raft learner` by default to lower the risk of unavailable data caused by machine failure during scheduling +- Remove `max-replica` from `pd-recover` +- Add `Filter` metrics +- Fix the issue that Region 
information is not updated after tikv-ctl unsafe recovery +- Fix the issue that the TiKV disk space is used up by replica migration in some scenarios +- Compatibility notes + - Rolling back to v2.0.x or earlier is not supported due to the storage engine update in the new version + - Enable `raft learner` by default in the new version of PD. If the cluster is upgraded from 1.x to 2.1, the machine should be stopped before upgrade or a rolling update should be first applied to TiKV and then PD + + +## TiKV + +- Upgrade Rust to the `nightly-2018-06-14` version +- Enable `Raft PreVote` to avoid leader re-election when the network recovers after network isolation +- Add a metric to display the number of files and `ingest` related information in each layer of RocksDB +- Print the `key` with too many versions when GC runs +- Use `static metric` to optimize multi-label metric performance (YCSB `raw get` is improved by 3%) +- Remove `box` in multiple modules and use patterns to improve the operating performance (YCSB `raw get` is improved by 3%) +- Use `asynchronous log` to improve the performance of writing logs +- Add a metric to collect the thread status +- Decrease the number of memory copies by reducing the use of `box` in the application to improve the performance diff --git a/v2.0/releases/21rc1.md b/v2.0/releases/21rc1.md new file mode 100755 index 0000000000000..e48f20f3729ab --- /dev/null +++ b/v2.0/releases/21rc1.md @@ -0,0 +1,155 @@ +--- +title: TiDB 2.1 RC1 Release Notes +category: Releases +--- + +# TiDB 2.1 RC1 Release Notes + +On August 24, 2018, TiDB 2.1 RC1 is released! Compared with TiDB 2.1 Beta, this release has great improvements in stability, the SQL optimizer, statistics information, and the execution engine. 
+ +## TiDB + +- SQL Optimizer + - Fix the issue that a wrong result is returned after the correlated subquery is decorrelated in some cases [#6972](https://github.com/pingcap/tidb/pull/6972) + - Optimize the output result of `Explain` [#7011](https://github.com/pingcap/tidb/pull/7011) [#7041](https://github.com/pingcap/tidb/pull/7041) + - Optimize the choosing strategy of the outer table for `IndexJoin` [#7019](https://github.com/pingcap/tidb/pull/7019) + - Remove the Plan Cache of the non-`PREPARE` statement [#7040](https://github.com/pingcap/tidb/pull/7040) + - Fix the issue that the `INSERT` statement is not parsed and executed correctly in some cases [#7068](https://github.com/pingcap/tidb/pull/7068) + - Fix the issue that the `IndexJoin` result is not correct in some cases [#7150](https://github.com/pingcap/tidb/pull/7150) + - Fix the issue that the `NULL` value cannot be found using the unique index in some cases [#7163](https://github.com/pingcap/tidb/pull/7163) + - Fix the range computing issue of the prefix index in UTF-8 [#7194](https://github.com/pingcap/tidb/pull/7194) + - Fix the incorrect result caused by eliminating the `Project` operator in some cases [#7257](https://github.com/pingcap/tidb/pull/7257) + - Fix the issue that `USE INDEX(PRIMARY)` cannot be used when the primary key is an integer [#7316](https://github.com/pingcap/tidb/pull/7316) + - Fix the issue that the index range cannot be computed using the correlated column in some cases [#7357](https://github.com/pingcap/tidb/pull/7357) +- SQL Execution Engine + - Fix the issue that the daylight saving time is not computed correctly in some cases [#6823](https://github.com/pingcap/tidb/pull/6823) + - Refactor the aggregation function framework to improve the execution efficiency of the `Stream` and `Hash` aggregation operators [#6852](https://github.com/pingcap/tidb/pull/6852) + - Fix the issue that the `Hash` aggregation operator cannot exit normally in some cases 
[#6982](https://github.com/pingcap/tidb/pull/6982) + - Fix the issue that `BIT_AND`/`BIT_OR`/`BIT_XOR` does not handle the non-integer data correctly [#6994](https://github.com/pingcap/tidb/pull/6994) + - Optimize the execution speed of the `REPLACE INTO` statement and increase the performance by nearly 10 times [#7027](https://github.com/pingcap/tidb/pull/7027) + - Optimize the memory usage of time type data, decreasing it by fifty percent [#7043](https://github.com/pingcap/tidb/pull/7043) + - Fix the issue that the result of a `UNION` statement mixing signed and unsigned integers is not compatible with MySQL [#7112](https://github.com/pingcap/tidb/pull/7112) + - Fix the panic issue caused by too much memory requested by `LPAD`/`RPAD`/`TO_BASE64`/`FROM_BASE64`/`REPEAT` [#7171](https://github.com/pingcap/tidb/pull/7171) [#7266](https://github.com/pingcap/tidb/pull/7266) [#7409](https://github.com/pingcap/tidb/pull/7409) [#7431](https://github.com/pingcap/tidb/pull/7431) + - Fix the incorrect result when `MergeJoin`/`IndexJoin` handles the `NULL` value [#7255](https://github.com/pingcap/tidb/pull/7255) + - Fix the incorrect result of `Outer Join` in some cases [#7288](https://github.com/pingcap/tidb/pull/7288) + - Improve the error message of `Data Truncated` to facilitate locating the wrong data and the corresponding field in the table [#7401](https://github.com/pingcap/tidb/pull/7401) + - Fix the incorrect result for `decimal` in some cases [#7001](https://github.com/pingcap/tidb/pull/7001) [#7113](https://github.com/pingcap/tidb/pull/7113) [#7202](https://github.com/pingcap/tidb/pull/7202) [#7208](https://github.com/pingcap/tidb/pull/7208) + - Optimize the point select performance [#6937](https://github.com/pingcap/tidb/pull/6937) + - Prohibit the `Read Committed` isolation level to avoid an underlying problem [#7211](https://github.com/pingcap/tidb/pull/7211) + - Fix the incorrect result of 
`LTRIM`/`RTRIM`/`TRIM` in some cases [#7291](https://github.com/pingcap/tidb/pull/7291) + - Fix the issue that the `MaxOneRow` operator cannot guarantee that the returned result does not exceed one row [#7375](https://github.com/pingcap/tidb/pull/7375) + - Divide the Coprocessor requests with too many ranges [#7454](https://github.com/pingcap/tidb/pull/7454) +- Statistics + - Optimize the mechanism of statistics dynamic collection [#6796](https://github.com/pingcap/tidb/pull/6796) + - Fix the issue that `Auto Analyze` does not work when data is updated frequently [#7022](https://github.com/pingcap/tidb/pull/7022) + - Decrease write conflicts during the statistics dynamic update process [#7124](https://github.com/pingcap/tidb/pull/7124) + - Optimize the cost estimation when the statistics are incorrect [#7175](https://github.com/pingcap/tidb/pull/7175) + - Optimize the `AccessPath` cost estimation strategy [#7233](https://github.com/pingcap/tidb/pull/7233) +- Server + - Fix the bug in loading privilege information [#6976](https://github.com/pingcap/tidb/pull/6976) + - Fix the issue that the `Kill` command is too strict with the privilege check [#6954](https://github.com/pingcap/tidb/pull/6954) + - Fix the issue of removing some binary numeric types [#6922](https://github.com/pingcap/tidb/pull/6922) + - Shorten the output log [#7029](https://github.com/pingcap/tidb/pull/7029) + - Handle the `mismatchClusterID` issue [#7053](https://github.com/pingcap/tidb/pull/7053) + - Add the `advertise-address` configuration item [#7078](https://github.com/pingcap/tidb/pull/7078) + - Add the `GrpcKeepAlive` option [#7100](https://github.com/pingcap/tidb/pull/7100) + - Add the connection or `Token` time monitor [#7110](https://github.com/pingcap/tidb/pull/7110) + - Optimize the data decoding performance [#7149](https://github.com/pingcap/tidb/pull/7149) + - Add the `PROCESSLIST` table in `INFORMATION_SCHEMA` [#7236](https://github.com/pingcap/tidb/pull/7236) + - Fix the order issue 
when multiple rules are hit in verifying the privilege [#7211](https://github.com/pingcap/tidb/pull/7211) + - Change some default values of encoding related system variables to UTF-8 [#7198](https://github.com/pingcap/tidb/pull/7198) + - Make the slow query log show more detailed information [#7302](https://github.com/pingcap/tidb/pull/7302) + - Support registering tidb-server related information in PD and obtaining this information by HTTP API [#7082](https://github.com/pingcap/tidb/pull/7082) +- Compatibility + - Support the session variables `warning_count` and `error_count` [#6945](https://github.com/pingcap/tidb/pull/6945) + - Add a `Scope` check when reading the system variables [#6958](https://github.com/pingcap/tidb/pull/6958) + - Support the `MAX_EXECUTION_TIME` syntax [#7012](https://github.com/pingcap/tidb/pull/7012) + - Support more statements of the `SET` syntax [#7020](https://github.com/pingcap/tidb/pull/7020) + - Add a validity check when setting system variables [#7117](https://github.com/pingcap/tidb/pull/7117) + - Add verification of the number of `PlaceHolder`s in the `Prepare` statement [#7162](https://github.com/pingcap/tidb/pull/7162) + - Support `set character_set_results = null` [#7353](https://github.com/pingcap/tidb/pull/7353) + - Support the `flush status` syntax [#7369](https://github.com/pingcap/tidb/pull/7369) + - Fix the column size of the `SET` and `ENUM` types in `information_schema` [#7347](https://github.com/pingcap/tidb/pull/7347) + - Support the `NATIONAL CHARACTER` syntax of statements for creating a table [#7378](https://github.com/pingcap/tidb/pull/7378) + - Support the `CHARACTER SET` syntax in the `LOAD DATA` statement [#7391](https://github.com/pingcap/tidb/pull/7391) + - Fix the column information of the `SET` and `ENUM` types [#7417](https://github.com/pingcap/tidb/pull/7417) + - Support the `IDENTIFIED WITH` syntax in the `CREATE USER` statement [#7402](https://github.com/pingcap/tidb/pull/7402) + - Fix the precision loss 
issue during the `TIMESTAMP` computation process [#7418](https://github.com/pingcap/tidb/pull/7418) + - Support the validity verification of more `SYSTEM` variables [#7196](https://github.com/pingcap/tidb/pull/7196) + - Fix the incorrect result when the `CHAR_LENGTH` function computes the binary string [#7410](https://github.com/pingcap/tidb/pull/7410) + - Fix the incorrect `CONCAT` result in a statement involving `GROUP BY` [#7448](https://github.com/pingcap/tidb/pull/7448) + - Fix the imprecise type length issue when casting the `DECIMAL` type to the `STRING` type [#7451](https://github.com/pingcap/tidb/pull/7451) +- DML + - Fix the stability issue of the `Load Data` statement [#6927](https://github.com/pingcap/tidb/pull/6927) + - Fix the memory usage issue when performing some `Batch` operations [#7086](https://github.com/pingcap/tidb/pull/7086) + - Improve the performance of the `Replace Into` statement [#7027](https://github.com/pingcap/tidb/pull/7027) + - Fix the inconsistent precision issue when writing `CURRENT_TIMESTAMP` [#7355](https://github.com/pingcap/tidb/pull/7355) +- DDL + - Improve the method of DDL judging whether `Schema` is synchronized to avoid misjudgment in some cases [#7319](https://github.com/pingcap/tidb/pull/7319) + - Fix the `SHOW CREATE TABLE` result in the adding index process [#6993](https://github.com/pingcap/tidb/pull/6993) + - Allow the default value of `text`/`blob`/`json` to be NULL in non-strict `sql-mode` [#7230](https://github.com/pingcap/tidb/pull/7230) + - Fix the `ADD INDEX` issue in some cases [#7142](https://github.com/pingcap/tidb/pull/7142) + - Greatly increase the speed of the `UNIQUE-KEY` index adding operation [#7132](https://github.com/pingcap/tidb/pull/7132) + - Fix the truncating issue of the prefix index in the UTF-8 character set [#7109](https://github.com/pingcap/tidb/pull/7109) + - Add the session variable `tidb_ddl_reorg_priority` to control the priority of the `add-index` operation 
[#7116](https://github.com/pingcap/tidb/pull/7116) + - Fix the display issue of `AUTO-INCREMENT` in `information_schema.tables` [#7037](https://github.com/pingcap/tidb/pull/7037) + - Support the `admin show ddl jobs ` command and support outputting a specified number of DDL jobs [#7028](https://github.com/pingcap/tidb/pull/7028) + - Support parallel DDL job execution [#6955](https://github.com/pingcap/tidb/pull/6955) +- [Table Partition](https://github.com/pingcap/tidb/projects/6) (Experimental) + - Support top level partition + - Support `Range Partition` + +## PD + +- Features + - Introduce the version control mechanism and support rolling update of the cluster with compatibility + - Enable the `region merge` feature + - Support the `GetPrevRegion` interface + - Support splitting Regions in batch + - Support storing the GC safepoint +- Improvements + - Mitigate the issue that TSO allocation is affected by the system clock going backwards + - Optimize the performance of handling Region heartbeats + - Optimize the Region tree performance + - Optimize the performance of computing hotspot statistics + - Optimize the error codes returned by the API interface + - Add options for controlling scheduling strategies + - Prohibit using special characters in `label` + - Improve the scheduling simulator + - Support splitting Regions using statistics in pd-ctl + - Support formatting JSON output by calling `jq` in pd-ctl + - Add metrics about the etcd Raft state machine +- Bug fixes + - Fix the issue that the namespace is not reloaded after switching the Leader + - Fix the issue that namespace scheduling exceeds the schedule limit + - Fix the issue that hotspot scheduling exceeds the schedule limit + - Fix the issue that wrong logs are output when the PD client closes + - Fix the wrong statistics of Region heartbeat latency + +## TiKV + +- Features + - Support `batch split` to avoid too large Regions caused by the Write operation on hot Regions + - Support splitting Regions based on the number of 
rows to improve the index scan efficiency +- Performance + - Use `LocalReader` to separate the Read operation from the raftstore thread to lower the Read latency + - Refactor the MVCC framework, optimize the memory usage and improve the scan Read performance + - Support splitting Regions based on statistics estimation to reduce the I/O usage + - Mitigate the issue that the Read performance is affected by continuous Write operations on the rollback record + - Reduce the memory usage of pushdown aggregation computing +- Improvements + - Add the pushdown support for a large number of built-in functions and better charset support + - Optimize the GC workflow, improve the GC speed and decrease the impact of GC on the system + - Enable `prevote` to speed up service recovery when the network is abnormal + - Add the related configuration items of RocksDB log files + - Adjust the default configuration of `scheduler_latch` + - Support setting whether to compact the data in the bottom layer of RocksDB when using tikv-ctl to compact data manually + - Add the check for environment variables when starting TiKV + - Support dynamically configuring the `dynamic_level_bytes` parameter based on the existing data + - Support customizing the log format + - Integrate tikv-fail in tikv-ctl + - Add I/O metrics of threads +- Bug fixes + - Fix decimal related issues + - Fix the issue that `gRPC max_send_message_len` is set mistakenly + - Fix the issue caused by misconfiguration of `region_size` diff --git a/v2.0/releases/2rc1.md b/v2.0/releases/2rc1.md new file mode 100755 index 0000000000000..3c742e8353a20 --- /dev/null +++ b/v2.0/releases/2rc1.md @@ -0,0 +1,39 @@ +--- +title: TiDB 2.0 RC1 Release Notes +category: Releases +--- + +# TiDB 2.0 RC1 Release Notes + +On March 9, 2018, TiDB 2.0 RC1 is released. This release has great improvements in MySQL compatibility, SQL optimization, and stability. 
+ +## TiDB: + +- Support limiting the memory usage by a single SQL statement, to reduce the risk of OOM +- Support pushing the Stream Aggregate operator down to TiKV +- Support validating the configuration file +- Support obtaining the information of TiDB configuration through HTTP API +- Compatible with more MySQL syntax in Parser +- Improve the compatibility with Navicat +- Improve the optimizer and extract common expressions with multiple OR conditions, to choose a better query plan +- Improve the optimizer and convert subqueries to Join operators in more scenarios, to choose a better query plan +- Resolve Lock in the Batch mode to increase the garbage collection speed +- Fix the length of the Boolean field to improve compatibility +- Optimize the Add Index operation and give lower priority to all write and read operations, to reduce the impact on online business + +## PD: + +- Optimize the logic of code used to check the Region status to improve performance +- Optimize the output of log information in abnormal conditions to facilitate debugging +- Fix the monitoring statistics indicating that the disk space of TiKV nodes is insufficient +- Fix the wrong reporting issue of the health interface when TLS is enabled +- Fix the issue that concurrent addition of replicas might exceed the threshold value of configuration, to improve stability + +## TiKV: + +- Fix the issue that the gRPC call is not cancelled when PD leaders switch +- Protect important configuration which cannot be changed after initial configuration +- Add gRPC APIs used to obtain metrics +- Check whether SSD is used when you start the cluster +- Optimize the read performance using ReadPool, and improve the performance by 30% in the `raw get` test +- Improve metrics and optimize their usage \ No newline at end of file diff --git a/v2.0/releases/2rc3.md b/v2.0/releases/2rc3.md new file mode 100755 index 0000000000000..03365e6a5b24e --- /dev/null +++ b/v2.0/releases/2rc3.md @@ -0,0 +1,59 @@ +--- +title: TiDB 2.0 RC3 
Release Notes +category: Releases +--- + +# TiDB 2.0 RC3 Release Notes + +On March 23, 2018, TiDB 2.0 RC3 is released. This release has great improvements in MySQL compatibility, SQL optimization, and stability. + +## TiDB: + +- Fix the wrong result issue of `MAX/MIN` in some scenarios +- Fix the issue that the result of `Sort Merge Join` is not in the order of the Join Key in some scenarios +- Fix the error of comparison between `uint` and `int` in boundary conditions +- Optimize checks on length and precision of the floating point type, to improve compatibility with MySQL +- Improve the parsing error log of the time type and add more error information +- Improve memory control and add statistics about `IndexLookupExecutor` memory +- Optimize the execution speed of `ADD INDEX` to greatly increase the speed in some scenarios +- Use the Stream Aggregation operator when the `GROUP BY` substatement is empty, to increase the speed +- Support disabling the `Join Reorder` optimization in the optimizer using `STRAIGHT_JOIN` +- Output more detailed status information of DDL jobs in `ADMIN SHOW DDL JOBS` +- Support querying the original statements of currently running DDL jobs using `ADMIN SHOW DDL JOB QUERIES` +- Support recovering the index data using `ADMIN RECOVER INDEX` for disaster recovery +- Attach a lower priority to the `ADD INDEX` operation to reduce the impact on online business +- Support aggregation functions with JSON type parameters, such as `SUM/AVG` +- Support modifying the `lower_case_table_names` system variable in the configuration file, to support the OGG data synchronization tool +- Improve compatibility with the Navicat management tool +- Support using implicit RowID in CRUD operations + +## PD: + +- Support Region Merge, to merge empty Regions or small Regions after deleting data +- Ignore the nodes that have a lot of pending peers when adding replicas, to improve the speed of restoring replicas or making nodes offline +- Fix the frequent scheduling issue 
caused by a large number of empty Regions +- Optimize the scheduling speed of leader balance in scenarios of unbalanced resources within different labels +- Add more statistics about abnormal Regions + +## TiKV: + +- Support Region Merge +- Inform PD immediately once the Raft snapshot process is completed, to speed up balancing +- Add the Raw DeleteRange API +- Add the GetMetric API +- Reduce the I/O fluctuation caused by RocksDB sync files +- Optimize the space reclaiming mechanism after deleting data +- Improve the data recovery tool `tikv-ctl` +- Fix the issue that taking nodes down is slow due to snapshots +- Support streaming in Coprocessor +- Support ReadPool and increase the performance of `raw_get/get/batch_get` by 30% +- Support configuring the request timeout of Coprocessor +- Support streaming aggregation in Coprocessor +- Carry time information in Region heartbeats +- Limit the space usage of snapshot files to avoid consuming too much disk space +- Record and report the Regions that cannot elect a leader for a long time +- Speed up garbage cleaning when starting the server +- Update the size information about the corresponding Region according to compaction events +- Limit the size of `scan lock` to avoid request timeout +- Use `DeleteRange` to speed up Region deletion +- Support modifying RocksDB parameters online \ No newline at end of file diff --git a/v2.0/releases/2rc4.md b/v2.0/releases/2rc4.md new file mode 100755 index 0000000000000..3c1d98503689e --- /dev/null +++ b/v2.0/releases/2rc4.md @@ -0,0 +1,38 @@ +--- +title: TiDB 2.0 RC4 Release Notes +category: Releases +--- + +# TiDB 2.0 RC4 Release Notes + +On March 30, 2018, TiDB 2.0 RC4 is released. This release has great improvements in MySQL compatibility, SQL optimization, and stability. 
+ +## TiDB: + +- Support `SHOW GRANTS FOR CURRENT_USER();` +- Fix the issue that the `Expression` in `UnionScan` is not cloned +- Support the `SET TRANSACTION` syntax +- Fix the potential goroutine leak issue in `copIterator` +- Fix the issue that `admin check table` misjudges the unique index containing NULL values +- Support displaying floating point numbers using scientific notation +- Fix the type inference issue during binary literal computation +- Fix the issue in parsing the `CREATE VIEW` statement +- Fix the panic issue when one statement contains both `ORDER BY` and `LIMIT 0` +- Improve the execution performance of `DecodeBytes` +- Optimize `LIMIT 0` to `TableDual`, to avoid building useless execution plans + +## PD: + +- Support splitting a Region manually to handle the hot spot in a single Region +- Fix the issue that the label property is not displayed when `pdctl` runs `config show all` +- Optimize metrics and code structure + +## TiKV: + +- Limit the memory usage during receiving snapshots, to avoid OOM in extreme conditions +- Support configuring the behavior of Coprocessor when it encounters warnings +- Support the data importing mode in TiKV +- Support splitting a Region in the middle +- Increase the speed of CI tests +- Use `crossbeam channel` +- Fix the issue that too many logs are output due to a missing leader when TiKV is isolated \ No newline at end of file diff --git a/v2.0/releases/2rc5.md b/v2.0/releases/2rc5.md new file mode 100755 index 0000000000000..c35688334d858 --- /dev/null +++ b/v2.0/releases/2rc5.md @@ -0,0 +1,46 @@ +--- +title: TiDB 2.0 RC5 Release Notes +category: Releases +--- + +# TiDB 2.0 RC5 Release Notes + +On April 17, 2018, TiDB 2.0 RC5 is released. This release has great improvements in MySQL compatibility, SQL optimization, and stability. 
+ +## TiDB + +- Fix the issue about applying the `Top-N` pushdown rule +- Fix the estimation of the number of rows for the columns that contain NULL values +- Fix the zero value of the Binary type +- Fix the `BatchGet` issue within a transaction +- Clean up the written data while rolling back the `Add Index` operation, to reduce consumed space +- Optimize the `insert on duplicate key update` statement to improve the performance by 10 times +- Fix the issue about the type of the results returned by the `UNIX_TIMESTAMP` function +- Fix the issue that the NULL value is inserted while adding NOT NULL columns +- Support showing memory usage of the executing statements in the `Show Process List` statement +- Fix the issue that `Alter Table Modify Column` reports an error in extreme conditions +- Support setting the table comment using the `Alter` statement + +## PD + +- Add support for Raft Learner +- Optimize the Balance Region Scheduler to reduce scheduling overhead +- Adjust the default value of the `schedule-limit` configuration +- Fix the issue of allocating IDs frequently +- Fix the compatibility issue when adding a new scheduler + +## TiKV + +- Support compacting a specified Region using `compact` in `tikv-ctl` +- Support Batch Put, Batch Get, Batch Delete and Batch Scan in the RawKVClient +- Fix the OOM issue caused by too many snapshots +- Return more detailed error information in Coprocessor +- Support dynamically modifying the `block-cache-size` in TiKV through `tikv-ctl` +- Further improve `importer` +- Simplify the `ImportSST::Upload` interface +- Configure the `keepalive` property of gRPC +- Split `tikv-importer` from TiKV as an independent binary +- Provide statistics about the number of rows scanned by each `scan range` in Coprocessor +- Fix the compilation issue on the macOS system +- Fix the issue of misusing a RocksDB metric +- Support the `overflow as warning` option in Coprocessor \ No newline at end of file diff --git a/v2.0/releases/ga.md new 
file mode 100755 index 0000000000000..e0859994f5328 --- /dev/null +++ b/v2.0/releases/ga.md @@ -0,0 +1,269 @@ +--- +title: TiDB 1.0 release notes +category: Releases +--- + +# TiDB 1.0 Release Notes + +On October 16, 2017, TiDB 1.0 is released! This release is focused on MySQL compatibility, SQL optimization, stability, and performance. + +## TiDB: + +- The SQL query optimizer: + - Adjust the cost model + - Analyze pushdown + - Function signature pushdown +- Optimize the internal data format to reduce the interim data size +- Enhance the MySQL compatibility +- Support the `NO_SQL_CACHE` syntax and limit the cache usage in the storage engine +- Refactor the Hash Aggregator operator to reduce the memory usage +- Support the Stream Aggregator operator + +## PD: + +- Support read-flow-based balancing +- Support setting the Store weight and weight-based balancing + +## TiKV: + +- Coprocessor now supports more pushdown functions +- Support pushing down the sampling operation +- Support manually triggering data compaction to reclaim space quickly +- Improve the performance and stability +- Add a Debug API for debugging + +## TiSpark Beta Release: + +- Support configuration framework +- Support ThriftServer/JDBC and Spark SQL + +## Acknowledgement + +### Special thanks to the following enterprises and teams!
+ +- Archon +- Mobike +- Samsung Electronics +- SpeedyCloud +- Tencent Cloud +- UCloud + +### Thanks to the open source software and services from the following organizations and individuals: + +- Asta Xie +- CNCF +- CoreOS +- Databricks +- Docker +- Github +- Grafana +- gRPC +- Jepsen +- Kubernetes +- Namazu +- Prometheus +- RedHat +- RocksDB Team +- Rust Team + +### Thanks to the individual contributors: + +- 8cbx +- Akihiro Suda +- aliyx +- alston111111 +- andelf +- Andy Librian +- Arthur Yang +- astaxie +- Bai, Yang +- bailaohe +- Bin Liu +- Blame cosmos +- Breezewish +- Carlos Ferreira +- Ce Gao +- Changjian Zhang +- Cheng Lian +- Cholerae Hu +- Chu Chao +- coldwater +- Cole R Lawrence +- cuiqiu +- cuiyuan +- Cwen +- Dagang +- David Chen +- David Ding +- dawxy +- dcadevil +- Deshi Xiao +- Di Tang +- disksing +- dongxu +- dreamquster +- Drogon +- Du Chuan +- Dylan Wen +- eBoyy +- Eric Romano +- Ewan Chou +- Fiisio +- follitude +- Fred Wang +- fud +- fudali +- gaoyangxiaozhu +- Gogs +- goroutine +- Gregory Ian +- Guanqun Lu +- Guilherme Hübner Franco +- Haibin Xie +- Han Fei +- hawkingrei +- Hiroaki Nakamura +- hiwjd +- Hongyuan Wang +- Hu Ming +- Hu Ziming +- Huachao Huang +- HuaiyuXu +- Huxley Hu +- iamxy +- Ian +- insion +- iroi44 +- Ivan.Yang +- Jack Yu +- jacky liu +- Jan Mercl +- Jason W +- Jay +- Jay Lee +- Jianfei Wang +- Jiaxing Liang +- Jie Zhou +- jinhelin +- Jonathan Boulle +- Karl Ostendorf +- knarfeh +- Kuiba +- leixuechun +- li +- Li Shihai +- Liao Qiang +- Light +- lijian +- Lilian Lee +- Liqueur Librazy +- Liu Cong +- Liu Shaohui +- liubo0127 +- liyanan +- lkk2003rty +- Louis +- louishust +- luckcolors +- Lynn +- Mae Huang +- maiyang +- maxwell +- mengshangqi +- Michael Belenchenko +- mo2zie +- morefreeze +- MQ +- mxlxm +- Neil Shen +- netroby +- ngaut +- Nicole Nie +- nolouch +- onlymellb +- overvenus +- PaladinTyrion +- paulg +- Priya Seth +- qgxiaozhan +- qhsong +- Qiannan +- qiukeren +- qiuyesuifeng +- queenypingcap +- qupeng +- Rain Li +- 
ranxiaolong +- Ray +- Rick Yu +- shady +- ShawnLi +- Shen Li +- Sheng Tang +- Shirly +- Shuai Li +- ShuNing +- ShuYu Wang +- siddontang +- silenceper +- Simon J Mudd +- Simon Xia +- skimmilk6877 +- sllt +- soup +- Sphinx +- Steffen +- sumBug +- sunhao2017 +- Tao Meng +- Tao Zhou +- tennix +- tiancaiamao +- TianGuangyu +- Tristan Su +- ueizhou +- UncP +- Unknwon +- v01dstar +- Van +- WangXiangUSTC +- wangyanjun +- wangyisong1996 +- weekface +- wegel +- Wei Fu +- Wenbin Xiao +- Wenting Li +- Wenxuan Shi +- winkyao +- woodpenker +- wuxuelian +- Xiang Li +- xiaojian cai +- Xuanjia Yang +- Xuanwo +- XuHuaiyu +- Yang Zhexuan +- Yann Autissier +- Yanzhe Chen +- Yiding Cui +- Yim +- youyouhu +- Yu Jun +- Yuwen Shen +- Zejun Li +- Zhang Yuning +- zhangjinpeng1987 +- ZHAO Yijun +- Zhe-xuan Yang +- ZhengQian +- ZhengQianFang +- zhengwanbo +- ZhiFeng Hu +- Zhiyuan Zheng +- Zhou Tao +- Zhoubirdblue +- zhouningnan +- Ziyi Yan +- zs634134578 +- zxylvlp +- zyguan +- zz-jason diff --git a/v2.0/releases/prega.md b/v2.0/releases/prega.md new file mode 100755 index 0000000000000..d66c2a9a42712 --- /dev/null +++ b/v2.0/releases/prega.md @@ -0,0 +1,39 @@ +--- +title: Pre-GA release notes +category: releases +--- + +# Pre-GA Release Notes + +On August 30, 2017, TiDB Pre-GA is released! This release is focused on MySQL compatibility, SQL optimization, stability, and performance. 
+ +## TiDB: + ++ The SQL query optimizer: + - Adjust the cost model + - Use index scan to handle `where` clauses with a comparison expression that has different types on each side + - Support the Greedy algorithm based Join Reorder ++ Many enhancements have been introduced to be more compatible with MySQL ++ Support `Natural Join` ++ Support the JSON type (Experimental), including querying, updating, and indexing of JSON fields ++ Prune useless data to reduce executor memory consumption ++ Support configuring the priority in SQL statements and automatically set the priority for some statements according to the query type ++ Complete the expression refactoring, increasing the speed by about 30% + +## Placement Driver (PD): + ++ Support manually changing the leader of the PD cluster + +## TiKV: + ++ Use a dedicated RocksDB instance to store the Raft log ++ Use `DeleteRange` to speed up replica deletion ++ Coprocessor now supports more pushdown operators ++ Improve the performance and stability + +## TiDB Connector for Spark Beta Release: + ++ Implement predicate pushdown ++ Implement aggregation pushdown ++ Implement range pruning ++ Capable of running the full TPC-H query set, except for one query that needs view support \ No newline at end of file diff --git a/v2.0/releases/rc1.md b/v2.0/releases/rc1.md new file mode 100755 index 0000000000000..089201df4542e --- /dev/null +++ b/v2.0/releases/rc1.md @@ -0,0 +1,43 @@ +--- +title: TiDB RC1 Release Notes +category: releases +--- + +# TiDB RC1 Release Notes + +On December 23, 2016, TiDB RC1 is released. See the following updates in this release: + +## TiKV: ++ The write speed has been improved. ++ The disk space usage is reduced. ++ Hundreds of TBs of data can be supported. ++ The stability is improved and TiKV can support a cluster with 200 nodes. ++ Supports the Raw KV API and the Golang client.
+ +## Placement Driver (PD): ++ The scheduling strategy framework is optimized and the strategy is now more flexible and reasonable. ++ The support for `label` is added to support Cross Data Center scheduling. ++ PD Controller is provided to operate the PD cluster more easily. + +## TiDB: ++ The following features are added or improved in the SQL query optimizer: + - Eager aggregation + - More detailed `EXPLAIN` information + - Parallelization of the `UNION` operator + - Optimization of the subquery performance + - Optimization of the conditional push-down + - Optimization of the Cost Based Optimizer (CBO) framework ++ The implementation of the time-related data types is refactored to improve the compatibility with MySQL. ++ More built-in functions in MySQL are supported. ++ The speed of the `add index` statement is enhanced. ++ The following statements are supported: + - Use the `CHANGE COLUMN` statement to change the name of a column. + - Use `MODIFY COLUMN` and `CHANGE COLUMN` of the `ALTER TABLE` statement for some column type conversions. + +## New tools: ++ `Loader` is added to be compatible with the `mydumper` data format in Percona and provides the following functions: + - Multi-threaded import + - Retry on error + - Resuming from a breakpoint + - Targeted optimization for TiDB ++ The tool for one-click deployment is added. diff --git a/v2.0/releases/rc2.md b/v2.0/releases/rc2.md new file mode 100755 index 0000000000000..8490b46fbf841 --- /dev/null +++ b/v2.0/releases/rc2.md @@ -0,0 +1,50 @@ +--- +title: TiDB RC2 Release Notes +category: releases +--- + +# TiDB RC2 Release Notes + +On March 1, 2017, TiDB RC2 is released! This release focuses on MySQL compatibility, the SQL query optimizer, system stability, and performance. In addition, a new privilege management mechanism is added, so users can control data access in the same way as with the MySQL privilege management system.
+ +## TiDB: + ++ Query optimizer + - Collect column/index statistics and use them in the query optimizer + - Optimize the correlated subquery + - Optimize the Cost Based Optimizer (CBO) framework + - Eliminate aggregation using unique key information + - Refactor the expression evaluation framework + - Convert Distinct to GroupBy + - Support the topn operation push-down ++ Support basic privilege management ++ Add lots of MySQL built-in functions ++ Improve the Alter Table statement and support the modification of the table name, default value, and comment ++ Support the Create Table Like statement ++ Support the Show Warnings statement ++ Support the Rename Table statement ++ Restrict the size of a single transaction to prevent large transactions from blocking the cluster ++ Automatically split data in the process of Load Data ++ Optimize the performance of the AddIndex and Delete statements ++ Support the "ANSI_QUOTES" sql_mode ++ Improve the monitoring system ++ Fix bugs ++ Fix the memory leak issue + +## PD: ++ Support location-aware replica scheduling ++ Conduct fast scheduling based on the number of Regions ++ `pd-ctl` supports more features + - Add or delete PD + - Obtain Region information with a Key + - Add or delete schedulers and operators + - Obtain cluster label information + +## TiKV: ++ Support Async Apply to improve the entire write performance ++ Use prefix seek to improve the read performance of Write CF ++ Use memory hint prefix to improve the insert performance of Raft CF ++ Optimize the single read transaction performance ++ Support more push-down expressions ++ Improve the monitoring system ++ Fix bugs diff --git a/v2.0/releases/rc3.md b/v2.0/releases/rc3.md new file mode 100755 index 0000000000000..103569ceddb6e --- /dev/null +++ b/v2.0/releases/rc3.md @@ -0,0 +1,61 @@ +--- +title: TiDB RC3 Release Notes +category: releases +--- + +# TiDB RC3 Release Notes + +On June 20, 2017, TiDB RC3 is released! This release is focused on MySQL compatibility, SQL
optimization, stability, and performance. + +## Highlight: + +- The privilege management is refined to enable users to manage data access privileges in the same way as in MySQL. +- DDL is accelerated. +- The load balancing policy and process are optimized for performance. +- TiDB-Ansible is open sourced. By using TiDB-Ansible, you can deploy, upgrade, start, and shut down a TiDB cluster with one click. + +## Detailed updates: + +## TiDB: + ++ The following features are added or improved in the SQL query optimizer: + - Support incremental statistics + - Support the `Merge Sort Join` operator + - Support the `Index Lookup Join` operator + - Support the `Optimizer Hint` syntax + - Optimize the memory consumption of the `Scan`, `Join`, `Aggregation` operators + - Optimize the Cost Based Optimizer (CBO) framework + - Refactor `Expression` ++ Support more complete privilege management ++ DDL acceleration ++ Support using the HTTP API to get the data distribution information of tables ++ Support using system variables to control the query concurrency ++ Add more MySQL built-in functions ++ Support using system variables to automatically split a big transaction into smaller ones to commit + +## Placement Driver (PD): + ++ Support gRPC ++ Provide the Disaster Recovery Toolkit ++ Use Garbage Collection to clear stale data automatically ++ Support more efficient data balancing ++ Support hot Region scheduling to enable load balancing and speed up data importing ++ Performance + - Accelerate getting the Client TSO + - Improve the efficiency of Region Heartbeat processing ++ Improve the `pd-ctl` functions + - Update the Replica configuration dynamically + - Get the Timestamp Oracle (TSO) + - Use ID to get the Region information + +## TiKV: + ++ Support gRPC ++ Support the Sorted String Table (SST) format snapshot to improve the load balancing speed of a cluster ++ Support using the Heap Profile to uncover memory leaks ++ Support Streaming SIMD Extensions (SSE) and speed up
the CRC32 calculation ++ Accelerate leader transfer for faster load balancing ++ Use Batch Apply to reduce CPU usage and improve the write performance ++ Support parallel Prewrite to improve the transaction write speed ++ Optimize the scheduling of the coprocessor thread pool to reduce the impact of big queries on point gets ++ The new Loader supports data importing at the table level, as well as splitting a big table into smaller logical blocks to import concurrently, to improve the data importing speed. diff --git a/v2.0/releases/rc4.md b/v2.0/releases/rc4.md new file mode 100755 index 0000000000000..cde179064aa4e --- /dev/null +++ b/v2.0/releases/rc4.md @@ -0,0 +1,56 @@ +--- +title: TiDB RC4 Release Notes +category: releases +--- + +# TiDB RC4 Release Notes + +On August 4, 2017, TiDB RC4 is released! This release is focused on MySQL compatibility, SQL optimization, stability, and performance. + +## Highlight: + ++ The write performance is improved significantly, and computing task scheduling supports prioritization to avoid the impact of OLAP on OLTP. ++ The optimizer is revised for more accurate query cost estimation and for an automatic, cost-based choice of the `Join` physical operator. ++ Many enhancements have been introduced to be more compatible with MySQL. ++ TiSpark is now released to better support OLAP scenarios. You can now use Spark to access the data in TiKV.
+ +## Detailed updates: + +### TiDB: + ++ The SQL query optimizer refactoring: + - Better support for TopN queries + - Support the automatic, cost-based choice of the `Join` physical operator + - Improved Projection Elimination ++ The schema version check is based on tables, to avoid the impact of DDL on ongoing transactions ++ Support `BatchIndexJoin` ++ Improve the `Explain` statement ++ Improve the `Index Scan` performance ++ Many enhancements have been introduced to be more compatible with MySQL ++ Support the JSON type and operations ++ Support the configuration of query priority and isolation level + +### Placement Driver (PD): + ++ Support using PD to set the TiKV location labels ++ Optimize the scheduler + - PD now initiates scheduling commands to TiKV. + - Accelerate the response speed of the Region heartbeat. + - Optimize the `balance` algorithm ++ Optimize data loading to speed up failover + +### TiKV: + ++ Support the configuration of query priority ++ Support the RC isolation level ++ Improve Jepsen test results and stability ++ Support Document Store ++ Coprocessor now supports more pushdown functions ++ Improve the performance and stability + +### TiSpark Beta Release: + ++ Implement predicate pushdown ++ Implement aggregation pushdown ++ Implement range pruning ++ Capable of running the full TPC-H query set, except for one query that needs view support diff --git a/v2.0/releases/rn.md b/v2.0/releases/rn.md new file mode 100755 index 0000000000000..66aca510dbe2a --- /dev/null +++ b/v2.0/releases/rn.md @@ -0,0 +1,36 @@ +--- +title: Release Notes +category: release +--- + +# TiDB Release Notes + + - [2.1 RC1](21rc1.md) + - [2.0.6](206.md) + - [2.0.5](205.md) + - [2.1 Beta](21beta.md) + - [2.0.4](204.md) + - [2.0.3](203.md) + - [2.0.2](202.md) + - [2.0.1](201.md) + - [2.0](2.0ga.md) + - [2.0 RC5](2rc5.md) + - [2.0 RC4](2rc4.md) + - [2.0 RC3](2rc3.md) + - [2.0 RC1](2rc1.md) + - [1.1 Beta](11beta.md) + - 
[1.0.8](108.md) + - [1.0.7](107.md) + - [1.1 Alpha](11alpha.md) + - [1.0.6](106.md) + - [1.0.5](105.md) + - [1.0.4](104.md) + - [1.0.3](103.md) + - [1.0.2](102.md) + - [1.0.1](101.md) + - [1.0](ga.md) + - [Pre-GA](prega.md) + - [RC4](rc4.md) + - [RC3](rc3.md) + - [RC2](rc2.md) + - [RC1](rc1.md) diff --git a/v2.0/scripts/build.sh b/v2.0/scripts/build.sh new file mode 100755 index 0000000000000..4ede96547ad43 --- /dev/null +++ b/v2.0/scripts/build.sh @@ -0,0 +1,60 @@ +#!/bin/bash + +set -e + +# Use current path for building and installing TiDB. +TIDB_PATH=`pwd` +echo "building TiDB components in $TIDB_PATH" + +# All the binaries are installed in the `bin` directory. +mkdir -p $TIDB_PATH/bin + +# Assume we install go in /usr/local/go +export PATH=$PATH:/usr/local/go/bin + +echo "checking if go is installed" +# Go is required +go version +# The output might be like: go version go1.6 darwin/amd64 + +echo "checking if rust is installed" +# Rust nightly is required +rustc -V +# The output might be like: rustc 1.12.0-nightly (7ad125c4e 2016-07-11) + +# Set the GOPATH correctly. +export GOPATH=$TIDB_PATH/deps/go + +# Build TiDB +echo "building TiDB..." +rm -rf $GOPATH/src/github.com/pingcap/tidb +git clone --depth=1 https://github.com/pingcap/tidb.git $GOPATH/src/github.com/pingcap/tidb +cd $GOPATH/src/github.com/pingcap/tidb + +make +cp -f ./bin/tidb-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiDB is built" + +# Build PD +echo "building PD..." +rm -rf $GOPATH/src/github.com/pingcap/pd +git clone --depth=1 https://github.com/pingcap/pd.git $GOPATH/src/github.com/pingcap/pd +cd $GOPATH/src/github.com/pingcap/pd + +make +cp -f ./bin/pd-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "PD is built" + +# Build TiKV +echo "building TiKV..." 
+rm -rf $TIDB_PATH/deps/tikv +git clone --depth=1 https://github.com/pingcap/tikv.git $TIDB_PATH/deps/tikv +cd $TIDB_PATH/deps/tikv + +make release + +cp -f ./bin/tikv-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiKV is built" diff --git a/v2.0/scripts/check_requirement.sh b/v2.0/scripts/check_requirement.sh new file mode 100755 index 0000000000000..5e159e0c2aab1 --- /dev/null +++ b/v2.0/scripts/check_requirement.sh @@ -0,0 +1,118 @@ +#!/bin/bash + +set -e + +echo "Checking requirements..." + +SUDO= +if which sudo &>/dev/null; then + SUDO=sudo +fi + +function get_linux_platform { + if [ -f /etc/redhat-release ]; then + # For CentOS or Red Hat, treat both as CentOS. + echo "CentOS" + elif [ -f /etc/lsb-release ]; then + DIST=`cat /etc/lsb-release | grep '^DISTRIB_ID' | awk -F= '{ print $2 }'` + echo "$DIST" + else + echo "Unknown" + fi +} + +function install_go { + echo "Install go ..." + case "$OSTYPE" in + linux*) + curl -L https://storage.googleapis.com/golang/go1.10.2.linux-amd64.tar.gz -o golang.tar.gz + ${SUDO} tar -C /usr/local -xzf golang.tar.gz + rm golang.tar.gz + ;; + + darwin*) + curl -L https://storage.googleapis.com/golang/go1.10.2.darwin-amd64.tar.gz -o golang.tar.gz + ${SUDO} tar -C /usr/local -xzf golang.tar.gz + rm golang.tar.gz + ;; + + *) + echo "unsupported $OSTYPE" + exit 1 + ;; + esac +} + +function install_gpp { + echo "Install g++ ..." + case "$OSTYPE" in + linux*) + dist=$(get_linux_platform) + case $dist in + Ubuntu) + ${SUDO} apt-get install -y g++ + ;; + CentOS) + ${SUDO} yum install -y gcc-c++ libstdc++-static + ;; + *) + echo "unsupported platform $dist, you may install g++ manually" + exit 1 + ;; + esac + ;; + + darwin*) + # refer to https://github.com/facebook/rocksdb/blob/master/INSTALL.md + xcode-select --install + brew update + brew tap homebrew/versions + brew install gcc48 --use-llvm + ;; + + *) + echo "unsupported $OSTYPE" + exit 1 + ;; + esac +} + +# Check rust +if which rustc &>/dev/null; then + if ! 
rustc --version | grep nightly &>/dev/null; then + printf "Please run the following command to upgrade Rust to nightly: \n\ +\t curl -sSf https://static.rust-lang.org/rustup.sh | sh -s -- --channel=nightly\n" + exit 1 + fi +else + echo "Install Rust ..." + ${SUDO} curl -sSf https://static.rust-lang.org/rustup.sh | sh -s -- --channel=nightly +fi + +# Check go +if which go &>/dev/null; then + # requires go >= 1.10 + GO_VER_1=`go version | awk 'match($0, /([0-9])+(\.[0-9]+)+/) { ver = substr($0, RSTART, RLENGTH); split(ver, n, "."); print n[1];}'` + GO_VER_2=`go version | awk 'match($0, /([0-9])+(\.[0-9]+)+/) { ver = substr($0, RSTART, RLENGTH); split(ver, n, "."); print n[2];}'` + if [[ (($GO_VER_1 -eq 1 && $GO_VER_2 -lt 10)) || (($GO_VER_1 -lt 1)) ]]; then + echo "Please upgrade Go to 1.10 or later." + exit 1 + fi +else + install_go +fi + +# Check g++ +if which g++ &>/dev/null; then + # Check the g++ version; RocksDB requires g++ 4.8 or later. + G_VER_1=`g++ -dumpversion | awk '{split($0, n, "."); print n[1];}'` + G_VER_2=`g++ -dumpversion | awk '{split($0, n, "."); print n[2];}'` + if [[ (($G_VER_1 -eq 4 && $G_VER_2 -lt 8)) || (($G_VER_1 -lt 4)) ]]; then + echo "Please upgrade g++ to 4.8 or later." 
+ exit 1 + fi +else + install_gpp +fi + +echo OK diff --git a/v2.0/scripts/generate_pdf.sh b/v2.0/scripts/generate_pdf.sh new file mode 100755 index 0000000000000..6e4caa8a092b2 --- /dev/null +++ b/v2.0/scripts/generate_pdf.sh @@ -0,0 +1,29 @@ +#!/bin/bash + +set -e +# tested with pandoc 1.19.1 + +MAINFONT="WenQuanYi Micro Hei" +MONOFONT="WenQuanYi Micro Hei Mono" + +# MAINFONT="Tsentsiu Sans HG" +# MONOFONT="Tsentsiu Sans Console HG" + +#_version_tag="$(date '+%Y%m%d').$(git rev-parse --short HEAD)" +_version_tag="$(date '+%Y%m%d')" + +# default version: `pandoc --latex-engine=xelatex doc.md -s -o output2.pdf` +# used to debug template setting errors + +pandoc -N --toc --smart --latex-engine=xelatex \ + --template=templates/template.tex \ + --columns=80 \ + --listings \ + -V title="TiDB Documentation" \ + -V author="PingCAP Inc." \ + -V date="${_version_tag}" \ + -V CJKmainfont="${MAINFONT}" \ + -V fontsize=12pt \ + -V geometry:margin=1in \ + -V include-after="\\input{templates/copyright.tex}" \ + doc.md -s -o output.pdf diff --git a/v2.0/scripts/merge_by_toc.py b/v2.0/scripts/merge_by_toc.py new file mode 100755 index 0000000000000..d7d9595e29744 --- /dev/null +++ b/v2.0/scripts/merge_by_toc.py @@ -0,0 +1,169 @@ +#!/usr/bin/env python3 +# coding: utf8 +# +# Generate all-in-one Markdown file for ``doc-cn`` +# Tip: Chinese file names are not supported +# If the TOC in readme.md references a .md file (or one of its sub-headings) multiple times, only the first occurrence is kept + +from __future__ import print_function, unicode_literals + +import re +import os + + +entry_file = "README.md" +followups = [] +in_toc = False +contents = [] + +hyper_link_pattern = re.compile(r'\[(.*?)\]\((.*?)(#.*?)?\)') +toc_line_pattern = re.compile(r'([\-\+]+)\s\[(.*?)\]\((.*?)(#.*?)?\)') +image_link_pattern = re.compile(r'!\[(.*?)\]\((.*?)\)') +level_pattern = re.compile(r'(\s*[\-\+]+)\s') +# match all headings +heading_patthern = re.compile(r'(^#+|\n#+)\s') + +# stage 1, parse toc +with open(entry_file) as fp: + level = 0 + current_level = "" + for line in fp: + if not in_toc and 
line.startswith("## "): + in_toc = True + print("in toc") + elif in_toc and line.startswith('## '): + in_toc = False + # yes, toc processing done + # contents.append(line[1:]) # skip 1 level TOC + break + elif in_toc and not line.startswith('#') and line.strip(): + ## get the level from the indentation length + print(line) + level_space_str = level_pattern.findall(line)[0][:-1] + level = len(level_space_str) // 2 + 1 ## integer division + + matches = toc_line_pattern.findall(line) + if matches: + for match in matches: + fpath = match[2] + if fpath.endswith('.md'): + key = ('FILE', level, fpath) + if key not in followups: + print(key) + followups.append(key) + elif fpath.startswith('http'): + ## remove list format character `- `, `+ ` + followups.append(('TOC', level, line.strip()[2:])) + else: + name = line.strip().split(None, 1)[-1] + key = ('TOC', level, name) + if key not in followups: + print(key) + followups.append(key) + + else: + pass + + # overview part in README.md + followups.insert(1, ("RAW", 0, fp.read())) + +for k in followups: + print(k) + +# stage 2, get file heading +file_link_name = {} +title_pattern = re.compile(r'(^#+)\s.*') +for tp, lv, f in followups: + if tp != 'FILE': + continue + tag = "" + try: + for line in open(f).readlines(): + if line.startswith("#"): + tag = line.strip() + break + except Exception as e: + print(e) + tag = "" + if tag.startswith('# '): + tag = tag[2:] + elif tag.startswith('## '): + tag = tag[3:] + file_link_name[f] = tag.lower().replace(' ', '-') + +print(file_link_name) + +def replace_link_wrap(chapter, name): + + # Note: only hash (anchor) matching is supported; headings with the same name in multiple documents will collide + # Supports links such as ./ddd.md, xxx.md, and xxx.md#xxx in chapter documents + def replace_link(match): + full = match.group(0) + link_name = match.group(1) + link = match.group(2) + frag = match.group(3) + if link.endswith('.md') or '.md#' in link: + if not frag: + relative_path = '' + if not link.startswith('.'): + relative_path = '../' + _rel_path = os.path.normpath(os.path.join(name, relative_path, 
link)) + for fpath in file_link_name: + if _rel_path == fpath: + frag = '#' + file_link_name[fpath] + return '[%s](%s)' % (link_name, frag) + elif link.endswith('.png'): + # special handling for images + fname = os.path.basename(link) + return '[%s](./media/%s)' % (link_name, fname) + else: + return full + + return hyper_link_pattern.sub(replace_link, chapter) + +def replace_heading_func(diff_level=0): + + def replace_heading(match): + if diff_level == 0: + return match.group(0) + else: + return '\n' + '#' * (match.group(0).count('#') + diff_level) + ' ' + + + return replace_heading + +def replace_img_link(match): + full = match.group(0) + link_name = match.group(1) + link = match.group(2) + + if link.endswith('.png'): + fname = os.path.basename(link) + return '![%s](./media/%s)' % (link_name, fname) + # leave non-PNG image links unchanged (re.sub must not receive None) + return full + +# stage 3, concat files +for type_, level, name in followups: + if type_ == 'TOC': + contents.append("\n{} {}\n".format('#' * level, name)) + elif type_ == 'RAW': + contents.append(name) + elif type_ == 'FILE': + try: + with open(name) as fp: + chapter = fp.read() + chapter = replace_link_wrap(chapter, name) + chapter = image_link_pattern.sub(replace_img_link, chapter) + + # fix heading level + diff_level = level - heading_patthern.findall(chapter)[0].count('#') + + print(name, type_, level, diff_level) + chapter = heading_patthern.sub(replace_heading_func(diff_level), chapter) + contents.append(chapter) + contents.append('') # add an empty line + except Exception as e: + print(e) + print("generate file error: ignore!") + +# stage 4, generate the final doc.md +with open("doc.md", 'w') as fp: + fp.write('\n'.join(contents)) diff --git a/v2.0/scripts/update.sh b/v2.0/scripts/update.sh new file mode 100755 index 0000000000000..f44f8f530fedb --- /dev/null +++ b/v2.0/scripts/update.sh @@ -0,0 +1,57 @@ +#!/bin/bash + +set -e + +# Use current path for building and installing TiDB. 
+TIDB_PATH=`pwd` +echo "updating and building TiDB components in $TIDB_PATH" + +# All the binaries are installed in the `bin` directory. +mkdir -p $TIDB_PATH/bin + +# Assume we install go in /usr/local/go +export PATH=$PATH:/usr/local/go/bin + +echo "checking if go is installed" +# Go is required +go version +# The output might be like: go version go1.6 darwin/amd64 + +echo "checking if rust is installed" +# Rust nightly is required +rustc -V +# The output might be like: rustc 1.12.0-nightly (7ad125c4e 2016-07-11) + +# Set the GOPATH correctly. +export GOPATH=$TIDB_PATH/deps/go + +# Build TiDB +echo "updating and building TiDB..." +cd $GOPATH/src/github.com/pingcap/tidb +git pull + +make +cp -f ./bin/tidb-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiDB is built" + +# Build PD +echo "updating and building PD..." +cd $GOPATH/src/github.com/pingcap/pd +git pull + +make +cp -f ./bin/pd-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "PD is built" + +# Build TiKV +echo "updating and building TiKV..." 
+cd $TIDB_PATH/deps/tikv +git pull + +make release + +cp -f ./bin/tikv-server $TIDB_PATH/bin +cd $TIDB_PATH +echo "TiKV is built" diff --git a/v2.0/scripts/upload.py b/v2.0/scripts/upload.py new file mode 100755 index 0000000000000..d43a1042c5fb3 --- /dev/null +++ b/v2.0/scripts/upload.py @@ -0,0 +1,39 @@ +#!/usr/bin/env python3 +#-*- coding:utf-8 -*- + +import sys +import os +from qiniu import Auth, put_file, etag, urlsafe_base64_encode +import qiniu.config + + +ACCESS_KEY = os.getenv('QINIU_ACCESS_KEY') +SECRET_KEY = os.getenv('QINIU_SECRET_KEY') +BUCKET_NAME = os.getenv('QINIU_BUCKET_NAME') + +assert(ACCESS_KEY and SECRET_KEY and BUCKET_NAME) + +def progress_handler(progress, total): + print("{}/{} {:.2f}".format(progress, total, progress/total*100)) + +# local_file: local file path +# remote_name: the file name to save as on Qiniu after uploading +def upload(local_file, remote_name, ttl=3600): + print(local_file, remote_name, ttl) + # Build the authentication object + q = Auth(ACCESS_KEY, SECRET_KEY) + + # Generate the upload token; options such as the expiration time can be specified + token = q.upload_token(BUCKET_NAME, remote_name, ttl) + + ret, info = put_file(token, remote_name, local_file, progress_handler=progress_handler) + print(info) + assert ret['key'] == remote_name + assert ret['hash'] == etag(local_file) + +if __name__ == "__main__": + local_file = sys.argv[1] + remote_name = sys.argv[2] + upload(local_file, remote_name) + + print("http://download.pingcap.org/{}".format(remote_name)) diff --git a/v2.0/sql/admin.md b/v2.0/sql/admin.md new file mode 100755 index 0000000000000..7057bf69529ac --- /dev/null +++ b/v2.0/sql/admin.md @@ -0,0 +1,135 @@ +--- +title: Database Administration Statements +summary: Use administration statements to manage the TiDB database. +category: user guide +--- + +# Database Administration Statements + +TiDB manages the database using a number of statements, including granting privileges, modifying system variables, and querying database status. + +## Privilege management + +See [Privilege Management](privilege.md). 
+ +## `SET` statement + +The `SET` statement has multiple functions and forms. + +### Assign values to variables + +```sql +SET variable_assignment [, variable_assignment] ... + +variable_assignment: + user_var_name = expr + | param_name = expr + | local_var_name = expr + | [GLOBAL | SESSION] + system_var_name = expr + | [@@global. | @@session. | @@] + system_var_name = expr +``` + +You can use the above syntax to assign values to variables in TiDB, including system variables and user-defined variables. All user-defined variables are session variables. The system variables set using `@@global.` or `GLOBAL` are global variables; otherwise, they are session variables. For more information, see [The System Variables](variable.md). + +### `SET CHARACTER` statement and `SET NAMES` + +```sql +SET {CHARACTER SET | CHARSET} + {'charset_name' | DEFAULT} + +SET NAMES {'charset_name' + [COLLATE 'collation_name'] | DEFAULT} +``` + +This statement sets three session system variables (`character_set_client`, `character_set_results` and `character_set_connection`) to the given character set. Currently, the handling of `character_set_connection` differs from MySQL, which sets it to the value of `character_set_database`. + +### Set the password + +```sql +SET PASSWORD [FOR user] = password_option + +password_option: { + 'auth_string' + | PASSWORD('auth_string') +} +``` + +This statement is used to set user passwords. For more information, see [Privilege Management](privilege.md). + +### Set the isolation level + +```sql +SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED; +``` + +This statement is used to set the transaction isolation level. For more information, see [Transaction Isolation Level](transaction.md#transaction-isolation-level). + +## `SHOW` statement + +TiDB supports some of the `SHOW` statements, which are used to view Database/Table/Column information and the internal status of the database.
Currently supported statements: + +```sql +# Supported and similar to MySQL +SHOW CHARACTER SET [like_or_where] +SHOW COLLATION [like_or_where] +SHOW [FULL] COLUMNS FROM tbl_name [FROM db_name] [like_or_where] +SHOW CREATE {DATABASE|SCHEMA} db_name +SHOW CREATE TABLE tbl_name +SHOW DATABASES [like_or_where] +SHOW GRANTS FOR user +SHOW INDEX FROM tbl_name [FROM db_name] +SHOW PRIVILEGES +SHOW [FULL] PROCESSLIST +SHOW [GLOBAL | SESSION] STATUS [like_or_where] +SHOW TABLE STATUS [FROM db_name] [like_or_where] +SHOW [FULL] TABLES [FROM db_name] [like_or_where] +SHOW [GLOBAL | SESSION] VARIABLES [like_or_where] +SHOW WARNINGS + +# Supported to improve compatibility but return null results +SHOW ENGINE engine_name {STATUS | MUTEX} +SHOW [STORAGE] ENGINES +SHOW PLUGINS +SHOW PROCEDURE STATUS [like_or_where] +SHOW TRIGGERS [FROM db_name] [like_or_where] +SHOW EVENTS +SHOW FUNCTION STATUS [like_or_where] + +# TiDB-specific statements for viewing statistics +SHOW STATS_META [like_or_where] +SHOW STATS_HISTOGRAMS [like_or_where] +SHOW STATS_BUCKETS [like_or_where] + + +like_or_where: + LIKE 'pattern' + | WHERE expr +``` + +> **Note**: +> +> - To view statistics using the `SHOW` statement, see [View Statistics](statistics.md#view-statistics). +> - For more information about the `SHOW` statement, see [SHOW Syntax in MySQL](https://dev.mysql.com/doc/refman/5.7/en/show.html). + +## `ADMIN` statement + +This statement is a TiDB extension syntax, used to view the status of TiDB. + +```sql +ADMIN SHOW DDL +ADMIN SHOW DDL JOBS +ADMIN SHOW DDL JOB QUERIES job_id [, job_id] ... +ADMIN CANCEL DDL JOBS job_id [, job_id] ... +``` + +- `ADMIN SHOW DDL`: To view the currently running DDL jobs. +- `ADMIN SHOW DDL JOBS`: To view all the results in the current DDL job queue (including tasks that are running and waiting to be run) and the last ten results in the completed DDL job queue. 
+
+- `ADMIN SHOW DDL JOB QUERIES job_id [, job_id] ...`: To view the original SQL statement of the DDL task corresponding to the `job_id`. This search covers only the running DDL jobs and the last ten results in the DDL history job queue.
+- `ADMIN CANCEL DDL JOBS job_id [, job_id] ...`: To cancel the currently running DDL jobs and return whether the corresponding jobs are successfully cancelled. If the operation fails to cancel the jobs, specific reasons are displayed.
+
+    > **Note**:
+    >
+    > - This operation can cancel multiple DDL jobs at the same time. You can get the ID of DDL jobs using the `ADMIN SHOW DDL JOBS` statement.
+    > - If the jobs you want to cancel are finished, the cancellation operation fails.
diff --git a/v2.0/sql/aggregate-group-by-functions.md b/v2.0/sql/aggregate-group-by-functions.md
new file mode 100755
index 0000000000000..ebc4244bd0fb1
--- /dev/null
+++ b/v2.0/sql/aggregate-group-by-functions.md
@@ -0,0 +1,95 @@
+---
+title: Aggregate (GROUP BY) Functions
+summary: Learn about the supported aggregate functions in TiDB.
+category: user guide
+---
+
+# Aggregate (GROUP BY) Functions
+
+This document describes the details of the supported aggregate functions in TiDB.
+
+## Aggregate (GROUP BY) function descriptions
+
+This section describes the supported MySQL group (aggregate) functions in TiDB.
+
+| Name | Description |
+|:--------------------------------------------------------------------------------------------------------------|:--------------------------------------------------|
+| [`COUNT()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_count) | Return a count of the number of rows returned |
+| [`COUNT(DISTINCT)`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_count-distinct) | Return the count of the number of different values |
+| [`SUM()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_sum) | Return the sum |
+| [`AVG()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_avg) | Return the average value of the argument |
+| [`MAX()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_max) | Return the maximum value |
+| [`MIN()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_min) | Return the minimum value |
+| [`GROUP_CONCAT()`](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat) | Return a concatenated string |
+
+- Unless otherwise stated, group functions ignore `NULL` values.
+- If you use a group function in a statement containing no `GROUP BY` clause, it is equivalent to grouping on all rows. For more information, see [TiDB handling of GROUP BY](#tidb-handling-of-group-by).
+
+## GROUP BY modifiers
+
+TiDB does not support any `GROUP BY` modifiers currently. Support is planned for a future release. For more information, see [#4250](https://github.com/pingcap/tidb/issues/4250).
+
+## TiDB handling of GROUP BY
+
+TiDB behaves as MySQL does with the SQL mode [`ONLY_FULL_GROUP_BY`](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_only_full_group_by) disabled: it permits the `SELECT` list, `HAVING` condition, or `ORDER BY` list to refer to non-aggregated columns even if the columns are not functionally dependent on `GROUP BY` columns.
+
+For example, this query is illegal in MySQL 5.7.5 with `ONLY_FULL_GROUP_BY` enabled because the non-aggregated column "b" in the `SELECT` list does not appear in the `GROUP BY`:
+
+```sql
+drop table if exists t;
+create table t(a bigint, b bigint, c bigint);
+insert into t values(1, 2, 3), (2, 2, 3), (3, 2, 3);
+select a, b, sum(c) from t group by a;
+```
+
+The preceding query is legal in TiDB. TiDB does not support the SQL mode `ONLY_FULL_GROUP_BY` currently. Support is planned for a future release. For more information, see [#4248](https://github.com/pingcap/tidb/issues/4248).
+
+Suppose that we execute the following query, expecting the results to be ordered by "c":
+```sql
+drop table if exists t;
+create table t(a bigint, b bigint, c bigint);
+insert into t values(1, 2, 1), (1, 2, 2), (1, 3, 1), (1, 3, 2);
+select distinct a, b from t order by c;
+```
+
+To order the result, duplicates must be eliminated first. But to do so, which row should we keep? This choice influences the retained value of "c", which in turn influences ordering and makes it arbitrary as well.
+
+In MySQL, a query that has `DISTINCT` and `ORDER BY` is rejected as invalid if any `ORDER BY` expression does not satisfy at least one of these conditions:
+- The expression is equal to one in the `SELECT` list
+- All columns referenced by the expression and belonging to the query's selected tables are elements of the `SELECT` list
+
+But in TiDB, the above query is legal. For more information, see [#4254](https://github.com/pingcap/tidb/issues/4254).
+
+Another TiDB extension to standard SQL permits references in the `HAVING` clause to aliased expressions in the `SELECT` list.
For example, the following query returns "name" values that occur only once in table "orders":
+```sql
+select name, count(name) from orders
+group by name
+having count(name) = 1;
+```
+
+The TiDB extension permits the use of an alias in the `HAVING` clause for the aggregated column:
+```sql
+select name, count(name) as c from orders
+group by name
+having c = 1;
+```
+
+Standard SQL permits only column expressions in `GROUP BY` clauses, so a statement such as this is invalid because "FLOOR(value/100)" is a noncolumn expression:
+```sql
+select id, floor(value/100)
+from tbl_name
+group by id, floor(value/100);
+```
+
+TiDB extends standard SQL to permit noncolumn expressions in `GROUP BY` clauses and considers the preceding statement valid.
+
+Standard SQL also does not permit aliases in `GROUP BY` clauses. TiDB extends standard SQL to permit aliases, so another way to write the query is as follows:
+```sql
+select id, floor(value/100) as val
+from tbl_name
+group by id, val;
+```
+
+## Detection of functional dependence
+
+TiDB does not support the SQL mode `ONLY_FULL_GROUP_BY` or detection of functional dependence. Support is planned for a future release. For more information, see [#4248](https://github.com/pingcap/tidb/issues/4248).
diff --git a/v2.0/sql/bit-functions-and-operators.md b/v2.0/sql/bit-functions-and-operators.md
new file mode 100755
index 0000000000000..ae91676fe0568
--- /dev/null
+++ b/v2.0/sql/bit-functions-and-operators.md
@@ -0,0 +1,21 @@
+---
+title: Bit Functions and Operators
+summary: Learn about the bit functions and operators.
+category: user guide
+---
+
+# Bit Functions and Operators
+
+In TiDB, the usage of bit functions and operators is similar to MySQL. See [Bit Functions and Operators](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html).
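+
+For example, a quick sanity check in a MySQL client connected to TiDB (the results follow MySQL semantics):
+
+```sql
+SELECT BIT_COUNT(7);           -- 7 is 111 in binary, so this returns 3
+SELECT 5 & 3, 5 | 3, 5 ^ 3;   -- returns 1, 7, 6
+SELECT 1 << 3, 16 >> 2;       -- returns 8, 4
+```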
+ +**Bit functions and operators** + +| Name | Description | +| :------| :------------- | +| [`BIT_COUNT()`](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#function_bit-count) | Return the number of bits that are set as 1 | +| [&](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-and) | Bitwise AND | +| [~](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-invert) | Bitwise inversion | +| [\|](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-or) | Bitwise OR | +| [^](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-xor) | Bitwise XOR | +| [<<](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_left-shift) | Left shift | +| [>>](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_right-shift) | Right shift | diff --git a/v2.0/sql/cast-functions-and-operators.md b/v2.0/sql/cast-functions-and-operators.md new file mode 100755 index 0000000000000..092da0003c587 --- /dev/null +++ b/v2.0/sql/cast-functions-and-operators.md @@ -0,0 +1,18 @@ +--- +title: Cast Functions and Operators +summary: Learn about the cast functions and operators. +category: user guide +--- + +# Cast Functions and Operators + + +| Name | Description | +| ---------------------------------------- | -------------------------------- | +| [`BINARY`](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#operator_binary) | Cast a string to a binary string | +| [`CAST()`](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#function_cast) | Cast a value as a certain type | +| [`CONVERT()`](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#function_convert) | Cast a value as a certain type | + +Cast functions and operators enable conversion of values from one data type to another. + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html). 
\ No newline at end of file diff --git a/v2.0/sql/character-set-configuration.md b/v2.0/sql/character-set-configuration.md new file mode 100755 index 0000000000000..069acd00ca94d --- /dev/null +++ b/v2.0/sql/character-set-configuration.md @@ -0,0 +1,11 @@ +--- +title: Character Set Configuration +summary: Learn about the character set configuration. +category: user guide +--- + +# Character Set Configuration + +Currently, TiDB does not support configuring the character set. The default character set is `utf8mb4`. + +For more information, see [Character Set Configuration in MySQL](https://dev.mysql.com/doc/refman/5.7/en/charset-configuration.html). \ No newline at end of file diff --git a/v2.0/sql/character-set-support.md b/v2.0/sql/character-set-support.md new file mode 100755 index 0000000000000..2263be55940bb --- /dev/null +++ b/v2.0/sql/character-set-support.md @@ -0,0 +1,201 @@ +--- +title: Character Set Support +summary: Learn about the supported character sets in TiDB. +category: user guide +--- + +# Character Set Support + +A character set is a set of symbols and encodings. A collation is a set of rules for comparing characters in a character set. + +Currently, TiDB supports the following character sets: + +```sql +mysql> SHOW CHARACTER SET; ++---------|---------------|-------------------|--------+ +| Charset | Description | Default collation | Maxlen | ++---------|---------------|-------------------|--------+ +| utf8 | UTF-8 Unicode | utf8_bin | 3 | +| utf8mb4 | UTF-8 Unicode | utf8mb4_bin | 4 | +| ascii | US ASCII | ascii_bin | 1 | +| latin1 | Latin1 | latin1_bin | 1 | +| binary | binary | binary | 1 | ++---------|---------------|-------------------|--------+ +5 rows in set (0.00 sec) +``` + +> **Note**: In TiDB, utf8 is treated as utf8mb4. + +Each character set has at least one collation. Most of the character sets have several collations. 
You can use the following statement to display the collations for a character set:
+
+```sql
+mysql> SHOW COLLATION WHERE Charset = 'latin1';
++-------------------|---------|------|---------|----------|---------+
+| Collation         | Charset | Id   | Default | Compiled | Sortlen |
++-------------------|---------|------|---------|----------|---------+
+| latin1_german1_ci | latin1  | 5    |         | Yes      | 1       |
+| latin1_swedish_ci | latin1  | 8    | Yes     | Yes      | 1       |
+| latin1_danish_ci  | latin1  | 15   |         | Yes      | 1       |
+| latin1_german2_ci | latin1  | 31   |         | Yes      | 1       |
+| latin1_bin        | latin1  | 47   |         | Yes      | 1       |
+| latin1_general_ci | latin1  | 48   |         | Yes      | 1       |
+| latin1_general_cs | latin1  | 49   |         | Yes      | 1       |
+| latin1_spanish_ci | latin1  | 94   |         | Yes      | 1       |
++-------------------|---------|------|---------|----------|---------+
+8 rows in set (0.00 sec)
+```
+
+The `latin1` collations have the following meanings:
+
+| Collation | Meaning |
+|:--------------------|:----------------------------------------------------|
+| `latin1_bin` | Binary according to `latin1` encoding |
+| `latin1_danish_ci` | Danish/Norwegian |
+| `latin1_general_ci` | Multilingual (Western European) |
+| `latin1_general_cs` | Multilingual (ISO Western European), case sensitive |
+| `latin1_german1_ci` | German DIN-1 (dictionary order) |
+| `latin1_german2_ci` | German DIN-2 (phone book order) |
+| `latin1_spanish_ci` | Modern Spanish |
+| `latin1_swedish_ci` | Swedish/Finnish |
+
+Each character set has a default collation. For example, the default collation for utf8 is `utf8_bin`.
+
+> **Note**: The collations in TiDB are case sensitive.
+
+## Collation naming conventions
+
+The collation names in TiDB follow these conventions:
+
+- The prefix of a collation is its corresponding character set, generally followed by one or more suffixes indicating other collation characteristics. For example, `utf8_general_ci` and `latin1_swedish_ci` are collations for the utf8 and latin1 character sets, respectively.
The `binary` character set has a single collation, also named `binary`, with no suffixes.
+- A language-specific collation includes a language name. For example, `utf8_turkish_ci` and `utf8_hungarian_ci` sort characters for the utf8 character set using the rules of Turkish and Hungarian, respectively.
+- Collation suffixes indicate whether a collation is case and accent sensitive, or binary. The following table shows the suffixes used to indicate these characteristics.
+
+  | Suffix | Meaning |
+  |:-------|:-------------------|
+  | \_ai | Accent insensitive |
+  | \_as | Accent sensitive |
+  | \_ci | Case insensitive |
+  | \_cs | Case sensitive |
+  | \_bin | Binary |
+
+> **Note**: For now, TiDB supports only some of the collations in the above table.
+
+## Database character set and collation
+
+Each database has a character set and a collation. You can use the `CREATE DATABASE` statement to specify the database character set and collation:
+
+```sql
+CREATE DATABASE db_name
+    [[DEFAULT] CHARACTER SET charset_name]
+    [[DEFAULT] COLLATE collation_name]
+```
+Where `DATABASE` can be replaced with `SCHEMA`.
+
+Different databases can use different character sets and collations.
Use the `character_set_database` and `collation_database` system variables to see the character set and collation of the current database:
+
+```sql
+mysql> create schema test1 character set utf8 COLLATE utf8_general_ci;
+Query OK, 0 rows affected (0.09 sec)
+
+mysql> use test1;
+Database changed
+mysql> SELECT @@character_set_database, @@collation_database;
++--------------------------|----------------------+
+| @@character_set_database | @@collation_database |
++--------------------------|----------------------+
+| utf8                     | utf8_general_ci      |
++--------------------------|----------------------+
+1 row in set (0.00 sec)
+
+mysql> create schema test2 character set latin1 COLLATE latin1_general_ci;
+Query OK, 0 rows affected (0.09 sec)
+
+mysql> use test2;
+Database changed
+mysql> SELECT @@character_set_database, @@collation_database;
++--------------------------|----------------------+
+| @@character_set_database | @@collation_database |
++--------------------------|----------------------+
+| latin1                   | latin1_general_ci    |
++--------------------------|----------------------+
+1 row in set (0.00 sec)
+```
+
+You can also see the two values in INFORMATION_SCHEMA:
+
+```sql
+SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
+FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'db_name';
+```
+
+## Table character set and collation
+
+You can use the following statements to specify the character set and collation for tables:
+
+```sql
+CREATE TABLE tbl_name (column_list)
+    [[DEFAULT] CHARACTER SET charset_name]
+    [COLLATE collation_name]
+
+ALTER TABLE tbl_name
+    [[DEFAULT] CHARACTER SET charset_name]
+    [COLLATE collation_name]
+```
+
+For example:
+
+```sql
+mysql> CREATE TABLE t1(a int) CHARACTER SET utf8 COLLATE utf8_general_ci;
+Query OK, 0 rows affected (0.08 sec)
+```
+The table character set and collation are used as the default values for column definitions if the column character set and collation are not specified in individual column definitions.
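+
+For example, a minimal sketch of how these defaults cascade (the table names here are illustrative):
+
+```sql
+-- Column `a` inherits the table defaults: utf8 / utf8_general_ci
+CREATE TABLE t2 (a VARCHAR(10)) CHARACTER SET utf8 COLLATE utf8_general_ci;
+
+-- An explicit column-level character set overrides the table default for `b`
+CREATE TABLE t3 (
+    a VARCHAR(10),                    -- uses the table default: latin1
+    b VARCHAR(10) CHARACTER SET utf8  -- explicitly utf8
+) CHARACTER SET latin1;
+```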
+
+## Column character set and collation
+
+See the following for the character set and collation syntax for columns:
+
+```sql
+col_name {CHAR | VARCHAR | TEXT} (col_length)
+    [CHARACTER SET charset_name]
+    [COLLATE collation_name]
+
+col_name {ENUM | SET} (val_list)
+    [CHARACTER SET charset_name]
+    [COLLATE collation_name]
+```
+
+## Connection character sets and collations
+
+- The server character set and collation are the values of the `character_set_server` and `collation_server` system variables.
+
+- The character set and collation of the default database are the values of the `character_set_database` and `collation_database` system variables.
+  You can use `character_set_connection` and `collation_connection` to specify the character set and collation for each connection.
+  The `character_set_client` variable sets the client character set. The `character_set_results` system variable indicates the character set in which the server returns query results to the client, including the metadata of the result.
+
+You can use the following statements to specify the character set and collation that are related to the client:
+
+- `SET NAMES 'charset_name' [COLLATE 'collation_name']`
+
+  `SET NAMES` indicates what character set the client will use to send SQL statements to the server. `SET NAMES utf8` indicates that all the requests from the client use utf8, as well as the results from the server.
+
+  The `SET NAMES 'charset_name'` statement is equivalent to the following statement combination:
+
+  ```sql
+  SET character_set_client = charset_name;
+  SET character_set_results = charset_name;
+  SET character_set_connection = charset_name;
+  ```
+
+  `COLLATE` is optional; if absent, the default collation of the `charset_name` is used.
+
+- `SET CHARACTER SET 'charset_name'`
+
+  Similar to `SET NAMES`, the `SET CHARACTER SET 'charset_name'` statement is equivalent to the following statement combination:
+
+  ```sql
+  SET character_set_client = charset_name;
+  SET character_set_results = charset_name;
+  SET collation_connection = @@collation_database;
+  ```
+
+For more information, see [Connection Character Sets and Collations in MySQL](https://dev.mysql.com/doc/refman/5.7/en/charset-connection.html).
diff --git a/v2.0/sql/comment-syntax.md b/v2.0/sql/comment-syntax.md
new file mode 100755
index 0000000000000..0471ad459559f
--- /dev/null
+++ b/v2.0/sql/comment-syntax.md
@@ -0,0 +1,102 @@
+---
+title: Comment Syntax
+summary: Learn about the three comment styles in TiDB.
+category: user guide
+---
+
+# Comment Syntax
+
+TiDB supports three comment styles:
+
+- Use `#` to comment a line.
+- Use `--` to comment a line. This style requires at least one whitespace after `--`.
+- Use `/* */` to comment a block or multiple lines.
+
+Example:
+
+```
+mysql> SELECT 1+1; # This comment continues to the end of line
++------+
+| 1+1  |
++------+
+|    2 |
++------+
+1 row in set (0.00 sec)
+
+mysql> SELECT 1+1; -- This comment continues to the end of line
++------+
+| 1+1  |
++------+
+|    2 |
++------+
+1 row in set (0.00 sec)
+
+mysql> SELECT 1 /* this is an in-line comment */ + 1;
++--------+
+| 1 + 1  |
++--------+
+|      2 |
++--------+
+1 row in set (0.01 sec)
+
+mysql> SELECT 1+
+    -> /*
+   /*> this is a
+   /*> multiple-line comment
+   /*> */
+    -> 1;
++-------+
+| 1+
+
+1 |
++-------+
+|     2 |
++-------+
+1 row in set (0.00 sec)
+
+mysql> SELECT 1+1--1;
++--------+
+| 1+1--1 |
++--------+
+|      3 |
++--------+
+1 row in set (0.01 sec)
+```
+
+Similar to MySQL, TiDB supports a variant of C comment style:
+
+```
+/*! Specific code */
+```
+
+In this comment style, TiDB runs the statements in the comment. This syntax makes the SQL statements in the comment ignored by other databases but run in TiDB.
+
+For example:
+
+```
+SELECT /*! STRAIGHT_JOIN */ col1 FROM table1,table2 WHERE ...
+```
+
+In TiDB, you can also use another version:
+
+```
+SELECT STRAIGHT_JOIN col1 FROM table1,table2 WHERE ...
+```
+
+If the server version number is specified in the comment, for example, `/*!50110 KEY_BLOCK_SIZE=1024 */`, in MySQL it means that the contents of this comment are processed only when the MySQL version is 5.1.10 or higher. But in TiDB, the version number does not work and all contents in the comment are processed.
+
+Another type of comment is treated specially as an optimizer hint:
+
+```
+SELECT /*+ hint */ FROM ...;
+```
+
+Because hints are embedded in comments like `/*+ xxx */`, the MySQL client clears such comments by default in versions earlier than 5.7.7. To use hints in those earlier versions, add the `--comments` option when you start the client. For example:
+
+```
+mysql -h 127.0.0.1 -P 4000 -uroot --comments
+```
+
+For details about the optimizer hints that TiDB supports, see [Optimizer hint](tidb-specific.md#optimizer-hint).
+
+For more information, see [Comment Syntax](https://dev.mysql.com/doc/refman/5.7/en/comments.html).
diff --git a/v2.0/sql/connection-and-APIs.md b/v2.0/sql/connection-and-APIs.md
new file mode 100755
index 0000000000000..2e147a18b3dd6
--- /dev/null
+++ b/v2.0/sql/connection-and-APIs.md
@@ -0,0 +1,96 @@
+---
+title: Connectors and APIs
+summary: Learn about the connectors and APIs.
+category: user guide
+---
+
+# Connectors and APIs
+
+Database connectors provide connectivity to the TiDB server for client programs. APIs provide low-level access to the MySQL protocol and MySQL resources. Both the connectors and the APIs enable you to connect to TiDB and execute MySQL statements from another language or environment, including ODBC, Java (JDBC), Perl, Python, PHP, Ruby and C.
+
+TiDB is compatible with all Connectors and APIs of MySQL (5.6, 5.7), including:
+
+- [MySQL Connector/C](https://dev.mysql.com/doc/refman/5.7/en/connector-c-info.html)
+- [MySQL Connector/C++](https://dev.mysql.com/doc/refman/5.7/en/connector-cpp-info.html)
+- [MySQL Connector/J](https://dev.mysql.com/doc/refman/5.7/en/connector-j-info.html)
+- [MySQL Connector/Net](https://dev.mysql.com/doc/refman/5.7/en/connector-net-info.html)
+- [MySQL Connector/ODBC](https://dev.mysql.com/doc/refman/5.7/en/connector-odbc-info.html)
+- [MySQL Connector/Python](https://dev.mysql.com/doc/refman/5.7/en/connector-python-info.html)
+- [MySQL C API](https://dev.mysql.com/doc/refman/5.7/en/c-api.html)
+- [MySQL PHP API](https://dev.mysql.com/doc/refman/5.7/en/apis-php-info.html)
+- [MySQL Perl API](https://dev.mysql.com/doc/refman/5.7/en/apis-perl.html)
+- [MySQL Python API](https://dev.mysql.com/doc/refman/5.7/en/apis-python.html)
+- [MySQL Ruby APIs](https://dev.mysql.com/doc/refman/5.7/en/apis-ruby.html)
+- [MySQL Tcl API](https://dev.mysql.com/doc/refman/5.7/en/apis-tcl.html)
+- [MySQL Eiffel Wrapper](https://dev.mysql.com/doc/refman/5.7/en/apis-eiffel.html)
+- [MySQL Go API](https://github.com/go-sql-driver/mysql)
+
+## Connect to TiDB using MySQL Connectors
+
+Oracle develops the following connectors, and TiDB is compatible with all of them:
+
+- [MySQL Connector/C](https://dev.mysql.com/doc/refman/5.7/en/connector-c-info.html): a standalone replacement for `libmysqlclient`, to be used for C applications
+- [MySQL Connector/C++](https://dev.mysql.com/doc/refman/5.7/en/connector-cpp-info.html): to enable C++ applications to connect to MySQL
+- [MySQL Connector/J](https://dev.mysql.com/doc/refman/5.7/en/connector-j-info.html): to enable Java applications to connect to MySQL using the standard JDBC API
+- [MySQL Connector/Net](https://dev.mysql.com/doc/refman/5.7/en/connector-net-info.html): to enable .Net applications to connect to MySQL; [MySQL for Visual Studio](https://dev.mysql.com/doc/visual-studio/en/) uses this; supports Microsoft Visual Studio 2012, 2013, 2015 and 2017 versions
+- [MySQL Connector/ODBC](https://dev.mysql.com/doc/refman/5.7/en/connector-odbc-info.html): the standard ODBC API; supports Windows, Unix, and OS X platforms
+- [MySQL Connector/Python](https://dev.mysql.com/doc/refman/5.7/en/connector-python-info.html): to enable Python applications to connect to MySQL, compliant with the [Python DB API version 2.0](http://www.python.org/dev/peps/pep-0249/)
+
+## Connect to TiDB using MySQL C API
+
+If you use C language programs to connect to TiDB, you can link to `libmysqlclient` directly and use the MySQL [C API](https://dev.mysql.com/doc/refman/5.7/en/c-api.html). This is one of the major connection methods using the C language, widely used by various clients and APIs, including Connector/C.
+
+## Connect to TiDB using third-party MySQL APIs
+
+The third-party APIs are not developed by Oracle. The following table lists the commonly used third-party APIs:
+
+| Environment | API | Type | Notes |
+| -------------- | ---------------------------------------- | -------------------------------- | ---------------------------------------- |
+| Ada | GNU Ada MySQL Bindings | `libmysqlclient` | See [MySQL Bindings for GNU Ada](http://gnade.sourceforge.net/) |
+| C | C API | `libmysqlclient` | See [Section 27.8, “MySQL C API”](https://dev.mysql.com/doc/refman/5.7/en/c-api.html) |
+| C | Connector/C | Replacement for `libmysqlclient` | See [MySQL Connector/C Developer Guide](https://dev.mysql.com/doc/connector-c/en/) |
+| C++ | Connector/C++ | `libmysqlclient` | See [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) |
+| | MySQL++ | `libmysqlclient` | See [MySQL++ Web site](http://tangentsoft.net/mysql++/doc/) |
+| | MySQL wrapped | `libmysqlclient` | See [MySQL wrapped](http://www.alhem.net/project/mysql/) |
+| Go | go-sql-driver | Native Driver | See [MySQL Go
API](https://github.com/go-sql-driver/mysql) |
+| Cocoa | MySQL-Cocoa | `libmysqlclient` | Compatible with the Objective-C Cocoa environment. See |
+| D | MySQL for D | `libmysqlclient` | See [MySQL for D](http://www.steinmole.de/d/) |
+| Eiffel | Eiffel MySQL | `libmysqlclient` | See [Section 27.14, “MySQL Eiffel Wrapper”](https://dev.mysql.com/doc/refman/5.7/en/apis-eiffel.html) |
+| Erlang | `erlang-mysql-driver` | `libmysqlclient` | See [`erlang-mysql-driver`](http://code.google.com/p/erlang-mysql-driver/) |
+| Haskell | Haskell MySQL Bindings | Native Driver | See [Brian O'Sullivan's pure Haskell MySQL bindings](http://www.serpentine.com/blog/software/mysql/) |
+| | `hsql-mysql` | `libmysqlclient` | See [MySQL driver for Haskell](http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hsql-mysql-1.7) |
+| Java/JDBC | Connector/J | Native Driver | See [MySQL Connector/J 5.1 Developer Guide](https://dev.mysql.com/doc/connector-j/5.1/en/) |
+| Kaya | MyDB | `libmysqlclient` | See [MyDB](http://kayalang.org/library/latest/MyDB) |
+| Lua | LuaSQL | `libmysqlclient` | See [LuaSQL](http://keplerproject.github.io/luasql/doc/us/) |
+| .NET/Mono | Connector/Net | Native Driver | See [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) |
+| Objective Caml | Objective Caml MySQL Bindings | `libmysqlclient` | See [MySQL Bindings for Objective Caml](http://raevnos.pennmush.org/code/ocaml-mysql/) |
+| Octave | Database bindings for GNU Octave | `libmysqlclient` | See [Database bindings for GNU Octave](http://octave.sourceforge.net/database/index.html) |
+| ODBC | Connector/ODBC | `libmysqlclient` | See [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) |
+| Perl | `DBI`/`DBD::mysql` | `libmysqlclient` | See [Section 27.10, “MySQL Perl API”](https://dev.mysql.com/doc/refman/5.7/en/apis-perl.html) |
+| | `Net::MySQL` | Native Driver | See [`Net::MySQL`](http://search.cpan.org/dist/Net-MySQL/MySQL.pm) at CPAN |
+| PHP | `mysql`, `ext/mysql` interface (deprecated) | `libmysqlclient` | See [Original MySQL API](https://dev.mysql.com/doc/apis-php/en/apis-php-mysql.html) |
+| | `mysqli`, `ext/mysqli` interface | `libmysqlclient` | See [MySQL Improved Extension](https://dev.mysql.com/doc/apis-php/en/apis-php-mysqli.html) |
+| | `PDO_MYSQL` | `libmysqlclient` | See [MySQL Functions (PDO_MYSQL)](https://dev.mysql.com/doc/apis-php/en/apis-php-pdo-mysql.html) |
+| | PDO mysqlnd | Native Driver | |
+| Python | Connector/Python | Native Driver | See [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) |
+| Python | Connector/Python C Extension | `libmysqlclient` | See [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) |
+| | MySQLdb | `libmysqlclient` | See [Section 27.11, “MySQL Python API”](https://dev.mysql.com/doc/refman/5.7/en/apis-python.html) |
+| Ruby | MySQL/Ruby | `libmysqlclient` | Uses `libmysqlclient`. See [Section 27.12.1, “The MySQL/Ruby API”](https://dev.mysql.com/doc/refman/5.7/en/apis-ruby-mysqlruby.html) |
+| | Ruby/MySQL | Native Driver | See [Section 27.12.2, “The Ruby/MySQL API”](https://dev.mysql.com/doc/refman/5.7/en/apis-ruby-rubymysql.html) |
+| Scheme | `Myscsh` | `libmysqlclient` | See [`Myscsh`](https://github.com/aehrisch/myscsh) |
+| SPL | `sql_mysql` | `libmysqlclient` | See [`sql_mysql` for SPL](http://www.clifford.at/spl/spldoc/sql_mysql.html) |
+| Tcl | MySQLtcl | `libmysqlclient` | See [Section 27.13, “MySQL Tcl API”](https://dev.mysql.com/doc/refman/5.7/en/apis-tcl.html) |
+
+## Connector versions supported by TiDB
+
+| Connector | Connector Version |
+| ---------------- | ---------------------------- |
+| Connector/C | 6.1.0 GA |
+| Connector/C++ | 1.0.5 GA |
+| Connector/J | 5.1.8 |
+| Connector/Net | 6.9.9 GA |
+| Connector/Net | 6.8.8 GA |
+| Connector/ODBC | 5.1 |
+| Connector/ODBC | 3.51 (Unicode not supported) |
+| Connector/Python | 2.0 |
+| Connector/Python | 1.2 |
diff --git a/v2.0/sql/control-flow-functions.md b/v2.0/sql/control-flow-functions.md new file mode 100755 index 0000000000000..9a27ece2977e8 --- /dev/null +++ b/v2.0/sql/control-flow-functions.md @@ -0,0 +1,15 @@ +--- +title: Control Flow Functions +summary: Learn about the Control Flow functions. +category: user guide +--- + +# Control Flow Functions + +| Name | Description | +|:--------------------------------------------------------------------------------------------------|:----------------------------------| +| [`CASE`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case) | Case operator | +| [`IF()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_if) | If/else construct | +| [`IFNULL()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_ifnull) | Null if/else construct | +| [`NULLIF()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_nullif) | Return NULL if expr1 = expr2 | + diff --git a/v2.0/sql/datatype.md b/v2.0/sql/datatype.md new file mode 100755 index 0000000000000..4b0a6c1a109af --- /dev/null +++ b/v2.0/sql/datatype.md @@ -0,0 +1,333 @@ +--- +title: TiDB Data Type +summary: Learn about the data types supported in TiDB. +category: user guide +--- + +# TiDB Data Type + +TiDB supports all the data types in MySQL except the Spatial type, including numeric type, string type, date & time type, and JSON type. + +The definition of the data type is: `T(M[, D])`. In this format: + +- `T` indicates the specific data type. +- `M` indicates the maximum display width for integer types. For floating-point and fixed-point types, `M` is the total number of digits that can be stored (the precision). For string types, `M` is the maximum length. The maximum permissible value of M depends on the data type. +- `D` applies to floating-point and fixed-point types and indicates the number of digits following the decimal point (the scale). 
+- `fsp` applies to the TIME, DATETIME, and TIMESTAMP types and represents the fractional seconds precision. The `fsp` value, if given, must be in the range 0 to 6. A value of 0 signifies that there is no fractional part. If omitted, the default precision is 0. + +## Numeric types + +### Overview + +TiDB supports all the MySQL numeric types, including: + ++ Integer Types (Exact Value) ++ Floating-Point Types (Approximate Value) ++ Fixed-Point Types (Exact Value) + +### Integer types (exact value) + +TiDB supports all the MySQL integer types, including INTEGER/INT, TINYINT, SMALLINT, MEDIUMINT, and BIGINT. For more information, see [Numeric Type Overview in MySQL](https://dev.mysql.com/doc/refman/5.7/en/numeric-type-overview.html). + +#### Type definition + +Syntax: + +```sql +BIT[(M)] +> The BIT data type. A type of BIT(M) enables storage of M-bit values. M can range from 1 to 64. + +TINYINT[(M)] [UNSIGNED] [ZEROFILL] +> The TINYINT data type. The value range for signed: [-128, 127] and the range for unsigned is [0, 255]. + +BOOL, BOOLEAN +> BOOLEAN and is equivalent to TINYINT(1). If the value is "0", it is considered as False; otherwise, it is considered True. In TiDB, True is "1" and False is "0". + + +SMALLINT[(M)] [UNSIGNED] [ZEROFILL] +> SMALLINT. The signed range is: [-32768, 32767], and the unsigned range is [0, 65535]. + +MEDIUMINT[(M)] [UNSIGNED] [ZEROFILL] +> MEDIUMINT. The signed range is: [-8388608, 8388607], and the unsigned range is [0, 16777215]. + +INT[(M)] [UNSIGNED] [ZEROFILL] +> INT. The signed range is: [-2147483648, 2147483647], and the unsigned range is [0, 4294967295]. + +INTEGER[(M)] [UNSIGNED] [ZEROFILL] +> Same as INT. + +BIGINT[(M)] [UNSIGNED] [ZEROFILL] +> BIGINT. The signed range is: [-9223372036854775808, 9223372036854775807], and the unsigned range is [0, 18446744073709551615]. + +``` +The meaning of the fields: + +| Syntax Element | Description | +| -------- | ------------------------------- | +| M | the length of the type. 
Optional. | +| UNSIGNED | UNSIGNED. If omitted, it is SIGNED. | +| ZEROFILL | If you specify ZEROFILL for a numeric column, TiDB automatically adds the UNSIGNED attribute to the column. | + +#### Storage and range + +The following table shows the storage requirements and the minimum/maximum values of each data type: + +| Type | Storage Required (bytes) | Minimum Value (Signed/Unsigned) | Maximum Value (Signed/Unsigned) | +| ----------- |----------|-----------------------| --------------------- | +| `TINYINT` | 1 | -128 / 0 | 127 / 255 | +| `SMALLINT` | 2 | -32768 / 0 | 32767 / 65535 | +| `MEDIUMINT` | 3 | -8388608 / 0 | 8388607 / 16777215 | +| `INT` | 4 | -2147483648 / 0 | 2147483647 / 4294967295 | +| `BIGINT` | 8 | -9223372036854775808 / 0 | 9223372036854775807 / 18446744073709551615 | + +### Floating-point types (approximate value) + +TiDB supports all the MySQL floating-point types, including FLOAT and DOUBLE. For more information, see [Floating-Point Types (Approximate Value) - FLOAT, DOUBLE in MySQL](https://dev.mysql.com/doc/refman/5.7/en/floating-point-types.html). + +#### Type definition + +Syntax: + +```sql +FLOAT[(M,D)] [UNSIGNED] [ZEROFILL] +> A small (single-precision) floating-point number. Permissible values are -3.402823466E+38 to -1.175494351E-38, 0, and 1.175494351E-38 to 3.402823466E+38. These are the theoretical limits, based on the IEEE standard. The actual range might be slightly smaller depending on your hardware or operating system. + +DOUBLE[(M,D)] [UNSIGNED] [ZEROFILL] +> A normal-size (double-precision) floating-point number. Permissible values are -1.7976931348623157E+308 to -2.2250738585072014E-308, 0, and 2.2250738585072014E-308 to 1.7976931348623157E+308. These are the theoretical limits, based on the IEEE standard. The actual range might be slightly smaller depending on your hardware or operating system. + +DOUBLE PRECISION [(M,D)] [UNSIGNED] [ZEROFILL], REAL[(M,D)] [UNSIGNED] [ZEROFILL] +> Synonym for DOUBLE.
+ +FLOAT(p) [UNSIGNED] [ZEROFILL] +> A floating-point number. p represents the precision in bits, but TiDB uses this value only to determine whether to use FLOAT or DOUBLE for the resulting data type. If p is from 0 to 24, the data type becomes FLOAT with no M or D values. If p is from 25 to 53, the data type becomes DOUBLE with no M or D values. The range of the resulting column is the same as for the single-precision FLOAT or double-precision DOUBLE data types described earlier in this section. +``` + +The meaning of the fields: + +| Syntax Element | Description | +| -------- | ------------------------------- | +| M | the total number of digits | +| D | the number of digits following the decimal point | +| UNSIGNED | UNSIGNED. If omitted, it is SIGNED. | +| ZEROFILL | If you specify ZEROFILL for a numeric column, TiDB automatically adds the UNSIGNED attribute to the column. | + +#### Storage + +The following table shows the storage requirements: + +| Data Type | Storage Required (bytes) | +| ----------- |----------| +| `FLOAT` | 4 | +| `FLOAT(p)` | If 0 <= p <= 24, it is 4; if 25 <= p <= 53, it is 8 | +| `DOUBLE` | 8 | + +### Fixed-point types (exact value) + +TiDB supports all the MySQL fixed-point types, including DECIMAL and NUMERIC. For more information, see [Fixed-Point Types (Exact Value) - DECIMAL, NUMERIC in MySQL](https://dev.mysql.com/doc/refman/5.7/en/fixed-point-types.html). + +#### Type definition + +Syntax: + +```sql +DECIMAL[(M[,D])] [UNSIGNED] [ZEROFILL] +> A packed “exact” fixed-point number. M is the total number of digits (the precision), and D is the number of digits after the decimal point (the scale). The decimal point and (for negative numbers) the - sign are not counted in M. If D is 0, values have no decimal point or fractional part. The maximum number of digits (M) for DECIMAL is 65. The maximum number of supported decimals (D) is 30. If D is omitted, the default is 0. If M is omitted, the default is 10.
+ +NUMERIC[(M[,D])] [UNSIGNED] [ZEROFILL] +> Synonym for DECIMAL. +``` + +The meaning of the fields: + +| Syntax Element | Description | +| -------- | ------------------------------- | +| M | the total number of digits | +| D | the number of digits after the decimal point | +| UNSIGNED | UNSIGNED. If omitted, it is SIGNED. | +| ZEROFILL | If you specify ZEROFILL for a numeric column, TiDB automatically adds the UNSIGNED attribute to the column. | + +## Date and time types + +### Overview + +TiDB supports all the MySQL date and time types, including DATE, DATETIME, TIMESTAMP, TIME, and YEAR. For more information, see [Date and Time Types in MySQL](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-types.html). + +#### Type definition + +Syntax: + +```sql +DATE +> A date. The supported range is '1000-01-01' to '9999-12-31'. TiDB displays DATE values in 'YYYY-MM-DD' format. + +DATETIME[(fsp)] +> A date and time combination. The supported range is '1000-01-01 00:00:00.000000' to '9999-12-31 23:59:59.999999'. TiDB displays DATETIME values in 'YYYY-MM-DD HH:MM:SS[.fraction]' format, but permits assignment of values to DATETIME columns using either strings or numbers. +An optional fsp value in the range from 0 to 6 may be given to specify fractional seconds precision. If omitted, the default precision is 0. + +TIMESTAMP[(fsp)] +> A timestamp. The range is '1970-01-01 00:00:01.000000' to '2038-01-19 03:14:07.999999'. +An optional fsp value in the range from 0 to 6 may be given to specify fractional seconds precision. If omitted, the default precision is 0. + +TIME[(fsp)] +> A time. The range is '-838:59:59.000000' to '838:59:59.000000'. TiDB displays TIME values in 'HH:MM:SS[.fraction]' format. +An optional fsp value in the range from 0 to 6 may be given to specify fractional seconds precision. If omitted, the default precision is 0.
+ +YEAR[(2|4)] +> A year in two-digit or four-digit format. The default is the four-digit format. In four-digit format, values display as 1901 to 2155, and 0000. In two-digit format, values display as 70 to 69, representing years from 1970 to 2069. +``` + +## String types + +### Overview + +TiDB supports all the MySQL string types, including CHAR, VARCHAR, BINARY, VARBINARY, BLOB, TEXT, ENUM, and SET. For more information, see [String Types in MySQL](https://dev.mysql.com/doc/refman/5.7/en/string-types.html). + +#### Type definition + +Syntax: + +```sql +[NATIONAL] CHAR[(M)] [CHARACTER SET charset_name] [COLLATE collation_name] +> A fixed-length string. It is right-padded with spaces to the specified length when stored. M represents the column length in characters. The range of M is 0 to 255. + +[NATIONAL] VARCHAR(M) [CHARACTER SET charset_name] [COLLATE collation_name] +> A variable-length string. M represents the maximum column length in characters. The range of M is 0 to 65,535. The effective maximum length of a VARCHAR is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used. + +BINARY(M) +> The BINARY type is similar to the CHAR type, but stores binary byte strings rather than nonbinary character strings. + +VARBINARY(M) +> The VARBINARY type is similar to the VARCHAR type, but stores binary byte strings rather than nonbinary character strings. + +BLOB[(M)] +> A BLOB column with a maximum length of 65,535 bytes. M represents the maximum column length. + +TINYBLOB +> A BLOB column with a maximum length of 255 bytes. + +MEDIUMBLOB +> A BLOB column with a maximum length of 16,777,215 bytes. + +LONGBLOB +> A BLOB column with a maximum length of 4,294,967,295 bytes. + +TEXT[(M)] [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column. M represents the maximum column length ranging from 0 to 65,535. The maximum length of TEXT is based on the size of the longest row and the character set.
+ +TINYTEXT[(M)] [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column with a maximum length of 255 characters. + +MEDIUMTEXT [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column with a maximum length of 16,777,215 characters. + +LONGTEXT [CHARACTER SET charset_name] [COLLATE collation_name] +> A TEXT column with a maximum length of 4,294,967,295 characters. + +ENUM('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] +> An enumeration. A string object that can have only one value, chosen from the list of values 'value1', 'value2', ..., NULL, or the special '' error value. + +SET('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] +> A set. A string object that can have zero or more values, each of which must be chosen from the list of values 'value1', 'value2', ... +``` + +## JSON types + +TiDB supports the JSON (JavaScript Object Notation) data type. The JSON type can store semi-structured data like JSON documents. The JSON data type provides the following advantages over storing JSON-format strings in a string column: + +- It uses a binary format for serialization. The internal format permits quick read access to JSON document elements. +- It provides automatic validation of the JSON documents stored in JSON columns. Only valid documents can be stored. + +JSON columns, like columns of other binary types, are not indexed directly, but you can index the fields in the JSON document in the form of generated columns: + +```sql +CREATE TABLE city ( +id INT PRIMARY KEY, +detail JSON, +population INT AS (JSON_EXTRACT(detail, '$.population')) +); +INSERT INTO city VALUES (1, '{"name": "Beijing", "population": 100}'); +SELECT id FROM city WHERE population >= 100; +``` + +For more information, see [JSON Functions and Generated Column](json-functions-generated-column.md).
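Building on the `city` example above, the generated column can also be given a secondary index so that the filter on `population` becomes an index lookup instead of a scan over every JSON document. This is a sketch; the index name `idx_population` is illustrative:

```sql
-- Add a secondary index on the generated column (index name is illustrative).
ALTER TABLE city ADD INDEX idx_population (population);

-- The same query as above can now use idx_population for the predicate.
SELECT id FROM city WHERE population >= 100;
```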
+ +## The ENUM data type + +An ENUM is a string object with a value chosen from a list of permitted values that are enumerated explicitly in the column specification when the table is created. The syntax is: + +```sql +ENUM('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] + +# For example: +ENUM('apple', 'orange', 'pear') +``` + +Values of the ENUM data type are stored as numbers. Each value is converted to a number according to the definition order. In the previous example, each string is mapped to a number: + +| Value | Number | +| ---- | ---- | +| NULL | NULL | +| '' | 0 | +| 'apple' | 1 | +| 'orange' | 2 | +| 'pear' | 3 | + +For more information, see [the ENUM type in MySQL](https://dev.mysql.com/doc/refman/5.7/en/enum.html). + +## The SET type + +A SET is a string object that can have zero or more values, each of which must be chosen from a list of permitted values specified when the table is created. The syntax is: + +```sql +SET('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name] + +# For example: +SET('1', '2') NOT NULL +``` + +In the example, any of the following values can be valid: + +``` +'' +'1' +'2' +'1,2' +``` + +In TiDB, the values of the SET type are internally converted to Int64. The existence of each element is represented by a binary bit: 0 or 1. For a column specified as `SET('a','b','c','d')`, the members have the following decimal and binary values. + +| Member | Decimal Value | Binary Value | +| ---- | ---- | ------ | +| 'a' | 1 | 0001 | +| 'b' | 2 | 0010 | +| 'c' | 4 | 0100 | +| 'd' | 8 | 1000 | + +In this case, the element `('a', 'c')` is 0101 in binary. + +For more information, see [the SET type in MySQL](https://dev.mysql.com/doc/refman/5.7/en/set.html). + +## Data type default values + +The DEFAULT value clause in a data type specification indicates a default value for a column. The default value must be a constant and cannot be a function or an expression.
As an exception, for TIMESTAMP and DATETIME columns, you can specify the `NOW`, `CURRENT_TIMESTAMP`, `LOCALTIME`, and `LOCALTIMESTAMP` functions as the default. + +The BLOB, TEXT, and JSON columns cannot be assigned a default value. + +If a column definition includes no explicit DEFAULT value, TiDB determines the default value as follows: + +- If the column can take NULL as a value, the column is defined with an explicit DEFAULT NULL clause. +- If the column cannot take NULL as the value, TiDB defines the column with no explicit DEFAULT clause. + +For data entry into a NOT NULL column that has no explicit DEFAULT clause, if an INSERT or REPLACE statement includes no value for the column, TiDB handles the column according to the SQL mode in effect at the time: + +- If strict SQL mode is enabled, an error occurs for transactional tables, and the statement is rolled back. For nontransactional tables, an error occurs. +- If strict mode is not enabled, TiDB sets the column to the implicit default value for the column data type. + +Implicit defaults are defined as follows: + +- For numeric types, the default is 0. If declared with the AUTO_INCREMENT attribute, the default is the next value in the sequence. +- For date and time types other than TIMESTAMP, the default is the appropriate “zero” value for the type. For TIMESTAMP, the default value is the current date and time. +- For string types other than ENUM, the default value is the empty string. For ENUM, the default is the first enumeration value. \ No newline at end of file diff --git a/v2.0/sql/date-and-time-functions.md b/v2.0/sql/date-and-time-functions.md new file mode 100755 index 0000000000000..5e3685dc43eb7 --- /dev/null +++ b/v2.0/sql/date-and-time-functions.md @@ -0,0 +1,76 @@ +--- +title: Date and Time Functions +summary: Learn how to use the date and time functions. +category: user guide +--- + +# Date and Time Functions + +The usage of date and time functions is similar to MySQL.
For more information, see [here](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-types.html). + +**Date/Time functions** + +| Name | Description | +| ---------------------------------------- | ---------------------------------------- | +| [`ADDDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_adddate) | Add time values (intervals) to a date value | +| [`ADDTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_addtime) | Add time | +| [`CONVERT_TZ()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz) | Convert from one time zone to another | +| [`CURDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_curdate) | Return the current date | +| [`CURRENT_DATE()`, `CURRENT_DATE`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_current-date) | Synonyms for CURDATE() | +| [`CURRENT_TIME()`, `CURRENT_TIME`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_current-time) | Synonyms for CURTIME() | +| [`CURRENT_TIMESTAMP()`, `CURRENT_TIMESTAMP`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_current-timestamp) | Synonyms for NOW() | +| [`CURTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_curtime) | Return the current time | +| [`DATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date) | Extract the date part of a date or datetime expression | +| [`DATE_ADD()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-add) | Add time values (intervals) to a date value | +| [`DATE_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-format) | Format date as specified | +| [`DATE_SUB()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-sub) | Subtract a time value 
(interval) from a date | +| [`DATEDIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_datediff) | Subtract two dates | +| [`DAY()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_day) | Synonym for DAYOFMONTH() | +| [`DAYNAME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayname) | Return the name of the weekday | +| [`DAYOFMONTH()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayofmonth) | Return the day of the month (0-31) | +| [`DAYOFWEEK()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayofweek) | Return the weekday index of the argument | +| [`DAYOFYEAR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_dayofyear) | Return the day of the year (1-366) | +| [`EXTRACT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_extract) | Extract part of a date | +| [`FROM_DAYS()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_from-days) | Convert a day number to a date | +| [`FROM_UNIXTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_from-unixtime) | Format Unix timestamp as a date | +| [`GET_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_get-format) | Return a date format string | +| [`HOUR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_hour) | Extract the hour | +| [`LAST_DAY`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_last-day) | Return the last day of the month for the argument | +| [`LOCALTIME()`, `LOCALTIME`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_localtime) | Synonym for NOW() | +| [`LOCALTIMESTAMP`, `LOCALTIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_localtimestamp) | 
Synonym for NOW() | +| [`MAKEDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_makedate) | Create a date from the year and day of year | +| [`MAKETIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_maketime) | Create time from hour, minute, second | +| [`MICROSECOND()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_microsecond) | Return the microseconds from argument | +| [`MINUTE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_minute) | Return the minute from the argument | +| [`MONTH()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_month) | Return the month from the date passed | +| [`MONTHNAME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_monthname) | Return the name of the month | +| [`NOW()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_now) | Return the current date and time | +| [`PERIOD_ADD()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_period-add) | Add a period to a year-month | +| [`PERIOD_DIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_period-diff) | Return the number of months between periods | +| [`QUARTER()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_quarter) | Return the quarter from a date argument | +| [`SEC_TO_TIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_sec-to-time) | Converts seconds to 'HH:MM:SS' format | +| [`SECOND()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_second) | Return the second (0-59) | +| [`STR_TO_DATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date) | Convert a string to a date | +| 
[`SUBDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_subdate) | Synonym for DATE_SUB() when invoked with three arguments | +| [`SUBTIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_subtime) | Subtract times | +| [`SYSDATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_sysdate) | Return the time at which the function executes | +| [`TIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_time) | Extract the time portion of the expression passed | +| [`TIME_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_time-format) | Format as time | +| [`TIME_TO_SEC()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_time-to-sec) | Return the argument converted to seconds | +| [`TIMEDIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timediff) | Subtract time | +| [`TIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timestamp) | With a single argument, this function returns the date or datetime expression; with two arguments, the sum of the arguments | +| [`TIMESTAMPADD()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timestampadd) | Add an interval to a datetime expression | +| [`TIMESTAMPDIFF()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_timestampdiff) | Subtract an interval from a datetime expression | +| [`TO_DAYS()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_to-days) | Return the date argument converted to days | +| [`TO_SECONDS()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_to-seconds) | Return the date or datetime argument converted to seconds since Year 0 | +| 
[`UNIX_TIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_unix-timestamp) | Return a Unix timestamp | +| [`UTC_DATE()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_utc-date) | Return the current UTC date | +| [`UTC_TIME()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_utc-time) | Return the current UTC time | +| [`UTC_TIMESTAMP()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_utc-timestamp) | Return the current UTC date and time | +| [`WEEK()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_week) | Return the week number | +| [`WEEKDAY()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_weekday) | Return the weekday index | +| [`WEEKOFYEAR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_weekofyear) | Return the calendar week of the date (1-53) | +| [`YEAR()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_year) | Return the year | +| [`YEARWEEK()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_yearweek) | Return the year and week | + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html). diff --git a/v2.0/sql/ddl.md b/v2.0/sql/ddl.md new file mode 100755 index 0000000000000..951b59a6757df --- /dev/null +++ b/v2.0/sql/ddl.md @@ -0,0 +1,348 @@ +--- +title: Data Definition Statements +summary: Learn how to use DDL (Data Definition Language) in TiDB. +category: user guide +--- + +# Data Definition Statements + +DDL (Data Definition Language) is used to define the database structure or schema, and to manage the database and statements of various objects in the database. + +## CREATE DATABASE syntax + +```sql +CREATE {DATABASE | SCHEMA} [IF NOT EXISTS] db_name + [create_specification] ... 
+ +create_specification: + [DEFAULT] CHARACTER SET [=] charset_name + | [DEFAULT] COLLATE [=] collation_name +``` + +The `CREATE DATABASE` statement is used to create a database, and to specify the default properties of the database, such as the default character set and collation. `CREATE SCHEMA` is a synonym for `CREATE DATABASE`. + +If the database already exists and you do not specify `IF NOT EXISTS`, an error is reported. + +The `create_specification` option is used to specify the `CHARACTER SET` and `COLLATE` of the database. Currently, the option is only supported in syntax. + +## DROP DATABASE syntax + +```sql +DROP {DATABASE | SCHEMA} [IF EXISTS] db_name +``` + +The `DROP DATABASE` statement is used to delete the specified database and its tables. + +The `IF EXISTS` clause prevents an error if the database does not exist. + +## CREATE TABLE syntax + +```sql +CREATE TABLE [IF NOT EXISTS] tbl_name + (create_definition,...) + [table_options] + +CREATE TABLE [IF NOT EXISTS] tbl_name + { LIKE old_tbl_name | (LIKE old_tbl_name) } + +create_definition: + col_name column_definition + | [CONSTRAINT [symbol]] PRIMARY KEY [index_type] (index_col_name,...) + [index_option] ... + | {INDEX|KEY} [index_name] [index_type] (index_col_name,...) + [index_option] ... + | [CONSTRAINT [symbol]] UNIQUE [INDEX|KEY] + [index_name] [index_type] (index_col_name,...) + [index_option] ... + | {FULLTEXT} [INDEX|KEY] [index_name] (index_col_name,...) + [index_option] ... + | [CONSTRAINT [symbol]] FOREIGN KEY + [index_name] (index_col_name,...)
reference_definition + +column_definition: + data_type [NOT NULL | NULL] [DEFAULT default_value] + [AUTO_INCREMENT] [UNIQUE [KEY] | [PRIMARY] KEY] + [COMMENT 'string'] + [reference_definition] + | data_type [GENERATED ALWAYS] AS (expression) + [VIRTUAL | STORED] [UNIQUE [KEY]] [COMMENT comment] + [NOT NULL | NULL] [[PRIMARY] KEY] + +data_type: + BIT[(length)] + | TINYINT[(length)] [UNSIGNED] [ZEROFILL] + | SMALLINT[(length)] [UNSIGNED] [ZEROFILL] + | MEDIUMINT[(length)] [UNSIGNED] [ZEROFILL] + | INT[(length)] [UNSIGNED] [ZEROFILL] + | INTEGER[(length)] [UNSIGNED] [ZEROFILL] + | BIGINT[(length)] [UNSIGNED] [ZEROFILL] + | REAL[(length,decimals)] [UNSIGNED] [ZEROFILL] + | DOUBLE[(length,decimals)] [UNSIGNED] [ZEROFILL] + | FLOAT[(length,decimals)] [UNSIGNED] [ZEROFILL] + | DECIMAL[(length[,decimals])] [UNSIGNED] [ZEROFILL] + | NUMERIC[(length[,decimals])] [UNSIGNED] [ZEROFILL] + | DATE + | TIME[(fsp)] + | TIMESTAMP[(fsp)] + | DATETIME[(fsp)] + | YEAR + | CHAR[(length)] [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | VARCHAR(length) [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | BINARY[(length)] + | VARBINARY(length) + | TINYBLOB + | BLOB + | MEDIUMBLOB + | LONGBLOB + | TINYTEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | TEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | MEDIUMTEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | LONGTEXT [BINARY] + [CHARACTER SET charset_name] [COLLATE collation_name] + | ENUM(value1,value2,value3,...) + [CHARACTER SET charset_name] [COLLATE collation_name] + | SET(value1,value2,value3,...) + [CHARACTER SET charset_name] [COLLATE collation_name] + | JSON + +index_col_name: + col_name [(length)] [ASC | DESC] + +index_type: + USING {BTREE | HASH} + +index_option: + KEY_BLOCK_SIZE [=] value + | index_type + | COMMENT 'string' + +reference_definition: + REFERENCES tbl_name (index_col_name,...) 
+ [MATCH FULL | MATCH PARTIAL | MATCH SIMPLE] + [ON DELETE reference_option] + [ON UPDATE reference_option] + +reference_option: + RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT + +table_options: + table_option [[,] table_option] ... + +table_option: + AUTO_INCREMENT [=] value + | AVG_ROW_LENGTH [=] value + | [DEFAULT] CHARACTER SET [=] charset_name + | CHECKSUM [=] {0 | 1} + | [DEFAULT] COLLATE [=] collation_name + | COMMENT [=] 'string' + | COMPRESSION [=] {'ZLIB'|'LZ4'|'NONE'} + | CONNECTION [=] 'connect_string' + | DELAY_KEY_WRITE [=] {0 | 1} + | ENGINE [=] engine_name + | KEY_BLOCK_SIZE [=] value + | MAX_ROWS [=] value + | MIN_ROWS [=] value + | ROW_FORMAT [=] {DEFAULT|DYNAMIC|FIXED|COMPRESSED|REDUNDANT|COMPACT} + | STATS_PERSISTENT [=] {DEFAULT|0|1} +``` + +The `CREATE TABLE` statement is used to create a table. Currently, it does not support temporary tables, `CHECK` constraints, or importing data from other tables while creating tables. It supports some of the `Partition_options` in syntax. + +- If the table already exists and you specify `IF NOT EXISTS`, no error is reported. Otherwise, an error is reported. +- Use `LIKE` to create an empty table based on the definition of another table, including its column and index properties. +- The `FULLTEXT` and `FOREIGN KEY` in `create_definition` are currently only supported in syntax. +- For the `data_type`, see [Data Types](datatype.md). +- The `[ASC | DESC]` in `index_col_name` is currently only supported in syntax. +- The `index_type` is currently only supported in syntax. +- The `KEY_BLOCK_SIZE` in `index_option` is currently only supported in syntax. +- The `table_option` currently only supports `AUTO_INCREMENT`, `CHARACTER SET`, and `COMMENT`, while the others are only supported in syntax. The clauses are separated by a comma `,`.
See the following table for details: + + | Parameters | Description | Example | + | ---------- | ---------- | ------- | + | `AUTO_INCREMENT` | The initial value of the increment field | `AUTO_INCREMENT` = 5 | + | `CHARACTER SET` | The character set of the table; currently only utf8mb4 is supported | `CHARACTER SET` = 'utf8mb4' | + | `COMMENT` | The comment information | `COMMENT` = 'comment info' | + +### AUTO_INCREMENT description + +The TiDB auto-increment ID (`AUTO_INCREMENT` ID) only guarantees incrementing values and uniqueness; it does not guarantee continuous allocation. Currently, TiDB adopts bulk allocation. If you insert data into multiple TiDB servers at the same time, the allocated auto-increment IDs are not continuous. + +You can specify the `AUTO_INCREMENT` for integer fields. A table only supports one field with the `AUTO_INCREMENT` property. + +## DROP TABLE syntax + +```sql +DROP TABLE [IF EXISTS] + tbl_name [, tbl_name] ... + [RESTRICT | CASCADE] +``` + +You can delete multiple tables at the same time. The tables are separated by a comma `,`. + +If you delete a table that does not exist and do not specify `IF EXISTS`, an error is reported. + +The RESTRICT and CASCADE keywords do nothing. They are permitted to make porting easier from other database systems. + +## TRUNCATE TABLE syntax + +```sql +TRUNCATE [TABLE] tbl_name +``` + +The `TRUNCATE TABLE` statement is used to clear all the data in the specified table but keeps the table structure. + +This operation is similar to deleting all the data of a specified table, but it is much faster and is not affected by the number of rows in the table. + +> **Note**: If you use the `TRUNCATE TABLE` statement, the value of `AUTO_INCREMENT` in the original table is reset to its starting value. + +## RENAME TABLE syntax + +```sql +RENAME TABLE + tbl_name TO new_tbl_name +``` + +The `RENAME TABLE` statement is used to rename a table.
+ +This statement is equivalent to the following `ALTER TABLE` statement: + +```sql +ALTER TABLE old_table RENAME new_table; +``` + +## ALTER TABLE syntax + +```sql +ALTER TABLE tbl_name + [alter_specification] + +alter_specification: + table_options + | ADD [COLUMN] col_name column_definition + [FIRST | AFTER col_name] + | ADD [COLUMN] (col_name column_definition,...) + | ADD {INDEX|KEY} [index_name] + [index_type] (index_col_name,...) [index_option] ... + | ADD [CONSTRAINT [symbol]] PRIMARY KEY + [index_type] (index_col_name,...) [index_option] ... + | ADD [CONSTRAINT [symbol]] + UNIQUE [INDEX|KEY] [index_name] + [index_type] (index_col_name,...) [index_option] ... + | ADD FULLTEXT [INDEX|KEY] [index_name] + (index_col_name,...) [index_option] ... + | ADD [CONSTRAINT [symbol]] + FOREIGN KEY [index_name] (index_col_name,...) + reference_definition + | ALTER [COLUMN] col_name {SET DEFAULT literal | DROP DEFAULT} + | CHANGE [COLUMN] old_col_name new_col_name column_definition + [FIRST|AFTER col_name] + | {DISABLE|ENABLE} KEYS + | DROP [COLUMN] col_name + | DROP {INDEX|KEY} index_name + | DROP PRIMARY KEY + | DROP FOREIGN KEY fk_symbol + | LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE} + | MODIFY [COLUMN] col_name column_definition + [FIRST | AFTER col_name] + | RENAME [TO|AS] new_tbl_name + | {WITHOUT|WITH} VALIDATION + +index_col_name: + col_name [(length)] [ASC | DESC] + +index_type: + USING {BTREE | HASH} + +index_option: + KEY_BLOCK_SIZE [=] value + | index_type + | COMMENT 'string' + +table_options: + table_option [[,] table_option] ... 
+ +table_option: + AVG_ROW_LENGTH [=] value + | [DEFAULT] CHARACTER SET [=] charset_name + | CHECKSUM [=] {0 | 1} + | [DEFAULT] COLLATE [=] collation_name + | COMMENT [=] 'string' + | COMPRESSION [=] {'ZLIB'|'LZ4'|'NONE'} + | CONNECTION [=] 'connect_string' + | DELAY_KEY_WRITE [=] {0 | 1} + | ENGINE [=] engine_name + | KEY_BLOCK_SIZE [=] value + | MAX_ROWS [=] value + | MIN_ROWS [=] value + | ROW_FORMAT [=] {DEFAULT|DYNAMIC|FIXED|COMPRESSED|REDUNDANT|COMPACT} + | STATS_PERSISTENT [=] {DEFAULT|0|1} +``` + +The `ALTER TABLE` statement is used to modify the structure of an existing table, such as changing table properties, adding or deleting columns, creating or deleting indexes, and modifying columns or column properties. The descriptions of several field types are as follows: + +- For `index_col_name`, `index_type`, and `index_option`, see [CREATE INDEX Syntax](#create-index-syntax). +- Currently, the `table_option` supports `AUTO_INCREMENT` and `COMMENT`, while the others are only supported in syntax. + +The support for specific operation types is as follows: + +- `ADD/DROP INDEX/COLUMN`: currently does not support creating or deleting multiple indexes or columns at the same time +- `ADD/DROP PRIMARY KEY`: currently not supported +- `DROP COLUMN`: currently does not support deleting columns that are primary key columns or index columns +- `ADD COLUMN`: currently does not support setting the newly added column as the primary key or a unique index at the same time, and does not support setting the column property to `AUTO_INCREMENT` +- `CHANGE/MODIFY COLUMN`: currently supports part of the syntax, and the details are as follows: + - When updating data types, `CHANGE/MODIFY COLUMN` only supports conversions between integer types, between string types, and between Blob types. You can only extend the length of the original type. The column properties of `unsigned`/`charset`/`collate` cannot be changed.
The specific supported types are classified as follows: + - Integer types: `TinyInt`, `SmallInt`, `MediumInt`, `Int`, `BigInt` + - String types: `Char`, `Varchar`, `Text`, `TinyText`, `MediumText`, `LongText` + - Blob types: `Blob`, `TinyBlob`, `MediumBlob`, `LongBlob` + - When updating the type definition, `CHANGE/MODIFY COLUMN` supports `default value`, `comment`, `null`, `not null` and `OnUpdate`, but does not support changing from `null` to `not null`. + - `CHANGE/MODIFY COLUMN` does not support updating columns of the `enum` type. +- `LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}`: currently supported in syntax only + +## CREATE INDEX syntax + +```sql +CREATE [UNIQUE] INDEX index_name + [index_type] + ON tbl_name (index_col_name,...) + [index_option] ... + +index_col_name: + col_name [(length)] [ASC | DESC] + +index_option: + KEY_BLOCK_SIZE [=] value + | index_type + | COMMENT 'string' + +index_type: + USING {BTREE | HASH} +``` + +The `CREATE INDEX` statement is used to create an index for an existing table. Functionally, `CREATE INDEX` corresponds to the index creation operation of `ALTER TABLE`. Similar to MySQL, `CREATE INDEX` cannot be used to create a primary key index. + +### Differences from MySQL + +- `CREATE INDEX` supports the `UNIQUE` index but does not support `FULLTEXT` or `SPATIAL` indexes. +- The `index_col_name` supports the length option with a maximum length limit of 3072 bytes. The length limit does not change depending on the storage engine or the character set used when building the table. This is because TiDB does not use storage engines like InnoDB and MyISAM, and only provides syntax compatibility with MySQL for the storage engine options when creating tables. Similarly, TiDB uses the utf8mb4 character set, and only provides syntax compatibility with MySQL for the character set options when creating tables. For more information, see [Compatibility with MySQL](mysql-compatibility.md).
+- The `index_col_name` supports the index sorting options of `ASC` and `DESC`. The behavior of sorting options is similar to MySQL, and only syntax parsing is supported. All the internal indexes are stored in ascending order. For more information, see [CREATE INDEX Syntax](https://dev.mysql.com/doc/refman/5.7/en/create-index.html). +- The `index_option` supports `KEY_BLOCK_SIZE`, `index_type` and `COMMENT`. The `COMMENT` supports a maximum of 1024 characters and does not support the `WITH PARSER` option. +- The `index_type` supports `BTREE` and `HASH` only in MySQL syntax, which means the index type is independent of the storage engine option in the creating table statement. For example, in MySQL, when you use `CREATE INDEX` on a table using InnoDB, it only supports the `BTREE` index, while TiDB supports both `BTREE` and `HASH` indexes. +- TiDB supports `algorithm_option` and `lock_option` only in MySQL syntax. +- TiDB supports at most 512 columns in a single table. The corresponding number limit in InnoDB is 1017, and the hard limit in MySQL is 4096. For more details, see [Limits on Table Column Count and Row Size](https://dev.mysql.com/doc/refman/5.7/en/column-count-limit.html). + +## DROP INDEX syntax + +```sql +DROP INDEX index_name ON tbl_name +``` + +The `DROP INDEX` statement is used to delete a table index. Currently, it does not support deleting the primary key index. + +## ADMIN statement + +You can use the `ADMIN` statement to view the information related to DDL job. For details, see [here](admin.md#admin-statement). diff --git a/v2.0/sql/dml.md b/v2.0/sql/dml.md new file mode 100755 index 0000000000000..719928da548f3 --- /dev/null +++ b/v2.0/sql/dml.md @@ -0,0 +1,271 @@ +--- +title: TiDB Data Manipulation Language +summary: Use DML (Data Manipulation Language) to select, insert, delete and update the data. 
category: user guide +--- + +# TiDB Data Manipulation Language + +Data manipulation language (DML) is a family of syntax elements used for selecting, inserting, deleting and updating data in a database. + +## SELECT + +`SELECT` is used to retrieve rows selected from one or more tables. + +### Syntax + +```sql +SELECT + [ALL | DISTINCT | DISTINCTROW ] + [HIGH_PRIORITY] + [STRAIGHT_JOIN] + [SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS] + select_expr [, select_expr ...] + [FROM table_references + [WHERE where_condition] + [GROUP BY {col_name | expr | position} + [ASC | DESC], ...] + [HAVING where_condition] + [ORDER BY {col_name | expr | position} + [ASC | DESC], ...] + [LIMIT {[offset,] row_count | row_count OFFSET offset}] + [FOR UPDATE | LOCK IN SHARE MODE]] +``` + +### Description of the syntax elements + +|Syntax Element|Description| +| --------------------- | -------------------------------------------------- | +|`ALL`, `DISTINCT`, `DISTINCTROW` | The `ALL`, `DISTINCT`/`DISTINCTROW` modifiers specify whether duplicate rows should be returned. `ALL` (the default) specifies that all matching rows should be returned.| +|`HIGH_PRIORITY` | `HIGH_PRIORITY` gives the current statement higher priority than other statements. | +|`SQL_CACHE`, `SQL_NO_CACHE`, `SQL_CALC_FOUND_ROWS` | To guarantee compatibility with MySQL, TiDB parses these three modifiers, but will ignore them.| +| `STRAIGHT_JOIN` | `STRAIGHT_JOIN` forces the optimizer to execute a Join query in the order of the tables used in the `FROM` clause. You can use this syntax to speed up query execution when the Join order chosen by the optimizer is not good. | +|`select_expr` | Each `select_expr` indicates a column to retrieve, including column names and expressions.
`*` represents all the columns.| +|`FROM table_references` | The `FROM table_references` clause indicates the table (such as `select * from t;`), tables (such as `select * from t1 join t2;`), or even zero tables (such as `select 1+1 from dual;`, which is equivalent to `select 1+1;`) from which to retrieve rows.| +|`WHERE where_condition` | The `WHERE` clause, if given, indicates the condition or conditions that rows must satisfy to be selected. The result contains only the data that meets the condition(s).| +|`GROUP BY` | The `GROUP BY` clause is used to group the result set.| +|`HAVING where_condition` | The `HAVING` clause and the `WHERE` clause are both used to filter the results. The `HAVING` clause filters the results of `GROUP BY`, while the `WHERE` clause filters the results before aggregation.| +|`ORDER BY` | The `ORDER BY` clause is used to sort the data in ascending or descending order, based on columns, expressions, or items in the `select_expr` list.| +|`LIMIT` | The `LIMIT` clause can be used to constrain the number of rows. `LIMIT` takes one or two numeric arguments. With one argument, the argument specifies the maximum number of rows to return, starting from the first row of the table by default. With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return.| +|`FOR UPDATE` | All the data in the result set is read-locked, in order to detect concurrent updates. TiDB uses the [Optimistic Transaction Model](mysql-compatibility.md#transaction), so transaction conflicts are detected in the commit phase instead of the statement execution phase.
While executing the `SELECT FOR UPDATE` statement, if another transaction tries to update the relevant data, the `SELECT FOR UPDATE` transaction will fail.| +|`LOCK IN SHARE MODE` | To guarantee compatibility, TiDB parses this modifier, but will ignore it.| + +## INSERT + +`INSERT` inserts new rows into an existing table. TiDB is compatible with all the `INSERT` syntaxes of MySQL. + +### Syntax + +```sql + Insert Statement: + INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE] + [INTO] tbl_name + insert_values + [ON DUPLICATE KEY UPDATE assignment_list] + + insert_values: + [(col_name [, col_name] ...)] + {VALUES | VALUE} (expr_list) [, (expr_list)] ... +| SET assignment_list +| [(col_name [, col_name] ...)] + SELECT ... + + expr_list: + expr [, expr] ... + + assignment: + col_name = expr + + assignment_list: + assignment [, assignment] ... +``` + +### Description of the syntax elements + +| Syntax Elements | Description | +| -------------- | --------------------------------------------------------- | +| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement. | +| `DELAYED` | To guarantee compatibility, TiDB parses this modifier, but will ignore it. | +| `HIGH_PRIORITY` | `HIGH_PRIORITY` gives the current statement higher priority than other statements. TiDB raises the priority of the current statement.| +| `IGNORE` | If the `IGNORE` modifier is specified and a duplicate key error occurs, the conflicting rows are not inserted and no error is returned. | +| `tbl_name` | `tbl_name` is the table into which the rows should be inserted. | +| `insert_values` | The `insert_values` clause specifies the values to be inserted. For more information, see [insert_values](#insert_values).
| +| `ON DUPLICATE KEY UPDATE assignment_list` | If `ON DUPLICATE KEY UPDATE` is specified and there is a conflict in a `UNIQUE` index or `PRIMARY KEY`, the new row is not inserted; instead, the existing row is updated using `assignment_list`. | + +### insert_values + +You can use the following ways to specify the data set: + +- Value List + + Place the values to be inserted in a value list. + + ```sql + CREATE TABLE tbl_name ( + a int, + b int, + c int + ); + INSERT INTO tbl_name VALUES(1,2,3),(4,5,6),(7,8,9); + ``` + + In the example above, `(1,2,3),(4,5,6),(7,8,9)` are the value lists enclosed within parentheses and separated by commas. Each value list represents a row of data; in this example, 3 rows are inserted. You can also specify a column name list to insert values into only some of the columns; in that case, each value list must contain exactly as many values as there are columns in the column name list. + + ```sql + INSERT INTO tbl_name (a,c) VALUES(1,2),(4,5),(7,8); + ``` + + In the example above, only the `a` and `c` columns are listed, so the `b` column of each row is set to `NULL`. + +- Assignment List + + Insert the values by using assignment statements, for example: + + ```sql + INSERT INTO tbl_name SET a=1, b=2, c=3; + ``` + + In this way, only one row of data can be inserted at a time, and the value of each column is given by an assignment statement. + +- Select Statement + + The data set to be inserted is obtained using a `SELECT` statement. The columns to be inserted into are derived from the schema of the `SELECT` statement. + ```sql + CREATE TABLE tbl_name1 ( + a int, + b int, + c int + ); + INSERT INTO tbl_name SELECT * from tbl_name1; + ``` + In the example above, the data is selected from `tbl_name1`, and then inserted into `tbl_name`. + +## DELETE + +`DELETE` is a DML statement that removes rows from a table. TiDB is compatible with all the `DELETE` syntaxes of MySQL except for `PARTITION`.
There are two kinds of `DELETE`, [`Single-Table DELETE`](#single-table-delete-syntax) and [`Multiple-Table DELETE`](#multiple-table-delete-syntax). + +### Single-Table DELETE syntax + +The Single-Table `DELETE` syntax deletes rows from a single table. + +### DELETE syntax + +```sql +DELETE [LOW_PRIORITY] [QUICK] [IGNORE] FROM tbl_name + [WHERE where_condition] + [ORDER BY ...] + [LIMIT row_count] +``` + +### Multiple-Table DELETE syntax + +The Multiple-Table `DELETE` syntax deletes rows from multiple tables, and has the following two formats: + +```sql +DELETE [LOW_PRIORITY] [QUICK] [IGNORE] + tbl_name[.*] [, tbl_name[.*]] ... + FROM table_references + [WHERE where_condition] + +DELETE [LOW_PRIORITY] [QUICK] [IGNORE] + FROM tbl_name[.*] [, tbl_name[.*]] ... + USING table_references + [WHERE where_condition] +``` + +Both formats can be used to delete data from multiple tables, or to delete the selected results from multiple tables, but they differ: the first format deletes data from every table in the table list before `FROM`, while the second format deletes data from the tables in the table list between `FROM` and `USING`. + +### Description of the syntax elements + +| Syntax Elements | Description| +| -------------- | --------------------------------------------------------- | +| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement. | +| `QUICK` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.
| +| `IGNORE` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.| +| `tbl_name` | The names of the tables from which rows are to be deleted| +| `WHERE where_condition` | The `WHERE` condition; rows that meet the condition are deleted | +| `ORDER BY` | Sorts the rows to be deleted| +| `LIMIT row_count` | Limits the number of rows to be deleted to `row_count` | + +## UPDATE + +`UPDATE` is used to update data in tables. + +### Syntax + +There are two kinds of `UPDATE` syntax, [Single-table UPDATE](#single-table-update) and [Multi-Table UPDATE](#multi-table-update). + +### Single-table UPDATE + +```sql +UPDATE [LOW_PRIORITY] [IGNORE] table_reference + SET assignment_list + [WHERE where_condition] + [ORDER BY ...] + [LIMIT row_count] + +assignment: + col_name = value + +assignment_list: + assignment [, assignment] ... +``` + +For the single-table syntax, the `UPDATE` statement updates columns of existing rows in the named table with new values. The `SET assignment_list` clause indicates which columns to modify and the values they should be given. The `WHERE`, `ORDER BY`, and `LIMIT` clauses, if given, identify which rows to update, in what order, and how many. + +### Multi-table UPDATE + +```sql +UPDATE [LOW_PRIORITY] [IGNORE] table_references + SET assignment_list + [WHERE where_condition] +``` + +For the multiple-table syntax, `UPDATE` updates rows in each table named in `table_references` that satisfy the conditions. + +### Description of the syntax elements + +| Syntax Elements | Description | +| -------------- | --------------------------------------------------------- | +| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement.
| +| `IGNORE` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.| +| `table_reference` | The name of the table to be updated | +| `table_references` | The names of the tables to be updated | +| `SET assignment_list` | The column names and values to be updated | +| `WHERE where_condition` | The `WHERE` clause, if given, specifies the conditions that identify which rows to update. | +| `ORDER BY` | The rows are updated in the order that is specified | +| `LIMIT row_count` | The `LIMIT` clause places a limit on the number of rows that can be updated. | + +## REPLACE + +`REPLACE` is a MySQL extension to the SQL standard. `REPLACE` works exactly like `INSERT`, except that if an old row in the table has the same value as a new row for a `PRIMARY KEY` or a `UNIQUE` index, the old row is deleted before the new row is inserted. + +### Syntax + +```sql +REPLACE [LOW_PRIORITY | DELAYED] + [INTO] tbl_name + [(col_name [, col_name] ...)] + {VALUES | VALUE} (value_list) [, (value_list)] ... + +REPLACE [LOW_PRIORITY | DELAYED] + [INTO] tbl_name + SET assignment_list + +REPLACE [LOW_PRIORITY | DELAYED] + [INTO] tbl_name + [(col_name [, col_name] ...)] + SELECT ... +``` + +### Description of the syntax elements + +|Syntax Element|Description| +| -------------- | --------------------------------------------------------- | +| `LOW_PRIORITY` | `LOW_PRIORITY` gives the statement lower priority. TiDB lowers the priority of the current statement. | +| `DELAYED` | To guarantee compatibility with MySQL, TiDB parses this modifier, but will ignore it.| +| `tbl_name` | `tbl_name` is the table into which the rows should be inserted.
| +| `value_list` | The data to be inserted | +| `SET assignment_list` | The column names and values to be inserted | +| `SELECT ...` | The result set selected by the `SELECT` statement, to be inserted | diff --git a/v2.0/sql/encrypted-connections.md b/v2.0/sql/encrypted-connections.md new file mode 100755 index 0000000000000..7894bf64156fc --- /dev/null +++ b/v2.0/sql/encrypted-connections.md @@ -0,0 +1,156 @@ +--- +title: Use Encrypted Connections +summary: Use the encrypted connection to ensure data security. +category: user guide +--- + +# Use Encrypted Connections + +It is recommended to use encrypted connections to ensure data security, because a non-encrypted connection might lead to information leakage. + +The TiDB server supports encrypted connections based on TLS (Transport Layer Security). The protocol is consistent with MySQL encrypted connections and is directly supported by existing MySQL clients such as MySQL operation tools and MySQL drivers. TLS is sometimes referred to as SSL (Secure Sockets Layer). Because the SSL protocol has [known security vulnerabilities](https://en.wikipedia.org/wiki/Transport_Layer_Security), TiDB does not support it. TiDB supports the following versions: TLS 1.0, TLS 1.1, and TLS 1.2. + +When using an encrypted connection, the connection has the following security properties: + +- Confidentiality: the traffic plaintext cannot be eavesdropped on +- Integrity: the traffic plaintext cannot be tampered with +- Authentication: (optional) the client and the server can verify the identity of both parties to avoid man-in-the-middle attacks + +The encrypted connections in TiDB are disabled by default. To use encrypted connections in the client, you must first configure the TiDB server and enable encrypted connections. In addition, similar to MySQL, encrypted connections in TiDB are optional and enabled on a per-connection basis.
For a TiDB server with encrypted connections enabled, you can choose to securely connect to the TiDB server through an encrypted connection, or to use a regular unencrypted connection. Most MySQL clients do not use encrypted connections by default, so the client usually needs to be explicitly configured to use an encrypted connection. + +In short, to use encrypted connections, both of the following conditions must be met: + +1. Enable encrypted connections in the TiDB server. +2. The client specifies the use of an encrypted connection. + +## Configure TiDB to use encrypted connections + +See the following descriptions of the related parameters to enable encrypted connections: + +- [`ssl-cert`](server-command-option.md#ssl-cert): specifies the file path of the SSL certificate +- [`ssl-key`](server-command-option.md#ssl-key): specifies the private key that matches the certificate +- [`ssl-ca`](server-command-option.md#ssl-ca): (optional) specifies the file path of the trusted CA certificate + +To enable encrypted connections in the TiDB server, you must specify both the `ssl-cert` and `ssl-key` parameters in the configuration file when you start the TiDB server. You can also specify the `ssl-ca` parameter for client authentication (see [Enable authentication](#enable-authentication)). + +All the files specified by the parameters are in PEM (Privacy Enhanced Mail) format. Currently, TiDB does not support the import of a password-protected private key, so it is required to provide a private key file without a password. If the certificate or private key is invalid, the TiDB server starts as usual, but the client cannot connect to the TiDB server through an encrypted connection.
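If OpenSSL is available, a self-signed certificate and key pair for testing can be generated as follows. This is only a sketch: the file names, key size, validity period, and subject names are placeholders, and a production setup would use a properly managed CA.

```shell
# Create a CA key and a self-signed CA certificate (placeholder subject).
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -key ca-key.pem -out ca.pem -days 365 -subj "/CN=Example-TiDB-CA"

# Create the server key and a certificate signing request, then sign it with the CA.
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -out server.csr -subj "/CN=tidb-server"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out server-cert.pem -days 365
```

The resulting `server-cert.pem` and `server-key.pem` can then be used as the `ssl-cert` and `ssl-key` parameters, and `ca.pem` as the optional `ssl-ca`.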
+ +You can sign and generate the certificate and key using OpenSSL, or quickly generate them using the `mysql_ssl_rsa_setup` tool in MySQL: + +```bash +mysql_ssl_rsa_setup --datadir=./certs +``` + +This command generates the following files in the `certs` directory: + +``` +certs +├── ca-key.pem +├── ca.pem +├── client-cert.pem +├── client-key.pem +├── private_key.pem +├── public_key.pem +├── server-cert.pem +└── server-key.pem +``` + +The corresponding TiDB configuration file parameters are: + +```toml +[security] +ssl-cert = "certs/server-cert.pem" +ssl-key = "certs/server-key.pem" +``` + +If the certificate parameters are correct, TiDB outputs `Secure connection is enabled` when started; otherwise, it outputs `Secure connection is NOT ENABLED`. + +## Configure the MySQL client to use encrypted connections + +Clients of MySQL 5.7 or later attempt to establish an encrypted connection by default. If the server does not support encrypted connections, the client automatically falls back to an unencrypted connection. Clients of MySQL earlier than version 5.7 use unencrypted connections by default. + +You can change the connection behavior of the client using the following `--ssl-mode` options: + +- `--ssl-mode=REQUIRED`: The client requires an encrypted connection. The connection cannot be established if the server side does not support encrypted connections. +- In the absence of the `--ssl-mode` option: The client attempts to use an encrypted connection, but falls back to an unencrypted connection if the server side does not support encrypted connections. +- `--ssl-mode=DISABLED`: The client uses an unencrypted connection. + +For more information, see [Client-Side Configuration for Encrypted Connections](https://dev.mysql.com/doc/refman/5.7/en/using-encrypted-connections.html#using-encrypted-connections-client-side-configuration) in MySQL.
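For example, with the command-line client you can require encryption explicitly when connecting. The host, port, and user below are placeholders; adjust them to your own deployment:

```bash
mysql -h 127.0.0.1 -P 4000 -u root --ssl-mode=REQUIRED
```

If the TiDB server has not enabled encrypted connections, this invocation fails instead of silently falling back to plaintext.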
+ +## Enable authentication + +If the `ssl-ca` parameter is not specified in the TiDB server or MySQL client, the client or the server does not perform authentication by default and cannot prevent man-in-the-middle attacks. For example, the client might "securely" connect to a disguised server. You can configure the `ssl-ca` parameter for authentication in the server and client. Generally, you only need to authenticate the server, but you can also authenticate the client to further enhance the security. + ++ To authenticate the TiDB server from the MySQL client: + 1. Specify the `ssl-cert` and `ssl-key` parameters in the TiDB server. + 2. Specify the `--ssl-ca` parameter in the MySQL client. + 3. Set `--ssl-mode` to `VERIFY_IDENTITY` in the MySQL client. + 4. Make sure that the certificate (`ssl-cert`) configured by the TiDB server is signed by the CA specified by the client `--ssl-ca` parameter, otherwise the authentication fails. + ++ To authenticate the MySQL client from the TiDB server: + 1. Specify the `ssl-cert`, `ssl-key`, and `ssl-ca` parameters in the TiDB server. + 2. Specify the `--ssl-cert` and `--ssl-key` parameters in the client. + 3. Make sure the server-configured certificate and the client-configured certificate are both signed by the `ssl-ca` specified by the server. + ++ To perform mutual authentication, meet both of the above requirements. + +> **Note**: Currently, it is optional for the TiDB server to authenticate the client. If the client does not present its identity certificate in the TLS handshake, the TLS connection can still be successfully established. + +## Check whether the current connection uses encryption + +Use the `SHOW STATUS LIKE "%Ssl%";` statement to get the details of the current connection, including whether encryption is used, the encryption protocol used by encrypted connections, the TLS version number, and so on. + +See the following example of the result in an encrypted connection.
The results change according to different TLS versions or encryption protocols supported by the client. + +``` +mysql> SHOW STATUS LIKE "%Ssl%"; +...... +| Ssl_verify_mode | 5 | +| Ssl_version | TLSv1.2 | +| Ssl_cipher | ECDHE-RSA-AES128-GCM-SHA256 | +...... +``` + +For the official MySQL client, you can also use the `STATUS` or `\s` statement to view the connection status: + +``` +mysql> \s +... +SSL: Cipher in use is ECDHE-RSA-AES128-GCM-SHA256 +... +``` + +## Supported TLS versions, key exchange protocols, and encryption algorithms + +The TLS versions, key exchange protocols and encryption algorithms supported by TiDB are determined by the official Golang libraries. + +### Supported TLS versions + +- TLS 1.0 +- TLS 1.1 +- TLS 1.2 + +### Supported key exchange protocols and encryption algorithms + +- TLS\_RSA\_WITH\_RC4\_128\_SHA +- TLS\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA +- TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA +- TLS\_RSA\_WITH\_AES\_256\_CBC\_SHA +- TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA256 +- TLS\_RSA\_WITH\_AES\_128\_GCM\_SHA256 +- TLS\_RSA\_WITH\_AES\_256\_GCM\_SHA384 +- TLS\_ECDHE\_ECDSA\_WITH\_RC4\_128\_SHA +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_128\_CBC\_SHA +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_256\_CBC\_SHA +- TLS\_ECDHE\_RSA\_WITH\_RC4\_128\_SHA +- TLS\_ECDHE\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA +- TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA +- TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_128\_CBC\_SHA256 +- TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA256 +- TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_128\_GCM\_SHA256 +- TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 +- TLS\_ECDHE\_ECDSA\_WITH\_AES\_256\_GCM\_SHA384 +- TLS\_ECDHE\_RSA\_WITH\_CHACHA20\_POLY1305 +- TLS\_ECDHE\_ECDSA\_WITH\_CHACHA20\_POLY1305 diff --git a/v2.0/sql/encryption-and-compression-functions.md b/v2.0/sql/encryption-and-compression-functions.md new file mode 100755 index 0000000000000..c5d414a182d3c --- /dev/null +++ 
b/v2.0/sql/encryption-and-compression-functions.md @@ -0,0 +1,29 @@ +--- +title: Encryption and Compression Functions +summary: Learn about the encryption and compression functions. +category: user guide +--- + +# Encryption and Compression Functions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------| +| [`MD5()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_md5) | Calculate MD5 checksum | +| [`PASSWORD()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_password) (deprecated 5.7.6) | Calculate and return a password string | +| [`RANDOM_BYTES()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_random-bytes) | Return a random byte vector | +| [`SHA1(), SHA()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_sha1) | Calculate an SHA-1 160-bit checksum | +| [`SHA2()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_sha2) | Calculate an SHA-2 checksum | +| [`AES_DECRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_aes-decrypt) | Decrypt using AES | +| [`AES_ENCRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_aes-encrypt) | Encrypt using AES | +| [`COMPRESS()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_compress) | Return result as a binary string | +| [`UNCOMPRESS()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_uncompress) | Uncompress a string compressed | +| [`UNCOMPRESSED_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_uncompressed-length) | Return the length of a string before compression | +| 
[`CREATE_ASYMMETRIC_PRIV_KEY()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-asymmetric-priv-key) | Create private key | +| [`CREATE_ASYMMETRIC_PUB_KEY()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-asymmetric-pub-key) | Create public key | +| [`CREATE_DH_PARAMETERS()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-dh-parameters) | Generate shared DH secret | +| [`CREATE_DIGEST()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_create-digest) | Generate digest from string | +| [`ASYMMETRIC_DECRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-decrypt) | Decrypt ciphertext using private or public key | +| [`ASYMMETRIC_DERIVE()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-derive) | Derive symmetric key from asymmetric keys | +| [`ASYMMETRIC_ENCRYPT()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-encrypt) | Encrypt cleartext using private or public key | +| [`ASYMMETRIC_SIGN()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-sign) | Generate signature from digest | +| [`ASYMMETRIC_VERIFY()`](https://dev.mysql.com/doc/refman/5.7/en/enterprise-encryption-functions.html#function_asymmetric-verify) | Verify that signature matches digest | diff --git a/v2.0/sql/error.md b/v2.0/sql/error.md new file mode 100755 index 0000000000000..4fe4c56a53548 --- /dev/null +++ b/v2.0/sql/error.md @@ -0,0 +1,30 @@ +--- +title: Error Codes and Troubleshooting +summary: Learn about the error codes and solutions in TiDB. +category: user guide +--- + +# Error Codes and Troubleshooting + +This document describes the problems encountered during the use of TiDB and provides the solutions. 
+ +## Error codes + +TiDB is compatible with the error codes in MySQL, and in most cases returns the same error code as MySQL. In addition, TiDB has the following unique error codes: + +| Error code | Description | Solution | +| ---- | ------- | --------- | +| 8001 | The memory used by the request exceeds the threshold limit for the TiDB memory usage. | Increase the value of the system variable with the `tidb_mem_quota` prefix. | +| 8002 | To guarantee consistency, a transaction with the `SELECT FOR UPDATE` statement cannot be retried when it encounters a commit conflict. TiDB rolls back the transaction and returns this error. | Retry the failed transaction. | +| 8003 | If the data in a row is not consistent with the index when executing the `ADMIN CHECK TABLE` command, TiDB returns this error. | Check whether the row data is consistent with its index. | +| 9001 | The PD request timed out. | Check the state/monitor/log of the PD server and the network between the TiDB server and the PD server. | +| 9002 | The TiKV request timed out. | Check the state/monitor/log of the TiKV server and the network between the TiDB server and the TiKV server. | +| 9003 | The TiKV server is busy and this usually occurs when the workload is too high. | Check the state/monitor/log of the TiKV server. | +| 9004 | This error occurs when a large number of transactional conflicts exist in the database. | Check the application code. | +| 9005 | A certain Raft Group is not available, for example, when the number of replicas is not sufficient. This error usually occurs when the TiKV server is busy or the TiKV node is down. | Check the state/monitor/log of the TiKV server. | +| 9006 | The interval of GC Life Time is too short and the data that should be read by the long transactions might be cleared. | Extend the interval of GC Life Time. | +| 9500 | A single transaction is too large. | See [here](../FAQ.md#the-error-message-transaction-too-large-is-displayed) for the solution.
| + +## Troubleshooting + +See the [troubleshooting](../trouble-shooting.md) and [FAQ](../FAQ.md) documents. diff --git a/v2.0/sql/expression-syntax.md b/v2.0/sql/expression-syntax.md new file mode 100755 index 0000000000000..4d58c2d49f096 --- /dev/null +++ b/v2.0/sql/expression-syntax.md @@ -0,0 +1,68 @@ +--- +title: Expression Syntax +summary: Learn about the expression syntax in TiDB. +category: user guide +--- + +# Expression Syntax + +The following rules define the expression syntax in TiDB. You can find the definition in `parser/parser.y`. The syntax parsing in TiDB is based on Yacc. + +``` +Expression: + singleAtIdentifier assignmentEq Expression + | Expression logOr Expression + | Expression "XOR" Expression + | Expression logAnd Expression + | "NOT" Expression + | Factor IsOrNotOp trueKwd + | Factor IsOrNotOp falseKwd + | Factor IsOrNotOp "UNKNOWN" + | Factor + +Factor: + Factor IsOrNotOp "NULL" + | Factor CompareOp PredicateExpr + | Factor CompareOp singleAtIdentifier assignmentEq PredicateExpr + | Factor CompareOp AnyOrAll SubSelect + | PredicateExpr + +PredicateExpr: + PrimaryFactor InOrNotOp '(' ExpressionList ')' + | PrimaryFactor InOrNotOp SubSelect + | PrimaryFactor BetweenOrNotOp PrimaryFactor "AND" PredicateExpr + | PrimaryFactor LikeOrNotOp PrimaryExpression LikeEscapeOpt + | PrimaryFactor RegexpOrNotOp PrimaryExpression + | PrimaryFactor + +PrimaryFactor: + PrimaryFactor '|' PrimaryFactor + | PrimaryFactor '&' PrimaryFactor + | PrimaryFactor "<<" PrimaryFactor + | PrimaryFactor ">>" PrimaryFactor + | PrimaryFactor '+' PrimaryFactor + | PrimaryFactor '-' PrimaryFactor + | PrimaryFactor '*' PrimaryFactor + | PrimaryFactor '/' PrimaryFactor + | PrimaryFactor '%' PrimaryFactor + | PrimaryFactor "DIV" PrimaryFactor + | PrimaryFactor "MOD" PrimaryFactor + | PrimaryFactor '^' PrimaryFactor + | PrimaryExpression + +PrimaryExpression: + Operand + | FunctionCallKeyword + | FunctionCallNonKeyword + | FunctionCallAgg + | FunctionCallGeneric + | Identifier 
jss stringLit + | Identifier juss stringLit + | SubSelect + | '!' PrimaryExpression + | '~' PrimaryExpression + | '-' PrimaryExpression + | '+' PrimaryExpression + | "BINARY" PrimaryExpression + | PrimaryExpression "COLLATE" StringName +``` diff --git a/v2.0/sql/functions-and-operators-reference.md b/v2.0/sql/functions-and-operators-reference.md new file mode 100755 index 0000000000000..c5cc0f6910e5e --- /dev/null +++ b/v2.0/sql/functions-and-operators-reference.md @@ -0,0 +1,13 @@ +--- +title: Function and Operator Reference +summary: Learn how to use the functions and operators. +category: user guide +--- + +# Function and Operator Reference + +The usage of the functions and operators in TiDB is similar to MySQL. See [Functions and Operators in MySQL](https://dev.mysql.com/doc/refman/5.7/en/functions.html). + +In SQL statements, expressions can be used on the `ORDER BY` and `HAVING` clauses of the `SELECT` statement, the `WHERE` clause of `SELECT`/`DELETE`/`UPDATE` statements, and `SET` statements. + +You can write expressions using literals, column names, NULL, built-in functions, operators and so on. diff --git a/v2.0/sql/information-functions.md b/v2.0/sql/information-functions.md new file mode 100755 index 0000000000000..5aff30cbba0e6 --- /dev/null +++ b/v2.0/sql/information-functions.md @@ -0,0 +1,25 @@ +--- +title: Information Functions +summary: Learn about the information functions. +category: user guide +--- + +# Information Functions + +In TiDB, the usage of information functions is similar to MySQL. For more information, see [Information Functions](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html). 
+ +## Information function descriptions + +| Name | Description | +|:-----|:------------| +| [`CONNECTION_ID()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_connection-id) | Return the connection ID (thread ID) for the connection | +| [`CURRENT_USER()`, `CURRENT_USER`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_current-user) | Return the authenticated user name and host name | +| [`DATABASE()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_database) | Return the default (current) database name | +| [`FOUND_ROWS()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_found-rows) | For a `SELECT` with a `LIMIT` clause, the number of rows that would be returned if there were no `LIMIT` clause | +| [`LAST_INSERT_ID()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_last-insert-id) | Return the value of the `AUTO_INCREMENT` column for the last `INSERT` | +| [`SCHEMA()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_schema) | Synonym for `DATABASE()` | +| [`SESSION_USER()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_session-user) | Synonym for `USER()` | +| [`SYSTEM_USER()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_system-user) | Synonym for `USER()` | +| [`USER()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_user) | Return the user name and host name provided by the client | +| [`VERSION()`](https://dev.mysql.com/doc/refman/5.7/en/information-functions.html#function_version) | Return a string that indicates the MySQL server version | +| `TIDB_VERSION` | Return a string that indicates the TiDB server version | diff --git a/v2.0/sql/json-functions-generated-column.md b/v2.0/sql/json-functions-generated-column.md new file mode 100755 index 0000000000000..f31c0bcd343ec --- /dev/null +++ 
b/v2.0/sql/json-functions-generated-column.md @@ -0,0 +1,118 @@ +--- +title: JSON Functions and Generated Column +summary: Learn how to use JSON functions and generated column to handle scenarios with uncertain schema. +category: user guide +--- + +# JSON Functions and Generated Column + +## About + +To be compatible with MySQL 5.7 or later and better support the document store, TiDB supports JSON in the latest version. In TiDB, a document is a set of Key-Value pairs, encoded as a JSON object. You can use the JSON datatype in a TiDB table and create indexes for the JSON document fields using generated columns. In this way, you can flexibly deal with the business scenarios with uncertain schema and are no longer limited by the read performance and the lack of support for transactions in traditional document databases. + +## JSON functions + +The support for JSON in TiDB mainly refers to the user interface of MySQL 5.7. For example, you can create a table that includes a JSON field to store complex information: + +```sql +CREATE TABLE person ( + id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, + name VARCHAR(255) NOT NULL, + address_info JSON +); +``` + +When you insert data into a table, you can deal with those data with uncertain schema like this: + +```sql +INSERT INTO person (name, address_info) VALUES ("John", '{"city": "Beijing"}'); +``` + +You can insert JSON data into the table by inserting a legal JSON string into the column corresponding to the JSON field. TiDB will then parse the text and save it in a more compact and easy-to-access binary form. 
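On the application side, it is safest to build such JSON strings with a serializer rather than by hand. A minimal Python sketch (the helper name is illustrative, not part of TiDB):

```python
import json

# Hypothetical helper: serialize a Python dict into a legal JSON string
# that can be bound to the address_info column; the server then parses
# the text and stores it in its binary form.
def make_address_info(city, district=None):
    doc = {"city": city}
    if district is not None:
        doc["district"] = district
    return json.dumps(doc)

print(make_address_info("Beijing"))  # {"city": "Beijing"}
```

Using a serializer guarantees the literal is legal JSON, so the insertion cannot fail because of a malformed document.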
+ +You can also convert other data types into JSON using `CAST`: + +```sql +INSERT INTO person (name, address_info) VALUES ("John", CAST('{"city": "Beijing"}' AS JSON)); +INSERT INTO person (name, address_info) VALUES ("John", CAST('123' AS JSON)); +INSERT INTO person (name, address_info) VALUES ("John", CAST(123 AS JSON)); +``` + +Now, if you want to query all the users living in Beijing from the table, you can simply use the following SQL statement: + +```sql +SELECT id, name FROM person WHERE JSON_EXTRACT(address_info, '$.city') = 'Beijing'; +``` + +TiDB supports the `JSON_EXTRACT` function, which behaves exactly as in MySQL. In the statement above, it extracts the `city` field from the `address_info` document. The second argument is a "path expression" that specifies which field to extract. The following examples help you understand the "path expression": + +```sql +SET @person = '{"name":"John","friends":[{"name":"Forest","age":16},{"name":"Zhang San","gender":"male"}]}'; + +SELECT JSON_EXTRACT(@person, '$.name'); -- gets "John" +SELECT JSON_EXTRACT(@person, '$.friends[0].age'); -- gets 16 +SELECT JSON_EXTRACT(@person, '$.friends[1].gender'); -- gets "male" +SELECT JSON_EXTRACT(@person, '$.friends[2].name'); -- gets NULL +``` + +In addition to inserting and querying data, TiDB also supports editing JSON. 
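To make the path-expression semantics concrete, here is a small Python sketch (illustrative only, not TiDB code) that evaluates the simple `$.key[index]` paths used above, returning `None` where SQL returns `NULL`:

```python
import json
import re

def extract(doc, path):
    """Evaluate a simple path expression such as '$.friends[0].age'."""
    value = json.loads(doc)
    # Tokenize the path: '.friends' -> key, '[0]' -> array index, '.age' -> key
    for key, idx in re.findall(r"\.(\w+)|\[(\d+)\]", path):
        try:
            value = value[key] if key else value[int(idx)]
        except (KeyError, IndexError, TypeError):
            return None  # a missing path behaves like SQL NULL
    return value

person = '{"name":"John","friends":[{"name":"Forest","age":16},{"name":"Zhang San","gender":"male"}]}'
print(extract(person, '$.name'))            # John
print(extract(person, '$.friends[0].age'))  # 16
print(extract(person, '$.friends[2].name')) # None
```

Note how the out-of-range index `[2]` yields `None`, mirroring the `NULL` result in the SQL example above.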
In general, TiDB currently supports the following JSON functions in MySQL 5.7: + +- [JSON_EXTRACT](https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-extract) +- [JSON_ARRAY](https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-array) +- [JSON_OBJECT](https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-object) +- [JSON_SET](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-set) +- [JSON_REPLACE](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-replace) +- [JSON_INSERT](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-insert) +- [JSON_REMOVE](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-remove) +- [JSON_TYPE](https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-type) +- [JSON_UNQUOTE](https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-unquote) + +You can infer the general purpose of these functions directly from their names. These functions in TiDB behave the same as in MySQL 5.7. For more information, see the [JSON Functions document of MySQL 5.7](https://dev.mysql.com/doc/refman/5.7/en/json-functions.html). If you are a user of MySQL 5.7, you can migrate to TiDB seamlessly. + +Currently, TiDB does not support all the JSON functions in MySQL 5.7. This is because our preliminary goal is to provide complete support for **MySQL X Plugin**, which covers the majority of JSON functions used to insert, select, update and delete data. More functions will be supported if necessary. + +## Index JSON using generated column + +A full table scan is executed when you query a JSON field. When you run the `EXPLAIN` statement in TiDB, the results show that the query is a full table scan. So, can you index a JSON field? 
+ +First, creating an index directly on a JSON column like this does not work: + +```sql +CREATE TABLE person ( + id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, + name VARCHAR(255) NOT NULL, + address_info JSON, + KEY (address_info) +); +``` + +This is not because it is technically impossible, but because directly comparing JSON values is meaningless. Although we could agree on some comparison rules, such as any `ARRAY` being larger than any `OBJECT`, they would be of little use. Therefore, as in MySQL 5.7, TiDB prohibits the direct creation of an index on a JSON field, but you can index the fields in the JSON document in the form of a generated column: + +```sql +CREATE TABLE person ( + id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, + name VARCHAR(255) NOT NULL, + address_info JSON, + city VARCHAR(64) AS (JSON_UNQUOTE(JSON_EXTRACT(address_info, '$.city'))) VIRTUAL, + KEY (city) +); +``` + +In this table, the `city` column is a **generated column**. As the name implies, the column is generated from other columns in the table, and cannot be assigned a value when you insert or update data. When creating a generated column, you can specify it as `VIRTUAL` so that it is not explicitly stored in the record but is computed from other columns when needed. This is particularly useful when the column is wide and you need to save storage space. You can create an index on this generated column, and it looks the same as any regular column. In queries, you can run the following statement: + +```sql +SELECT name, id FROM person WHERE city = 'Beijing'; +``` + +In this way, the query can use the index created on the generated column. + +> **Note**: In the JSON document, if the field in the specified path does not exist, the result of `JSON_EXTRACT` is `NULL`. The value of the indexed generated column is also `NULL`. If this is not what you want, you can add a `NOT NULL` constraint on the generated column. In this way, the problem can be detected when the value of the `city` field is `NULL` after you insert data. 
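Conceptually, the virtual column evaluates `JSON_UNQUOTE(JSON_EXTRACT(address_info, '$.city'))` for each row. A Python sketch of that per-row computation (illustrative only, including the `NULL` behavior described in the note above):

```python
import json

# What the generated column conceptually evaluates per row:
# extract $.city and unquote it. A missing field yields None,
# which corresponds to SQL NULL and would be rejected by a
# NOT NULL constraint on the generated column.
def generated_city(address_info):
    if address_info is None:
        return None
    return json.loads(address_info).get("city")

print(generated_city('{"city": "Beijing"}'))   # Beijing
print(generated_city('{"country": "China"}'))  # None
```

Because the value is fully determined by `address_info`, TiDB can maintain the index on `city` automatically whenever the JSON column changes.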
+ +## Limitations + +The current limitations of JSON and generated column are as follows: + +- You cannot add the generated column in the storage type of `STORED` through `ALTER TABLE`. +- You cannot create an index on the generated column through `ALTER TABLE`. + +The above functions and some other JSON functions are under development. diff --git a/v2.0/sql/json-functions.md b/v2.0/sql/json-functions.md new file mode 100755 index 0000000000000..ee52706900a77 --- /dev/null +++ b/v2.0/sql/json-functions.md @@ -0,0 +1,33 @@ +--- +title: JSON Functions +summary: Learn about JSON functions. +category: user guide +--- + +# JSON Functions + +| Function Name and Syntactic Sugar | Description | +| ---------- | ------------------ | +| [JSON_EXTRACT(json_doc, path[, path] ...)][json_extract]| Return data from a JSON document, selected from the parts of the document matched by the `path` arguments | +| [JSON_UNQUOTE(json_val)][json_unquote] | Unquote JSON value and return the result as a `utf8mb4` string | +| [JSON_TYPE(json_val)][json_type] | Return a `utf8mb4` string indicating the type of a JSON value | +| [JSON_SET(json_doc, path, val[, path, val] ...)][json_set] | Insert or update data in a JSON document and return the result | +| [JSON_INSERT(json_doc, path, val[, path, val] ...)][json_insert] | Insert data into a JSON document and return the result | +| [JSON_REPLACE(json_doc, path, val[, path, val] ...)][json_replace] | Replace existing values in a JSON document and return the result | +| [JSON_REMOVE(json_doc, path[, path] ...)][json_remove] | Remove data from a JSON document and return the result | +| [JSON_MERGE(json_doc, json_doc[, json_doc] ...)][json_merge] | Merge two or more JSON documents and return the merged result | +| [JSON_OBJECT(key, val[, key, val] ...)][json_object] | Evaluate a (possibly empty) list of key-value pairs and return a JSON object containing those pairs | +| [JSON_ARRAY([val[, val] ...])][json_array] | Evaluate a (possibly empty) list of 
values and return a JSON array containing those values | +| -> | Return value from JSON column after evaluating path; syntactic sugar for `JSON_EXTRACT(doc, path_literal)` | +| ->> | Return value from JSON column after evaluating path and unquoting the result; syntactic sugar for `JSON_UNQUOTE(JSON_EXTRACT(doc, path_literal))` | + +[json_extract]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-extract +[json_unquote]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-unquote +[json_type]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-type +[json_set]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-set +[json_insert]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-insert +[json_replace]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-replace +[json_remove]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-remove +[json_merge]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-merge +[json_object]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-object +[json_array]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-array diff --git a/v2.0/sql/keywords-and-reserved-words.md b/v2.0/sql/keywords-and-reserved-words.md new file mode 100755 index 0000000000000..5961cd9fbdec7 --- /dev/null +++ b/v2.0/sql/keywords-and-reserved-words.md @@ -0,0 +1,146 @@ +--- +title: Keywords and Reserved Words +summary: Learn about the keywords and reserved words in TiDB. +category: user guide +--- + +# Keywords and Reserved Words + +Keywords are words that have significance in SQL. 
Certain keywords, such as `SELECT`, `UPDATE`, or `DELETE`, are reserved and require special treatment for use as identifiers such as table and column names. For example, as table names, the reserved words must be quoted with backquotes: + +``` +mysql> CREATE TABLE select (a INT); +ERROR 1105 (HY000): line 0 column 19 near " (a INT)" (total length 27) +mysql> CREATE TABLE `select` (a INT); +Query OK, 0 rows affected (0.09 sec) +``` + +The `BEGIN` and `END` are keywords but not reserved words, so you do not need to quote them with backquotes: + +``` +mysql> CREATE TABLE `select` (BEGIN int, END int); +Query OK, 0 rows affected (0.09 sec) +``` + +Exception: A word that follows a period `.` qualifier does not need to be quoted with backquotes either: + +``` +mysql> CREATE TABLE test.select (BEGIN int, END int); +Query OK, 0 rows affected (0.08 sec) +``` + +The following table lists the keywords and reserved words in TiDB. The reserved words are labelled with (R). + +| ACTION | ADD (R) | ADDDATE | +|:------------------------|:-------------------|:-----------------------| +| ADMIN | AFTER | ALL (R) | +| ALTER (R) | ALWAYS | ANALYZE(R) | +| AND (R) | ANY | AS (R) | +| ASC (R) | ASCII | AUTO_INCREMENT | +| AVG | AVG_ROW_LENGTH | BEGIN | +| BETWEEN (R) | BIGINT (R) | BINARY (R) | +| BINLOG | BIT | BIT_XOR | +| BLOB (R) | BOOL | BOOLEAN | +| BOTH (R) | BTREE | BY (R) | +| BYTE | CASCADE (R) | CASE (R) | +| CAST | CHANGE (R) | CHAR (R) | +| CHARACTER (R) | CHARSET | CHECK (R) | +| CHECKSUM | COALESCE | COLLATE (R) | +| COLLATION | COLUMN (R) | COLUMNS | +| COMMENT | COMMIT | COMMITTED | +| COMPACT | COMPRESSED | COMPRESSION | +| CONNECTION | CONSISTENT | CONSTRAINT (R) | +| CONVERT (R) | COUNT | CREATE (R) | +| CROSS (R) | CURRENT_DATE (R) | CURRENT_TIME (R) | +| CURRENT_TIMESTAMP (R) | CURRENT_USER (R) | CURTIME | +| DATA | DATABASE (R) | DATABASES (R) | +| DATE | DATE_ADD | DATE_SUB | +| DATETIME | DAY | DAY_HOUR (R) | +| DAY_MICROSECOND (R) | DAY_MINUTE (R) | DAY_SECOND 
(R) | +| DDL | DEALLOCATE | DEC | +| DECIMAL (R) | DEFAULT (R) | DELAY_KEY_WRITE | +| DELAYED (R) | DELETE (R) | DESC (R) | +| DESCRIBE (R) | DISABLE | DISTINCT (R) | +| DISTINCTROW (R) | DIV (R) | DO | +| DOUBLE (R) | DROP (R) | DUAL (R) | +| DUPLICATE | DYNAMIC | ELSE (R) | +| ENABLE | ENCLOSED | END | +| ENGINE | ENGINES | ENUM | +| ESCAPE | ESCAPED | EVENTS | +| EXCLUSIVE | EXECUTE | EXISTS | +| EXPLAIN (R) | EXTRACT | FALSE (R) | +| FIELDS | FIRST | FIXED | +| FLOAT (R) | FLUSH | FOR (R) | +| FORCE (R) | FOREIGN (R) | FORMAT | +| FROM (R) | FULL | FULLTEXT (R) | +| FUNCTION | GENERATED (R) | GET_FORMAT | +| GLOBAL | GRANT (R) | GRANTS | +| GROUP (R) | GROUP_CONCAT | HASH | +| HAVING (R) | HIGH_PRIORITY (R) | HOUR | +| HOUR_MICROSECOND (R) | HOUR_MINUTE (R) | HOUR_SECOND (R) | +| IDENTIFIED | IF (R) | IGNORE (R) | +| IN (R) | INDEX (R) | INDEXES | +| INFILE (R) | INNER (R) | INSERT (R) | +| INT (R) | INTEGER (R) | INTERVAL (R) | +| INTO (R) | IS (R) | ISOLATION | +| JOBS | JOIN (R) | JSON | +| KEY (R) | KEY_BLOCK_SIZE | KEYS (R) | +| KILL (R) | LEADING (R) | LEFT (R) | +| LESS | LEVEL | LIKE (R) | +| LIMIT (R) | LINES (R) | LOAD (R) | +| LOCAL | LOCALTIME (R) | LOCALTIMESTAMP (R) | +| LOCK (R) | LONGBLOB (R) | LONGTEXT (R) | +| LOW_PRIORITY (R) | MAX | MAX_ROWS | +| MAXVALUE (R) | MEDIUMBLOB (R) | MEDIUMINT (R) | +| MEDIUMTEXT (R) | MICROSECOND | MIN | +| MIN_ROWS | MINUTE | MINUTE_MICROSECOND (R) | +| MINUTE_SECOND (R) | | | +| MOD (R) | MODE | MODIFY | +| MONTH | NAMES | NATIONAL | +| NATURAL (R) | NO | NO_WRITE_TO_BINLOG (R) | +| NONE | NOT (R) | NOW | +| NULL (R) | NUMERIC (R) | NVARCHAR (R) | +| OFFSET | ON (R) | ONLY | +| OPTION (R) | OR (R) | ORDER (R) | +| OUTER (R) | PARTITION (R) | PARTITIONS | +| PASSWORD | PLUGINS | POSITION | +| PRECISION (R) | PREPARE | PRIMARY (R) | +| PRIVILEGES | PROCEDURE (R) | PROCESS | +| PROCESSLIST | QUARTER | QUERY | +| QUICK | RANGE (R) | READ (R) | +| REAL 
(R) | REDUNDANT | REFERENCES (R) | +| REGEXP (R) | RENAME (R) | REPEAT (R) | +| REPEATABLE | REPLACE (R) | RESTRICT (R) | +| REVERSE | REVOKE (R) | RIGHT (R) | +| RLIKE (R) | ROLLBACK | ROW | +| ROW_COUNT | ROW_FORMAT | SCHEMA | +| SCHEMAS | SECOND | SECOND_MICROSECOND (R) | +| SELECT (R) | SERIALIZABLE | SESSION | +| SET (R) | SHARE | SHARED | +| SHOW (R) | SIGNED | SMALLINT (R) | +| SNAPSHOT | SOME | SQL_CACHE | +| SQL_CALC_FOUND_ROWS (R) | SQL_NO_CACHE | START | +| STARTING (R) | STATS | STATS_BUCKETS | +| STATS_HISTOGRAMS | STATS_META | STATS_PERSISTENT | +| STATUS | STORED (R) | SUBDATE | +| SUBSTR | SUBSTRING | SUM | +| SUPER | TABLE (R) | TABLES | +| TERMINATED (R) | TEXT | THAN | +| THEN (R) | TIDB | TIDB_INLJ | +| TIDB_SMJ | TIME | TIMESTAMP | +| TIMESTAMPADD | TIMESTAMPDIFF | TINYBLOB (R) | +| TINYINT (R) | TINYTEXT (R) | TO (R) | +| TRAILING (R) | TRANSACTION | TRIGGER (R) | +| TRIGGERS | TRIM | TRUE (R) | +| TRUNCATE | UNCOMMITTED | UNION (R) | +| UNIQUE (R) | UNKNOWN | UNLOCK (R) | +| UNSIGNED (R) | UPDATE (R) | USE (R) | +| USER | USING (R) | UTC_DATE (R) | +| UTC_TIME (R) | UTC_TIMESTAMP (R) | VALUE | +| VALUES (R) | VARBINARY (R) | VARCHAR (R) | +| VARIABLES | VIEW | VIRTUAL (R) | +| WARNINGS | WEEK | WHEN (R) | +| WHERE (R) | WITH (R) | WRITE (R) | +| XOR (R) | YEAR | YEAR_MONTH (R) | | +| ZEROFILL (R) | | | diff --git a/v2.0/sql/literal-values.md b/v2.0/sql/literal-values.md new file mode 100755 index 0000000000000..dd9d20c9d3610 --- /dev/null +++ b/v2.0/sql/literal-values.md @@ -0,0 +1,244 @@ +--- +title: Literal Values +summary: Learn how to use various literal values. +category: user guide +--- + +# Literal Values + +This document describes String literals, Numeric literals, NULL values, Hexadecimal literals, Date and time literals, Boolean literals, and Bit-value literals. + +## String literals + +A string is a sequence of bytes or characters, enclosed within either single quote `'` or double quote `"` characters. 
For example: + +``` +'example string' +"example string" +``` + +Quoted strings placed next to each other are concatenated to a single string. The following lines are equivalent: + +``` +'a string' +'a' ' ' 'string' +"a" ' ' "string" +``` + +If the `ANSI_QUOTES` SQL MODE is enabled, string literals can be quoted only within single quotation marks because a string quoted within double quotation marks is interpreted as an identifier. + +A binary string is a string of bytes. Each binary string has a character set and collation named `binary`. A non-binary string is a string of characters. It has a character set other than `binary` and a collation that is compatible with the character set. + +For both types of strings, comparisons are based on the numeric values of the string unit. For binary strings, the unit is the byte. For non-binary strings, the unit is the character and some character sets support multibyte characters. + +A string literal may have an optional `character set introducer` and `COLLATE clause`, to designate it as a string that uses a specific character set and collation. TiDB only supports this in syntax, but does not process it. + +``` +[_charset_name]'string' [COLLATE collation_name] +``` + +For example: + +``` +SELECT _latin1'string'; +SELECT _binary'string'; +SELECT _utf8'string' COLLATE utf8_bin; +``` + +You can use N'literal' (or n'literal') to create a string in the national character set. 
The following statements are equivalent: + +``` +SELECT N'some text'; +SELECT n'some text'; +SELECT _utf8'some text'; +``` + +Escape characters: + +- `\0`: An ASCII NUL (X'00') character +- `\'`: A single quote (') character +- `\"`: A double quote (") character +- `\b`: A backspace character +- `\n`: A newline (linefeed) character +- `\r`: A carriage return character +- `\t`: A tab character +- `\z`: ASCII 26 (Ctrl + Z) +- `\\`: A backslash `\` character +- `\%`: A `%` character +- `\_`: A `_` character + +You can use the following ways to include quote characters within a string: + +- A `'` inside a string quoted with `'` may be written as `''`. +- A `"` inside a string quoted with `"` may be written as `""`. +- Precede the quote character by an escape character `\`. +- A `'` inside a string quoted with `"` needs no special treatment, and a `"` inside a string quoted with `'` needs no special treatment either. + +For more information, see [String Literals in MySQL](https://dev.mysql.com/doc/refman/5.7/en/string-literals.html). + +## Numeric literals + +Numeric literals include integer literals, DECIMAL literals, and floating-point literals. + +DECIMAL and floating-point literals may include `.` as a decimal separator. Numbers may be preceded by `-` or `+` to indicate a negative or positive value respectively. + +Exact-value numeric literals can be represented as `1, .2, 3.4, -5, -6.78, +9.10`. + +Numeric literals can also be represented in scientific notation, such as `1.2E3, 1.2E-3, -1.2E3, -1.2E-3`. + +For more information, see [Numeric Literals in MySQL](https://dev.mysql.com/doc/refman/5.7/en/number-literals.html). + +## NULL values + +The `NULL` value means “no data”. NULL can be written in any letter case. A synonym is `\N` (case sensitive). + +Be aware that the `NULL` value is different from values such as `0` for numeric types or the empty string `''` for string types. 
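The quote-inclusion rules for string literals described above can be sketched in Python (an illustrative helper; production code should prefer driver-level parameter binding over hand-built literals):

```python
# Illustrative helper implementing the quote-inclusion rules for
# single-quoted string literals: escape backslashes, then double
# any single quote; double quotes need no special treatment.
def quote_single(s):
    return "'" + s.replace("\\", "\\\\").replace("'", "''") + "'"

print(quote_single("it's"))      # 'it''s'
print(quote_single('say "hi"'))  # 'say "hi"'
```

Doubling the enclosing quote character and backslash-escaping are interchangeable here; the sketch uses doubling for quotes because it works regardless of SQL mode settings that affect backslash handling.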
+ +## Hexadecimal literals + +Hexadecimal literal values are written using `X'val'` or `0xval` notation, where `val` contains hexadecimal digits. A leading `0x` is case sensitive and cannot be written as `0X`. + +Legal hexadecimal literals: + +``` +X'ac12' +X'12AC' +x'ac12' +x'12AC' +0xac12 +0x12AC +``` + +Illegal hexadecimal literals: + +``` +X'1z' (z is not a hexadecimal legal digit) +0X12AC (0X must be written as 0x) +``` + +Hexadecimal literals written using `X'val'` notation must contain an even number of digits. To avoid the syntax error, pad the value with a leading zero: + +``` +mysql> select X'aff'; +ERROR 1105 (HY000): line 0 column 13 near ""hex literal: invalid hexadecimal format, must even numbers, but 3 (total length 13) +mysql> select X'0aff'; ++---------+ +| X'0aff' | ++---------+ +| + | ++---------+ +1 row in set (0.00 sec) +``` + +By default, a hexadecimal literal is a binary string. + +To convert a string or a number to a string in hexadecimal format, use the `HEX()` function: + +``` +mysql> SELECT HEX('TiDB'); ++-------------+ +| HEX('TiDB') | ++-------------+ +| 54694442 | ++-------------+ +1 row in set (0.01 sec) + +mysql> SELECT X'54694442'; ++-------------+ +| X'54694442' | ++-------------+ +| TiDB | ++-------------+ +1 row in set (0.00 sec) +``` + +## Date and time literals + +Date and time values can be represented in several formats, such as quoted strings or as numbers. When TiDB expects a date, it interprets any of `'2015-07-21'`, `'20150721'` and `20150721` as a date. + +TiDB supports the following formats for date values: + +- As a string in either `'YYYY-MM-DD'` or `'YY-MM-DD'` format. The `-` delimiter is "relaxed" in syntax. Any punctuation character may be used as the delimiter between date parts. For example, `'2017-08-24'`, `'2017&08&24'` and `'2012@12^31'` are equivalent. The only delimiter recognized is the `.` character, which is treated as a decimal point to separate the integer and fractional parts. 
The date and time parts can be separated by `T` instead of a space. For example, `2017-8-24 10:42:00` and `2017-8-24T10:42:00` are equivalent. +- As a string with no delimiters in either `'YYYYMMDDHHMMSS'` or `'YYMMDDHHMMSS'` format. For example, `'20170824104520'` and `'170824104520'` are interpreted as `'2017-08-24 10:45:20'`. But `'170824304520'` is illegal because the hour part exceeds the legal range. +- As a number in either `YYYYMMDDHHMMSS` or `YYMMDDHHMMSS` format, without single quotation marks or double quotation marks. For example, `20170824104520` is interpreted as `'2017-08-24 10:45:20'`. + +A DATETIME or TIMESTAMP value can include a trailing fractional seconds part in up to microseconds (6 digits) precision. The fractional part should always be separated from the rest of the time by a decimal point. + +Dates containing two-digit year values are ambiguous. It is recommended to use the four-digit format. TiDB interprets two-digit year values using the following rules: + +- Year values in the range of `70-99` are converted to `1970-1999`. +- Year values in the range of `00-69` are converted to `2000-2069`. + +For values specified as strings that include date part delimiters, it is unnecessary to specify two digits for month or day values that are less than 10. `'2017-8-4'` is the same as `'2017-08-04'`. Similarly, for values specified as strings that include time part delimiters, it is unnecessary to specify two digits for hour, minute, or second values that are less than 10. `'2017-08-24 1:2:3'` is the same as `'2017-08-24 01:02:03'`. + +In TiDB, the date or time values specified as numbers are interpreted according to their length: + +- 6 digits: `YYMMDD` +- 12 digits: `YYMMDDHHMMSS` +- 8 digits: `YYYYMMDD` +- 14 digits: `YYYYMMDDHHMMSS` + +TiDB supports the following formats for time values: + +- As a string in `'D HH:MM:SS'` format. You can also use one of the following “relaxed” syntaxes: `'HH:MM:SS'`, `'HH:MM'`, `'D HH:MM'`, `'D HH'`, or `'SS'`. 
Here D represents days and the legal value range is `0-34`. +- As a number in `'HHMMSS'` format. For example, `231010` is interpreted as `'23:10:10'`. +- A number in any of the `SS`, `MMSS` or `HHMMSS` format can be treated as time. + +The time value can also include a trailing fractional part in up to 6 digits precision. The `.` character represents the decimal point. + +For more information, see [Date and Time Literals in MySQL](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-literals.html). + +## Boolean literals + +The constants `TRUE` and `FALSE` evaluate to 1 and 0 respectively, which are not case sensitive. + +``` +mysql> SELECT TRUE, true, tRuE, FALSE, FaLsE, false; ++------+------+------+-------+-------+-------+ +| TRUE | true | tRuE | FALSE | FaLsE | false | ++------+------+------+-------+-------+-------+ +| 1 | 1 | 1 | 0 | 0 | 0 | ++------+------+------+-------+-------+-------+ +1 row in set (0.00 sec) +``` + +## Bit-value literals + +Bit-value literals are written using `b'val'` or `0bval` notation. The `val` is a binary value written using zeros and ones. A leading `0b` is case sensitive and cannot be written as `0B`. + +Legal bit-value literals: + +``` +b'01' +B'01' +0b01 +``` + +Illegal bit-value literals: + +``` +b'2' (2 is not a binary digit; it must be 0 or 1) +0B01 (0B must be written as 0b) +``` + +By default, a bit-value literal is a binary string. + +Bit values are returned as binary values, which may not display well in the MySQL client. To convert a bit value to printable form, you can use a conversion function such as `BIN()` or `HEX()`. 
+ +```sql +CREATE TABLE t (b BIT(8)); +INSERT INTO t SET b = b'00010011'; +INSERT INTO t SET b = b'1110'; +INSERT INTO t SET b = b'100101'; + +mysql> SELECT b+0, BIN(b), HEX(b) FROM t; ++------+--------+--------+ +| b+0 | BIN(b) | HEX(b) | ++------+--------+--------+ +| 19 | 10011 | 13 | +| 14 | 1110 | E | +| 37 | 100101 | 25 | ++------+--------+--------+ +3 rows in set (0.00 sec) +``` diff --git a/v2.0/sql/miscellaneous-functions.md b/v2.0/sql/miscellaneous-functions.md new file mode 100755 index 0000000000000..db3150f02f639 --- /dev/null +++ b/v2.0/sql/miscellaneous-functions.md @@ -0,0 +1,24 @@ +--- +title: Miscellaneous Functions +summary: Learn about miscellaneous functions in TiDB. +category: user guide +--- + +# Miscellaneous Functions + +| Name | Description | +|:------------|:-----------------------------------------------------------------------------------------------| +| [`ANY_VALUE()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_any-value) | Suppress ONLY_FULL_GROUP_BY value rejection | +| [`SLEEP()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_sleep) | Sleep for a number of seconds | +| [`UUID()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid) | Return a Universal Unique Identifier (UUID) | +| [`VALUES()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_values) | Defines the values to be used during an INSERT | +| [`INET_ATON()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet-aton) | Return the numeric value of an IP address | +| [`INET_NTOA()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet-ntoa) | Return the IP address from a numeric value | +| [`INET6_ATON()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet6-aton) | Return the numeric value of an IPv6 address | +| 
[`INET6_NTOA()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_inet6-ntoa) | Return the IPv6 address from a numeric value |
+| [`IS_IPV4()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv4) | Whether argument is an IPv4 address |
+| [`IS_IPV4_COMPAT()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv4-compat) | Whether argument is an IPv4-compatible address |
+| [`IS_IPV4_MAPPED()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv4-mapped) | Whether argument is an IPv4-mapped address |
+| [`IS_IPV6()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_is-ipv6) | Whether argument is an IPv6 address |
+| [`GET_LOCK()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_get-lock) | Get a named lock |
+| [`RELEASE_LOCK()`](https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_release-lock) | Release the named lock | diff --git a/v2.0/sql/mysql-compatibility.md b/v2.0/sql/mysql-compatibility.md new file mode 100755 index 0000000000000..69d9df8857927 --- /dev/null +++ b/v2.0/sql/mysql-compatibility.md @@ -0,0 +1,118 @@ +---
+title: Compatibility with MySQL
+summary: Learn about the compatibility of TiDB with MySQL, and the unsupported and different features.
+category: user guide
+---
+
+# Compatibility with MySQL
+
+TiDB supports the majority of the MySQL 5.7 syntax, including cross-row transactions, JOIN, subquery, and so on. You can connect to TiDB directly using your own MySQL client. If your existing business is developed based on MySQL, you can replace MySQL with TiDB to power your application without changing a single line of code in most cases.
+
+TiDB is compatible with most of the MySQL database management & administration tools such as `PHPMyAdmin`, `Navicat`, `MySQL Workbench`, and so on.
It also supports the database backup tools, such as `mysqldump` and `mydumper/myloader`.
+
+However, in TiDB, the following MySQL features are not supported for the time being or are different:
+
+## Unsupported features
+
++ Stored procedures
++ Views
++ Triggers
++ User-defined functions
++ `FOREIGN KEY` constraints
++ `FULLTEXT` indexes
++ `Spatial` indexes
++ Non-UTF-8 character sets
++ Adding a primary key
++ Dropping a primary key
+
+## Features that are different from MySQL
+
+### Auto-increment ID
+
+The auto-increment ID feature in TiDB is only guaranteed to be automatically incremental and unique but is not guaranteed to be allocated sequentially. Currently, TiDB allocates IDs in batches. If data is inserted into multiple TiDB servers simultaneously, the allocated IDs are not sequential.
+
+> **Warning**:
+>
+> If you use the auto-increment ID in a cluster with multiple tidb-server instances, do not mix default values and custom values, otherwise an error occurs in the following situation:
+>
+> Assume that you have a table with the auto-increment ID:
+>
+> ```
+> create table t(id int unique key auto_increment, c int);
+> ```
+>
+> The principle of the auto-increment ID in TiDB is that each tidb-server instance caches a section of ID values (currently 30000 IDs are cached) for allocation and fetches the next section after this section is used up.
+>
+> Assume that the cluster contains two tidb-server instances, namely Instance A and Instance B. Instance A caches the auto-increment IDs of [1, 30000], while Instance B caches the auto-increment IDs of [30001, 60000].
+>
+> The operations are executed as follows:
+>
+> 1. The client issues the `insert into t values (1, 1)` statement to Instance B, which sets the `id` to 1, and the statement is executed successfully.
+> 2. The client issues the `insert into t (c) values (1)` statement to Instance A. This statement does not specify the value of `id`, so Instance A allocates the value. Instance A caches the auto-increment IDs of [1, 30000], so it allocates 1 as the `id` value and increases its local counter by 1. However, a row with the `id` of 1 already exists in the cluster, so the `Duplicated Error` message is reported.
+
+### Built-in functions
+
+TiDB supports most of the MySQL built-in functions, but not all. See [TiDB SQL Grammar](https://pingcap.github.io/sqlgram/#FunctionCallKeyword) for the supported functions.
+
+### DDL
+
+TiDB implements the asynchronous schema changes algorithm in F1. Data Manipulation Language (DML) operations are not blocked during DDL execution. Currently, the supported DDL includes:
+
++ Create Database
++ Drop Database
++ Create Table
++ Drop Table
++ Add Index: Does not support creating multiple indexes at the same time.
++ Drop Index
++ Add Column:
+  - Does not support creating multiple columns at the same time.
+  - Does not support setting a column as the primary key, or creating a unique index, or specifying auto_increment while adding it.
++ Drop Column: Does not support dropping the primary key column or index column.
++ Alter Column
++ Change/Modify Column
+  - Supports changing/modifying the types among the following integer types: TinyInt, SmallInt, MediumInt, Int, BigInt.
+  - Supports changing/modifying the types among the following string types: Char, Varchar, Text, TinyText, MediumText, LongText.
+  - Supports changing/modifying the types among the following Blob types: Blob, TinyBlob, MediumBlob, LongBlob.
+
+  > **Note:** The changing/modifying column operation cannot shorten the length of the original type and it cannot change the unsigned/charset/collate attributes of the column.
+
+  - Supports changing the following type definitions: default value, comment, null, not null and OnUpdate, but does not support changing from null to not null.
+  - Supports parsing the `LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}` syntax, but it performs no actual operation.
+
++ Truncate Table
++ Rename Table
++ Create Table Like
+
+### Transaction
+
+TiDB implements an optimistic transaction model. Unlike MySQL, which uses row-level locking to avoid write conflicts, in TiDB the write conflict is checked only in the `commit` process during the execution of statements like `Update`, `Insert`, and `Delete`.
+
+**Note:** On the business side, remember to check the returned results of `commit`, because even if there is no error in the execution, there might be errors in the `commit` process.
+
+### Load data
+
++ Syntax:
+
+    ```
+    LOAD DATA LOCAL INFILE 'file_name' INTO TABLE table_name
+    {FIELDS | COLUMNS} TERMINATED BY 'string' ENCLOSED BY 'char' ESCAPED BY 'char'
+    LINES STARTING BY 'string' TERMINATED BY 'string'
+    (col_name ...);
+    ```
+
+    Currently, the supported `ESCAPED BY` characters are: `/\/\`.
+
++ Transaction
+
+    When TiDB executes a load data operation, by default, every 20,000 rows are committed as one transaction for persistent storage. If a load data operation inserts more than 20,000 rows, it is divided into multiple transactions to commit. If an error occurs in one transaction, the transaction in process is not committed, but the transactions before it are committed successfully. In this case, part of the load data operation is inserted successfully and the rest of the data insertion fails. In contrast, MySQL treats a load data operation as a single transaction: one error leads to the failure of the entire load data operation.
+
+### Default differences
+
+- Default character set: `latin1` in MySQL 5.7 (UTF-8 in MySQL 8.0), while `utf8mb4` in TiDB.
+- Default collation: `latin1_swedish_ci` in MySQL 5.7, while `binary` in TiDB.
+- Default value of `lower_case_table_names`:
+  - The default value in TiDB is 2 and currently TiDB only supports 2.
+ - The default value in MySQL: + - On Linux: 0 + - On Windows: 1 + - On macOS: 2 \ No newline at end of file diff --git a/v2.0/sql/numeric-functions-and-operators.md b/v2.0/sql/numeric-functions-and-operators.md new file mode 100755 index 0000000000000..0ba58269145d7 --- /dev/null +++ b/v2.0/sql/numeric-functions-and-operators.md @@ -0,0 +1,57 @@ +--- +title: Numeric Functions and Operators +summary: Learn about the numeric functions and operators. +category: user guide +--- + +# Numeric Functions and Operators + +This document describes the arithmetic operators and mathematical functions. + +## Arithmetic operators + +| Name | Description | +|:----------------------------------------------------------------------------------------------|:----------------------------------| +| [`+`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_plus) | Addition operator | +| [`-`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_minus) | Minus operator | +| [`*`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_times) | Multiplication operator | +| [`/`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_divide) | Division operator | +| [`DIV`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_div) | Integer division | +| [`%`, `MOD`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_mod) | Modulo operator | +| [`-`](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_unary-minus) | Change the sign of the argument | + + +## Mathematical functions + +| Name | Description | +|:----------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------| +| [`POW()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_pow) | Return the argument raised to the specified power | +| 
[`POWER()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_power) | Return the argument raised to the specified power | +| [`EXP()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_exp) | Raise to the power of | +| [`SQRT()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_sqrt) | Return the square root of the argument | +| [`LN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ln) | Return the natural logarithm of the argument | +| [`LOG()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_log) | Return the natural logarithm of the first argument | +| [`LOG2()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_log2) | Return the base-2 logarithm of the argument | +| [`LOG10()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_log10) | Return the base-10 logarithm of the argument | +| [`PI()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_pi) | Return the value of pi | +| [`TAN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_tan) | Return the tangent of the argument | +| [`COT()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_cot) | Return the cotangent | +| [`SIN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_sin) | Return the sine of the argument | +| [`COS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_cos) | Return the cosine | +| [`ATAN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_atan) | Return the arc tangent | +| [`ATAN2(), ATAN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_atan2) | Return the arc tangent of the two arguments | +| [`ASIN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_asin) | 
Return the arc sine | +| [`ACOS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_acos) | Return the arc cosine | +| [`RADIANS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_radians) | Return argument converted to radians | +| [`DEGREES()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_degrees) | Convert radians to degrees | +| [`MOD()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_mod) | Return the remainder | +| [`ABS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_abs) | Return the absolute value | +| [`CEIL()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceil) | Return the smallest integer value not less than the argument | +| [`CEILING()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceiling) | Return the smallest integer value not less than the argument | +| [`FLOOR()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_floor) | Return the largest integer value not greater than the argument | +| [`ROUND()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_round) | Round the argument | +| [`RAND()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_rand) | Return a random floating-point value | +| [`SIGN()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_sign) | Return the sign of the argument | +| [`CONV()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_conv) | Convert numbers between different number bases | +| [`TRUNCATE()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_truncate) | Truncate to specified number of decimal places | +| [`CRC32()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_crc32) | Compute a cyclic redundancy check 
value | diff --git a/v2.0/sql/operators.md b/v2.0/sql/operators.md new file mode 100755 index 0000000000000..a11d98334f789 --- /dev/null +++ b/v2.0/sql/operators.md @@ -0,0 +1,135 @@ +--- +title: Operators +summary: Learn about the operators precedence, comparison functions and operators, logical operators, and assignment operators. +category: user guide +--- + +# Operators + +This document describes the operators precedence, comparison functions and operators, logical operators, and assignment operators. + +- [Operator precedence](#operator-precedence) +- [Comparison functions and operators](#comparison-functions-and-operators) +- [Logical operators](#logical-operators) +- [Assignment operators](#assignment-operators) + +| Name | Description | +| ---------------------------------------- | ---------------------------------------- | +| [AND, &&](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_and) | Logical AND | +| [=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-equal) | Assign a value (as part of a [`SET`](https://dev.mysql.com/doc/refman/5.7/en/set-variable.html) statement, or as part of the `SET` clause in an [`UPDATE`](https://dev.mysql.com/doc/refman/5.7/en/update.html) statement) | +| [:=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-value) | Assign a value | +| [BETWEEN ... 
AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_between) | Check whether a value is within a range of values | +| [BINARY](https://dev.mysql.com/doc/refman/5.7/en/cast-functions.html#operator_binary) | Cast a string to a binary string | +| [&](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-and) | Bitwise AND | +| [~](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-invert) | Bitwise inversion | +| [\|](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-or) | Bitwise OR | +| [^](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_bitwise-xor) | Bitwise XOR | +| [CASE](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case) | Case operator | +| [DIV](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_div) | Integer division | +| [/](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_divide) | Division operator | +| [=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal) | Equal operator | +| [<=>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal-to) | NULL-safe equal to operator | +| [>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than) | Greater than operator | +| [>=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than-or-equal) | Greater than or equal operator | +| [IS](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is) | Test a value against a boolean | +| [IS NOT](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not) | Test a value against a boolean | +| [IS NOT NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not-null) | NOT NULL value test | +| [IS 
NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-null) | NULL value test | +| [->](https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-column-path) | Return value from JSON column after evaluating path; equivalent to `JSON_EXTRACT()` | +| [->>](https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-inline-path) | Return value from JSON column after evaluating path and unquoting the result; equivalent to `JSON_UNQUOTE(JSON_EXTRACT())` | +| [<<](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_left-shift) | Left shift | +| [<](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than) | Less than operator | +| [<=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than-or-equal) | Less than or equal operator | +| [LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_like) | Simple pattern matching | +| [-](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_minus) | Minus operator | +| [%, MOD](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_mod) | Modulo operator | +| [NOT, !](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_not) | Negates value | +| [NOT BETWEEN ... 
AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-between) | Check whether a value is not within a range of values | +| [!=, <>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-equal) | Not equal operator | +| [NOT LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_not-like) | Negation of simple pattern matching | +| [NOT REGEXP](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_not-regexp) | Negation of REGEXP | +| [\|\|, OR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_or) | Logical OR | +| [+](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_plus) | Addition operator | +| [REGEXP](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Pattern matching using regular expressions | +| [>>](https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html#operator_right-shift) | Right shift | +| [RLIKE](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Synonym for REGEXP | +| [SOUNDS LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#operator_sounds-like) | Compare sounds | +| [*](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_times) | Multiplication operator | +| [-](https://dev.mysql.com/doc/refman/5.7/en/arithmetic-functions.html#operator_unary-minus) | Change the sign of the argument | +| [XOR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_xor) | Logical XOR | + +## Operator precedence + +Operator precedences are shown in the following list, from highest precedence to the lowest. Operators that are shown together on a line have the same precedence. + +``` sql +INTERVAL +BINARY, COLLATE +! 
+- (unary minus), ~ (unary bit inversion) +^ +*, /, DIV, %, MOD +-, + +<<, >> +& +| += (comparison), <=>, >=, >, <=, <, <>, !=, IS, LIKE, REGEXP, IN +BETWEEN, CASE, WHEN, THEN, ELSE +NOT +AND, && +XOR +OR, || += (assignment), := +``` + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/operator-precedence.html). + +## Comparison functions and operators + +| Name | Description | +| ---------------------------------------- | ---------------------------------------- | +| [BETWEEN ... AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_between) | Check whether a value is within a range of values | +| [COALESCE()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_coalesce) | Return the first non-NULL argument | +| [=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal) | Equal operator | +| [<=>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal-to) | NULL-safe equal to operator | +| [>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than) | Greater than operator | +| [>=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_greater-than-or-equal) | Greater than or equal operator | +| [GREATEST()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_greatest) | Return the largest argument | +| [IN()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_in) | Check whether a value is within a set of values | +| [INTERVAL()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_interval) | Return the index of the argument that is less than the first argument | +| [IS](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is) | Test a value against a boolean | +| [IS NOT](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not) | Test a value against a boolean | +| [IS NOT 
NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-not-null) | NOT NULL value test | +| [IS NULL](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_is-null) | NULL value test | +| [ISNULL()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_isnull) | Test whether the argument is NULL | +| [LEAST()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_least) | Return the smallest argument | +| [<](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than) | Less than operator | +| [<=](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_less-than-or-equal) | Less than or equal operator | +| [LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_like) | Simple pattern matching | +| [NOT BETWEEN ... AND ...](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-between) | Check whether a value is not within a range of values | +| [!=, <>](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_not-equal) | Not equal operator | +| [NOT IN()](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_not-in) | Check whether a value is not within a set of values | +| [NOT LIKE](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_not-like) | Negation of simple pattern matching | +| [STRCMP()](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#function_strcmp) | Compare two strings | + +For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html). 
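+For example, the NULL-safe equal operator `<=>` always returns `1` or `0`, even when an operand is `NULL`, whereas `=` returns `NULL` in that case. The output below follows standard MySQL comparison semantics, which TiDB implements:
+
+```
+mysql> SELECT 1 = NULL, NULL = NULL, 1 <=> NULL, NULL <=> NULL;
++----------+-------------+------------+---------------+
+| 1 = NULL | NULL = NULL | 1 <=> NULL | NULL <=> NULL |
++----------+-------------+------------+---------------+
+| NULL | NULL | 0 | 1 |
++----------+-------------+------------+---------------+
+1 row in set (0.00 sec)
+```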
+
+## Logical operators
+
+| Name | Description |
+| ---------------------------------------- | ------------- |
+| [AND, &&](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_and) | Logical AND |
+| [NOT, !](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_not) | Negates value |
+| [\|\|, OR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_or) | Logical OR |
+| [XOR](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html#operator_xor) | Logical XOR |
+
+For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/logical-operators.html).
+
+## Assignment operators
+
+| Name | Description |
+| ---------------------------------------- | ---------------------------------------- |
+| [=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-equal) | Assign a value (as part of a [`SET`](https://dev.mysql.com/doc/refman/5.7/en/set-variable.html) statement, or as part of the `SET` clause in an [`UPDATE`](https://dev.mysql.com/doc/refman/5.7/en/update.html) statement) |
+| [:=](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html#operator_assign-value) | Assign a value |
+
+For details, see [here](https://dev.mysql.com/doc/refman/5.7/en/assignment-operators.html). diff --git a/v2.0/sql/precision-math.md b/v2.0/sql/precision-math.md new file mode 100755 index 0000000000000..05b940acc3d1a --- /dev/null +++ b/v2.0/sql/precision-math.md @@ -0,0 +1,139 @@ +---
+title: Precision Math
+summary: Learn about the precision math in TiDB.
+category: user guide
+---
+
+# Precision Math
+
+The precision math support in TiDB is consistent with MySQL. For more information, see [Precision Math in MySQL](https://dev.mysql.com/doc/refman/5.7/en/precision-math.html).
+
+## Numeric types
+
+The scope of precision math for exact-value operations includes the exact-value data types (integer and DECIMAL types) and exact-value numeric literals.
Approximate-value data types and numeric literals are handled as floating-point numbers.
+
+Exact-value numeric literals have an integer part or fractional part, or both. They may be signed. Examples: `1`, `.2`, `3.4`, `-5`, `-6.78`, `+9.10`.
+
+Approximate-value numeric literals are represented in scientific notation (power-of-10) with a mantissa and exponent. Either or both parts may be signed. Examples: `1.2E3`, `1.2E-3`, `-1.2E3`, `-1.2E-3`.
+
+Two numbers that look similar might be treated differently. For example, `2.34` is an exact-value (fixed-point) number, whereas `2.34E0` is an approximate-value (floating-point) number.
+
+The DECIMAL data type is a fixed-point type and the calculations are exact. The FLOAT and DOUBLE data types are floating-point types and calculations are approximate.
+
+## DECIMAL data type characteristics
+
+This section covers the following characteristics of the DECIMAL data type (and its synonyms):
+
+- Maximum number of digits
+- Storage format
+- Storage requirements
+
+The declaration syntax for a DECIMAL column is `DECIMAL(M,D)`. The ranges of values for the arguments are as follows:
+
+- M is the maximum number of digits (the precision). 1 <= M <= 65.
+- D is the number of digits to the right of the decimal point (the scale). 0 <= D <= 30 and D must be no larger than M.
+
+The maximum value of 65 for M means that calculations on DECIMAL values are accurate up to 65 digits. This limit of 65 digits of precision also applies to exact-value numeric literals.
+
+Values for DECIMAL columns are stored using a binary format that packs 9 decimal digits into 4 bytes. The storage requirements for the integer and fractional parts of each value are determined separately. Each multiple of 9 digits requires 4 bytes, and any remaining digits left over require some fraction of 4 bytes. The storage required for remaining digits is given by the following table.
+
+| Leftover Digits | Number of Bytes |
+| --- | --- |
+| 0 | 0 |
+| 1–2 | 1 |
+| 3–4 | 2 |
+| 5–6 | 3 |
+| 7–9 | 4 |
+
+For example, a `DECIMAL(18,9)` column has 9 digits on each side of the decimal point, so the integer part and the fractional part each require 4 bytes. A `DECIMAL(20,6)` column has 14 integer digits and 6 fractional digits. The integer digits require 4 bytes for 9 of the digits and 3 bytes for the remaining 5 digits. The 6 fractional digits require 3 bytes.
+
+DECIMAL columns do not store a leading `+` character or `-` character or leading `0` digits. If you insert `+0003.1` into a `DECIMAL(5,1)` column, it is stored as `3.1`. For negative numbers, a literal `-` character is not stored.
+
+DECIMAL columns do not permit values larger than the range implied by the column definition. For example, a `DECIMAL(3,0)` column supports a range of `-999` to `999`. A `DECIMAL(M,D)` column permits at most `M - D` digits to the left of the decimal point.
+
+For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/types/mydecimal.go) in the TiDB source code.
+
+## Expression handling
+
+For expressions with precision math, TiDB uses the exact-value numbers as given whenever possible. For example, numbers in comparisons are used exactly as given without a change in value. In strict SQL mode, when a number is inserted into a column of an exact data type, it is inserted with its exact value if the value is within the column range. When retrieved, the value is the same as what was inserted. If strict SQL mode is not enabled, truncation for INSERT is permitted in TiDB.
+
+How a numeric expression is handled depends on the values it contains:
+
+- If the expression contains any approximate values, the result is approximate. TiDB evaluates the expression using floating-point arithmetic.
+- If the expression contains only exact values and any exact value contains a fractional part, the expression is evaluated using DECIMAL exact arithmetic and has a precision of 65 digits.
+- Otherwise, the expression contains only integer values. The expression is exact. TiDB evaluates the expression using integer arithmetic and has a precision the same as BIGINT (64 bits).
+
+If a numeric expression contains strings, the strings are converted to double-precision floating-point values and the result of the expression is approximate.
+
+Inserts into numeric columns are affected by the SQL mode. The following discussions mention strict mode and `ERROR_FOR_DIVISION_BY_ZERO`. To turn on all the restrictions, you can simply use the `TRADITIONAL` mode, which includes both strict mode values and `ERROR_FOR_DIVISION_BY_ZERO`:
+
+```sql
+SET sql_mode = 'TRADITIONAL';
+```
+
+If a number is inserted into an exact type column (DECIMAL or integer), it is inserted with its exact value if it is within the column range. For this number:
+
+- If the value has too many digits in the fractional part, rounding occurs and a warning is generated.
+- If the value has too many digits in the integer part, it is too large and is handled as follows:
+  - If strict mode is not enabled, the value is truncated to the nearest legal value and a warning is generated.
+  - If strict mode is enabled, an overflow error occurs.
+
+To insert strings into numeric columns, TiDB handles the conversion from string to number as follows if the string has nonnumeric contents:
+
+- In strict mode, a string (including an empty string) that does not begin with a number cannot be used as a number, and an error occurs.
+- A string that begins with a number can be converted, but the trailing nonnumeric portion is truncated. In strict mode, if the truncated portion contains anything other than spaces, an error occurs.
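+For example, with strict mode disabled, the trailing nonnumeric portion of a string is truncated and a warning is generated, while strict mode rejects the insert with an error. The following is a sketch using a hypothetical table `t1`:
+
+```sql
+CREATE TABLE t1 (i INT);
+
+SET sql_mode = '';
+INSERT INTO t1 VALUES ('123abc');  -- stored as 123; a truncation warning is generated
+
+SET sql_mode = 'STRICT_TRANS_TABLES';
+INSERT INTO t1 VALUES ('123abc');  -- rejected with an error: the truncated portion is not spaces
+```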
+
+By default, division by 0 returns NULL and produces no warning. By setting the SQL mode appropriately, division by 0 can be restricted. If you enable the `ERROR_FOR_DIVISION_BY_ZERO` SQL mode, TiDB handles division by 0 differently:
+
+- In strict mode, inserts and updates are prohibited, and an error occurs.
+- If strict mode is not enabled, a warning occurs.
+
+In the following SQL statement:
+
+```sql
+INSERT INTO t SET i = 1/0;
+```
+
+The following results are returned in different SQL modes:
+
+| `sql_mode` Value | Result |
+| :--- | :--- |
+| '' | No warning, no error; i is set to NULL. |
+| strict | No warning, no error; i is set to NULL. |
+| `ERROR_FOR_DIVISION_BY_ZERO` | Warning, no error; i is set to NULL. |
+| strict, `ERROR_FOR_DIVISION_BY_ZERO` | Error; no row is inserted. |
+
+## Rounding behavior
+
+The result of the `ROUND()` function depends on whether its argument is exact or approximate:
+
+- For exact-value numbers, the `ROUND()` function uses the “round half up” rule.
+- For approximate-value numbers, the result in TiDB differs from that in MySQL:
+
+  ```sql
+  TiDB > SELECT ROUND(2.5), ROUND(25E-1);
+  +------------+--------------+
+  | ROUND(2.5) | ROUND(25E-1) |
+  +------------+--------------+
+  | 3 | 3 |
+  +------------+--------------+
+  1 row in set (0.00 sec)
+  ```
+
+For inserts into a DECIMAL or integer column, the rounding uses [round half away from zero](https://en.wikipedia.org/wiki/Rounding#Round_half_away_from_zero).
+ +```sql +TiDB > CREATE TABLE t (d DECIMAL(10,0)); +Query OK, 0 rows affected (0.01 sec) + +TiDB > INSERT INTO t VALUES(2.5),(2.5E0); +Query OK, 2 rows affected, 2 warnings (0.00 sec) + +TiDB > SELECT d FROM t; ++------+ +| d | ++------+ +| 3 | +| 3 | ++------+ +2 rows in set (0.00 sec) +``` \ No newline at end of file diff --git a/v2.0/sql/prepare.md b/v2.0/sql/prepare.md new file mode 100755 index 0000000000000..83184585c78cd --- /dev/null +++ b/v2.0/sql/prepare.md @@ -0,0 +1,43 @@ +--- +title: Prepared SQL Statement Syntax +summary: Use Prepared statements to reduce the load of statement parsing and query optimization, and improve execution efficiency. +category: user guide +--- + +# Prepared SQL Statement Syntax + +TiDB supports server-side Prepared statements, which can reduce the load of statement parsing and query optimization and improve execution efficiency. You can use Prepared statements in two ways: application programs and SQL statements. + +## Use application programs + +Most MySQL Drivers support Prepared statements, such as [MySQL Connector/C](https://dev.mysql.com/doc/connector-c/en/). You can call the Prepared statement API directly through the Binary protocol. + +## Use SQL statements + +You can also implement Prepared statements using `PREPARE`, `EXECUTE` and `DEALLOCATE PREPARE`. This approach is not as efficient as the application programs, but you do not need to write a program. + +### `PREPARE` statement + +```sql +PREPARE stmt_name FROM preparable_stmt +``` + +The `PREPARE` statement preprocesses `preparable_stmt` (syntax parsing, semantic check and query optimization) and names the result as `stmt_name`. The following operations can refer to it using `stmt_name`. Processed statements can be executed using the `EXECUTE` statement or released using the `DEALLOCATE PREPARE` statement. + +### `EXECUTE` statement + +```sql +EXECUTE stmt_name [USING @var_name [, @var_name] ...] 
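-- Example (a sketch): prepare a statement with placeholders, bind user
-- variables to them, execute, and release:
--   PREPARE stmt FROM 'SELECT ? + ?';
--   SET @a = 1; SET @b = 2;
--   EXECUTE stmt USING @a, @b;
--   DEALLOCATE PREPARE stmt;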
```

The `EXECUTE` statement executes the prepared statement named `stmt_name`. If parameters exist in the prepared statement, use the User Variable list in the `USING` clause to assign values to the parameters.

### `DEALLOCATE PREPARE` statement

```sql
{DEALLOCATE | DROP} PREPARE stmt_name
```

The `DEALLOCATE PREPARE` statement releases the prepared statement created by `PREPARE`.

For more information, see [MySQL Prepared Statement Syntax](https://dev.mysql.com/doc/refman/5.7/en/sql-syntax-prepared-statements.html).

diff --git a/v2.0/sql/privilege.md b/v2.0/sql/privilege.md new file mode 100755 index 0000000000000..3b5ba05bd7180 --- /dev/null +++ b/v2.0/sql/privilege.md @@ -0,0 +1,327 @@

---
title: Privilege Management
summary: Learn how to manage privileges.
category: user guide
---

# Privilege Management

TiDB's privilege management system is implemented according to the privilege management system in MySQL. It supports most of the syntaxes and privilege types in MySQL. If you find any inconsistency with MySQL, feel free to [open an issue](https://github.com/pingcap/docs-cn/issues/new).

## Examples

### User account operation

TiDB user account names consist of a user name and a host name. The account name syntax is `'user_name'@'host_name'`.

- The `user_name` is case sensitive.
- The `host_name` can be a host name or an IP address. The `%` and `_` wildcard characters are permitted in host name or IP address values. For example, a host value of `'%'` matches any host name and `'192.168.1.%'` matches every host on a subnet.

#### Create user

The `CREATE USER` statement creates new MySQL accounts.

```sql
create user 'test'@'127.0.0.1' identified by 'xxx';
```

If the host name is not specified, you can log in from any IP address.
If the password is not specified, it is empty by default:

```sql
create user 'test';
```

This is equivalent to:

```sql
create user 'test'@'%' identified by '';
```

**Required Privilege:** To use `CREATE USER`, you must have the global `CREATE USER` privilege.

#### Change the password

You can use the `SET PASSWORD` syntax to assign or modify the password of a user account.

```sql
set password for 'root'@'%' = 'xxx';
```

**Required Privilege:** Operations that assign or modify passwords are permitted only to users with the `CREATE USER` privilege.

#### Drop user

The `DROP USER` statement removes one or more MySQL accounts and their privileges. It removes the user record entries in the `mysql.user` table and the privilege rows for the account from all grant tables.

```sql
drop user 'test'@'%';
```

**Required Privilege:** To use `DROP USER`, you must have the global `CREATE USER` privilege.

#### Reset the root password

If you forget the root password, you can skip the privilege system and use the root privilege to reset the password.

To reset the root password,

1. Start TiDB with a special startup option (root privilege required):

    ```bash
    sudo ./tidb-server -skip-grant-table=true
    ```

2. Use the root account to log in and reset the password:

    ```bash
    mysql -h 127.0.0.1 -P 4000 -u root
    ```

### Privilege-related operations

#### Grant privileges

The `GRANT` statement grants privileges to the user accounts.

For example, use the following statement to grant the `xxx` user the privilege to read the `test` database.

```sql
grant Select on test.* to 'xxx'@'%';
```

Use the following statement to grant the `xxx` user all privileges on all databases:

```sql
grant all privileges on *.* to 'xxx'@'%';
```

If the granted user does not exist, TiDB automatically creates the user.
```
mysql> select * from mysql.user where user='xxxx';
Empty set (0.00 sec)

mysql> grant all privileges on test.* to 'xxxx'@'%' identified by 'yyyyy';
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host from mysql.user where user='xxxx';
+------+------+
| user | host |
+------+------+
| xxxx | %    |
+------+------+
1 row in set (0.00 sec)
```

In this example, `xxxx@%` is the user that is automatically created.

> **Note:** Granting privileges to a database or table does not check if the database or table exists.

```
mysql> select * from test.xxxx;
ERROR 1146 (42S02): Table 'test.xxxx' doesn't exist

mysql> grant all privileges on test.xxxx to xxxx;
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host from mysql.tables_priv where user='xxxx';
+------+------+
| user | host |
+------+------+
| xxxx | %    |
+------+------+
1 row in set (0.00 sec)
```

You can use fuzzy matching to grant privileges to databases and tables.

```
mysql> grant all privileges on `te%`.* to genius;
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host,db from mysql.db where user='genius';
+--------+------+-----+
| user   | host | db  |
+--------+------+-----+
| genius | %    | te% |
+--------+------+-----+
1 row in set (0.00 sec)
```

In this example, because of the `%` in `te%`, all the databases starting with `te` are granted the privilege.

#### Revoke privileges

The `REVOKE` statement enables system administrators to revoke privileges from the user accounts.

The `REVOKE` statement corresponds with the `GRANT` statement:

```sql
revoke all privileges on `test`.* from 'genius'@'localhost';
```

> **Note:** To revoke privileges, you need the exact match.
If the matching result cannot be found, an error is displayed:

    ```
    mysql> revoke all privileges on `te%`.* from 'genius'@'%';
    ERROR 1141 (42000): There is no such grant defined for user 'genius' on host '%'
    ```

Note the following about fuzzy matching, escape characters, strings, and identifiers:

```sql
mysql> grant all privileges on `te\%`.* to 'genius'@'localhost';
Query OK, 0 rows affected (0.00 sec)
```

This example uses exact match to find the database named `te%`. Note that the `%` uses the `\` escape character so that `%` is not considered as a wildcard.

A string is enclosed in single quotation marks (''), while an identifier is enclosed in backticks (``). See the differences below:

```sql
mysql> grant all privileges on 'test'.* to 'genius'@'localhost';
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near ''test'.* to 'genius'@'localhost'' at line 1

mysql> grant all privileges on `test`.* to 'genius'@'localhost';
Query OK, 0 rows affected (0.00 sec)
```

If you want to use special keywords as table names, enclose them in backticks (``). For example:

```sql
mysql> create table `select` (id int);
Query OK, 0 rows affected (0.27 sec)
```

#### Check privileges granted to user

You can use the `show grants` statement to see what privileges are granted to a user.

```sql
show grants for 'root'@'%';
```

To be more precise, you can check the privilege information in the grant tables. For example, you can use the following steps to check if the `test@%` user has the `Insert` privilege on `db1.t`:

1. Check if `test@%` has the global `Insert` privilege:

    ```sql
    select Insert_priv from mysql.user where user='test' and host='%';
    ```

2. If not, check if `test@%` has the database-level `Insert` privilege at `db1`:

    ```sql
    select Insert_priv from mysql.db where user='test' and host='%' and db='db1';
    ```

3.
If the result is still empty, check whether `test@%` has the table-level `Insert` privilege at `db1.t`:

    ```sql
    select table_priv from mysql.tables_priv where user='test' and host='%' and db='db1' and table_name='t';
    ```

### Implementation of the privilege system

#### Grant table

The following system tables are special because all the privilege-related data is stored in them:

- `mysql.user` (user account, global privilege)
- `mysql.db` (database-level privilege)
- `mysql.tables_priv` (table-level privilege)
- `mysql.columns_priv` (column-level privilege)

These tables contain the effective range and privilege information of the data. For example, in the `mysql.user` table:

```sql
mysql> select User,Host,Select_priv,Insert_priv from mysql.user limit 1;
+------+------+-------------+-------------+
| User | Host | Select_priv | Insert_priv |
+------+------+-------------+-------------+
| root | %    | Y           | Y           |
+------+------+-------------+-------------+
1 row in set (0.00 sec)
```

In this record, `Host` and `User` determine that the connection request sent by the `root` user from any host (`%`) can be accepted. `Select_priv` and `Insert_priv` mean that the user has the global `Select` and `Insert` privileges. The effective range in the `mysql.user` table is global.

`Host` and `User` in `mysql.db` determine which databases users can access. The effective range is the database.

In theory, all privilege-related operations can be done directly by CRUD operations on the grant tables.

On the implementation level, only a layer of syntactic sugar is added. For example, you can use the following command to remove a user:

```
delete from mysql.user where user='test';
```

However, it is not recommended to manually modify the grant tables.

#### Connection verification

When the client sends a connection request, the TiDB server verifies the login operation. The TiDB server first checks the `mysql.user` table.
If a record of `User` and `Host` matches the connection request, the TiDB server then verifies the `Password`.

User identity is based on two pieces of information: `Host`, the host that initiates the connection, and `User`, the user name. If the user name is not empty, the user name must match exactly.

`User`+`Host` may match several rows in the `user` table. To deal with this scenario, the rows in the `user` table are sorted. The rows are checked one by one when the client connects, and the first matching row is used for verification. When sorting, `Host` is ranked before `User`.

#### Request verification

When the connection is successful, the request verification process checks whether the operation has the required privileges.

For database-related requests (INSERT, UPDATE), the request verification process first checks the user's global privileges in the `mysql.user` table. If the privilege is granted, access is allowed directly. If not, TiDB checks the `mysql.db` table.

The `user` table has global privileges regardless of the default database. For example, the `DELETE` privilege in `user` can apply to any row, table, or database.

In the `Db` table, an empty `User` value matches the anonymous user. Wildcards are not allowed in the `User` column. The values in the `Host` and `Db` columns can use the `%` and `_` wildcards for pattern matching.

Data in the `user` and `db` tables is also sorted when loaded into memory.

The use of `%` in `tables_priv` and `columns_priv` is similar, but the values in the `Db`, `Table_name`, and `Column_name` columns cannot contain `%`. The sorting is also similar when loaded.

#### Time of effect

When TiDB starts, some privilege-check tables are loaded into memory, and then the cached data is used to verify the privileges. The system periodically synchronizes the grant tables from the database to the cache. The time it takes for a change to take effect is determined by the synchronization cycle. Currently, the value is 5 minutes.
If an immediate effect is needed when you modify the grant tables, you can run the following command:

```sql
flush privileges
```

### Limitations and constraints

Currently, the following privileges are not yet checked because they are less frequently used:

- FILE
- USAGE
- SHUTDOWN
- EXECUTE
- PROCESS
- INDEX
- ...

**Note:** The column-level privilege is not implemented at this stage.

## `Create User` statement

```sql
CREATE USER [IF NOT EXISTS]
    user [auth_spec] [, user [auth_spec]] ...
auth_spec: {
    IDENTIFIED BY 'auth_string'
    | IDENTIFIED BY PASSWORD 'hash_string'
}
```

For more information about the user account, see [TiDB user account management](user-account-management.md).

- IDENTIFIED BY `auth_string`

    When you set the login password, `auth_string` is encrypted by TiDB and stored in the `mysql.user` table.

- IDENTIFIED BY PASSWORD `hash_string`

    When you set the login password, `hash_string` is encrypted by TiDB and stored in the `mysql.user` table. Currently, this is not the same as MySQL.

diff --git a/v2.0/sql/schema-object-names.md b/v2.0/sql/schema-object-names.md new file mode 100755 index 0000000000000..96c7baf256612 --- /dev/null +++ b/v2.0/sql/schema-object-names.md @@ -0,0 +1,78 @@

---
title: Schema Object Names
summary: Learn about the schema object names (identifiers) in TiDB.
category: user guide
---

# Schema Object Names

Some object names in TiDB, including database, table, index, column, and alias names, are known as identifiers.

In TiDB, you can quote or unquote an identifier. If an identifier contains special characters or is a reserved word, you must quote it whenever you refer to it. To quote, use the backtick (\`) to wrap the identifier.
For example:

```sql
mysql> SELECT * FROM `table` WHERE `table`.id = 20;
```

If the `ANSI_QUOTES` SQL mode is enabled, you can also quote identifiers within double quotation marks ("):

```sql
mysql> CREATE TABLE "test" (a varchar(10));
ERROR 1105 (HY000): line 0 column 19 near " (a varchar(10))" (total length 35)

mysql> SET SESSION sql_mode='ANSI_QUOTES';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE TABLE "test" (a varchar(10));
Query OK, 0 rows affected (0.09 sec)
```

The quote characters can be included within an identifier. Double the character if the character to be included within the identifier is the same as that used to quote the identifier itself. For example, the following statement creates a table named a\`b:

```sql
mysql> CREATE TABLE `a``b` (a int);
```

In a `SELECT` statement, a quoted column alias can be specified using identifier or string quoting characters:

```sql
mysql> SELECT 1 AS `identifier`, 2 AS 'string';
+------------+--------+
| identifier | string |
+------------+--------+
|          1 |      2 |
+------------+--------+
1 row in set (0.00 sec)
```

For more information, see [MySQL Schema Object Names](https://dev.mysql.com/doc/refman/5.7/en/identifiers.html).

## Identifier qualifiers

Object names can be unqualified or qualified. For example, the following statement creates a table using the unqualified name `t`:

```sql
CREATE TABLE t (i int);
```

If there is no default database, the error `ERROR 1046 (3D000): No database selected` is displayed. You can also use the qualified name `test.t`:

```sql
CREATE TABLE test.t (i int);
```

The qualifier character is a separate token and need not be contiguous with the associated identifiers. For example, there can be white spaces around `.`, and `table_name.col_name` and `table_name . col_name` are equivalent.
To quote this identifier, use:

```sql
`table_name`.`col_name`
```

instead of:

```sql
`table_name.col_name`
```

For more information, see [MySQL Identifier Qualifiers](https://dev.mysql.com/doc/refman/5.7/en/identifier-qualifiers.html).

diff --git a/v2.0/sql/server-command-option.md b/v2.0/sql/server-command-option.md new file mode 100755 index 0000000000000..ed72ef6022202 --- /dev/null +++ b/v2.0/sql/server-command-option.md @@ -0,0 +1,225 @@

---
title: The TiDB Command Options
summary: Learn about TiDB command options and configuration files.
category: user guide
---

# The TiDB Command Options

This document describes the startup options and the TiDB server configuration files.

## TiDB startup options

When you start TiDB processes, you can specify some program options.

TiDB supports many startup options. Run the following command to get a brief introduction:

```
./tidb-server --help
```

Run the following command to get the version:

```
./tidb-server -V
```

The complete descriptions of the startup options are as follows.

### -L

- Log level
- Default: "info"
- Optional values: debug, info, warn, error or fatal

### -P

- The listening port of the TiDB service
- Default: "4000"
- TiDB uses this port to accept requests from the MySQL client

### \-\-binlog-socket

- TiDB uses the unix socket file to accept the internal connection, such as the PUMP service.
- Default: ""
- For example, use "/tmp/pump.sock" to accept the PUMP unix socket file communication.

### \-\-config

- The TiDB configuration file
- Default: ""
- The file path of the configuration file

### \-\-lease

- The lease time of schema; unit: second
- Default: "10"
- The schema lease is mainly used in online schema changes. This value affects the actual execution time of DDL statements. In most cases, you do not need to change this value unless you clearly understand the internal implementation mechanism of TiDB DDL.
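For instance, a combined invocation might look like the following sketch (the flag values here are illustrative, not recommendations):

```
./tidb-server -L warn -P 4001 --config=tidb.toml --lease=10
```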
### \-\-host

- The listening host of the TiDB service
- Default: "0.0.0.0"
- The TiDB service listens on this host.
- The default 0.0.0.0 listens on the addresses of all network interfaces. To provide external service on a specific network interface, specify its address, such as 192.168.100.113.

### \-\-log-file

- Log file
- Default: ""
- If the option is not set, the log is output to "stderr"; if set, the log is output to the corresponding file. In the early morning hours of each day, the log automatically rotates to a new file, and the previous file is renamed and backed up.

### \-\-metrics-addr

- The address of Prometheus Push Gateway
- Default: ""
- If the option value is empty, TiDB does not push the statistics to Push Gateway. The option format is `--metrics-addr=192.168.100.115:9091`.

### \-\-metrics-interval

- The time interval at which the statistics are pushed to Prometheus Push Gateway
- Default: 15s
- If you set the option value to 0, the statistics are not pushed to Push Gateway. `--metrics-interval=2` means the statistics are pushed to Push Gateway every two seconds.

### \-\-path

- For the local storage engines such as "goleveldb" or "BoltDB", `path` specifies the actual data storage path.
- For the "memory" storage engine, it is not necessary to set `path`.
- For the "TiKV" storage engine, `path` specifies the actual PD addresses. For example, if PD is deployed on 192.168.100.113:2379, 192.168.100.114:2379 and 192.168.100.115:2379, the `path` is "192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379".

### \-\-report-status

- Enable (true) or disable (false) the status monitor port
- Default: true
- The value is either true or false. The `true` value means the status monitor port is open, and the `false` value means it is closed. The status monitor port is used to report some internal service information to the outside.
### \-\-run-ddl

- Whether the TiDB server runs DDL statements; set the option when more than two TiDB servers are in the cluster
- Default: true
- The value is either true or false. The `true` value means the TiDB server runs DDL statements. The `false` value means the TiDB server does not run DDL statements.

### \-\-socket

- TiDB uses the unix socket file to accept the external connection.
- Default: ""
- For example, use "/tmp/tidb.sock" to open the unix socket file.

### \-\-status

- The status monitor port of TiDB
- Default: "10080"
- This port is used to display the internal data of TiDB, including the [Prometheus statistics](https://prometheus.io/) and [pprof](https://golang.org/pkg/net/http/pprof/).
- Access the Prometheus statistics at http://host:status_port/metrics.
- Access the pprof data at http://host:status_port/debug/pprof.

### \-\-store

- To specify the storage engine used by the bottom layer of TiDB
- Default: "mocktikv"
- Optional values: "memory", "goleveldb", "boltdb", "mocktikv" or "tikv" (TiKV is a distributed storage engine, while the others are local storage engines)
- For example, use `tidb-server --store=memory` to start a TiDB server with a pure memory engine

## TiDB server configuration files

When you start the TiDB server, you can specify the server's configuration file using `--config path`. For options that appear both on the command line and in the configuration file, the command-line options take precedence over the configuration file.

See [an example of the configuration file](https://github.com/pingcap/tidb/blob/master/config/config.toml.example).

The complete descriptions of the configuration file options are as follows.
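As an illustration, a minimal configuration file covering a few of the options described below might look like this sketch (the values, and the file name `tidb.toml`, are examples only):

```toml
# tidb.toml — illustrative values only
host = "0.0.0.0"
port = 4000
# PD addresses when using the TiKV storage engine
path = "192.168.100.113:2379"
# Record statements slower than 300 ms in a dedicated file
slow-threshold = 300
slow-query-file = "tidb-slow.log"
```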
### host

Same as the "host" startup option

### port

Same as the "P" startup option

### path

Same as the "path" startup option

### socket

Same as the "socket" startup option

### binlog-socket

Same as the "binlog-socket" startup option

### run-ddl

Same as the "run-ddl" startup option

### cross-join

- Default: true
- By default, you can execute a `join` statement on tables without any join conditions on both sides. If you set the value to `false`, the server does not run such `join` statements.

### join-concurrency

- The number of goroutines used when running a `join`
- Default: 5
- Set this value according to the amount of data and its distribution; generally, the larger the better, but a larger value requires more CPU

### query-log-max-len

- The maximum length of SQL statements recorded in the log
- Default: 2048
- Overlong requests are truncated when they are output to the log

### slow-threshold

- Record SQL statements whose execution time exceeds this value
- Default: 300
- The value must be an integer (int); unit: millisecond

### slow-query-file

- The slow query log file
- Default: ""
- The value is the file name. If a non-empty string is specified, the slow query log is redirected to the corresponding file.

### retry-limit

- The maximum number of commit retries when the transaction meets a conflict
- Default: 10
- Setting a large number of retries can affect the performance of the TiDB cluster

### skip-grant-table

- Allow anyone to connect without a password, and skip privilege checking for all operations
- Default: false
- The value is either true or false. The machine's root privilege is required to enable this option, which is used to reset the password when it is forgotten.
### stats-lease

- The lease for updating statistics; TiDB uses it to periodically update table statistics and persist them to TiKV
- Default: "3s"
- To collect statistics for a table, you need to manually run `analyze table name` first. Afterward, TiDB updates the statistics automatically, stores them persistently in TiKV, and uses some memory for them.

### tcp-keep-alive

- Enable keepalive in the TCP layer of TiDB
- Default: false

### ssl-cert

- The file path of the SSL certificate in PEM format
- Default: ""
- If this option and the `--ssl-key` option are set at the same time, the client can (but is not required to) connect to TiDB securely using TLS.
- If the specified certificate or private key is invalid, TiDB starts as usual but does not support encrypted connections.

### ssl-key

- The file path of the SSL private key in PEM format, namely the private key of the certificate specified by `--ssl-cert`
- Default: ""
- Currently, you cannot load a password-protected private key in TiDB.

### ssl-ca

- The file path of the trusted CA certificate in PEM format
- Default: ""
- If this option and the `--ssl-cert`, `--ssl-key` options are set at the same time, TiDB authenticates the client certificate based on the trusted CA list specified by this option when the client presents a certificate. If the authentication fails, the connection is terminated.
- If this option is set but the client does not present a certificate, the encrypted connection continues but the client certificate is not authenticated.

diff --git a/v2.0/sql/slow-query.md b/v2.0/sql/slow-query.md new file mode 100755 index 0000000000000..0a7378cced0f8 --- /dev/null +++ b/v2.0/sql/slow-query.md @@ -0,0 +1,97 @@

---
title: Slow Query Log
summary: Use the slow query log to identify problematic SQL statements.
category: user guide
---

# Slow Query Log

The slow query log is a record of SQL statements that took a long time to perform.

A problematic SQL statement can increase the pressure on the entire cluster, resulting in a longer response time.
To solve this problem, you can use the slow query log to identify the problematic statements and thus improve the performance.

### Obtain the log

By running `grep` on the keyword `SLOW_QUERY` in the TiDB log file, you can obtain the logs of statements whose execution time exceeds [slow-threshold](../op-guide/tidb-config-file.md#slow-threshold).

You can edit `slow-threshold` in the configuration file; its default value is 300 ms. If you configure [slow-query-file](../op-guide/tidb-config-file.md#slow-query-file), all the slow query logs are written to this file.

### Usage example

```
2018/08/20 19:52:08.632 adapter.go:363: [warning] [SLOW_QUERY] cost_time:18.647928814s
process_time:1m6.768s wait_time:12m11.212s backoff_time:600ms request_count:2058
total_keys:1869712 processed_keys:1869710 succ:true con:3 user:root@127.0.0.1
txn_start_ts:402329674704224261 database:test table_ids:[31],index_ids:[1],
sql:select count(c) from sbtest1 use index (k_1)
```

### Fields description

This section describes fields in the slow query log based on the usage example above.

#### `cost_time`

The execution time of this statement. Only the statements whose execution time exceeds [slow-threshold](../op-guide/tidb-config-file.md#slow-threshold) output this log.

#### `process_time`

The total processing time of this statement in TiKV. Because data is sent to TiKV concurrently for execution, this value might exceed `cost_time`.

#### `wait_time`

The total waiting time of this statement in TiKV. Because the Coprocessor of TiKV runs a limited number of threads, requests might queue up when all threads of Coprocessor are working. When a request in the queue takes a long time to process, the waiting time of the subsequent requests increases.

#### `backoff_time`

The waiting time before retry when this statement encounters errors that require a retry. Common errors of this kind include: lock conflicts, Region splits, and busy TiKV servers.
#### `request_count`

The number of Coprocessor requests that this statement sends.

#### `total_keys`

The number of keys that Coprocessor has scanned.

#### `processed_keys`

The number of keys that Coprocessor has processed. Compared with `total_keys`, `processed_keys` does not include the old versions of MVCC or the MVCC `delete` marks. A great difference between `processed_keys` and `total_keys` indicates that the number of old versions is relatively large.

#### `succ`

Whether the execution of the request succeeds or not.

#### `con`

Connection ID (session ID). For example, you can use the keyword `con:3` to `grep` the log whose session ID is 3.

#### `user`

The name of the user who executes this statement.

#### `txn_start_ts`

The start timestamp of the transaction, that is, the ID of the transaction. You can use this value to `grep` the transaction-related logs.

#### `database`

The current database.

#### `table_ids`

The IDs of the tables involved in the statement.

#### `index_ids`

The IDs of the indexes involved in the statement.

#### `sql`

The SQL statement.

### Identify problematic SQL statements

Not all of the `SLOW_QUERY` statements are problematic. Only those whose `process_time` is very large increase the pressure on the entire cluster.

The statements whose `wait_time` is very large and `process_time` is very small are usually not problematic. The large `wait_time` is because the statement is blocked by real problematic statements and it has to wait in the execution queue, which leads to a much longer response time.

diff --git a/v2.0/sql/statistics.md b/v2.0/sql/statistics.md new file mode 100755 index 0000000000000..354cc8b9eea9c --- /dev/null +++ b/v2.0/sql/statistics.md @@ -0,0 +1,160 @@

---
title: Introduction to Statistics
summary: Learn how the statistics collect table-level and column-level information.
+category: user guide +--- + +# Introduction to Statistics + +Based on the statistics, the TiDB optimizer chooses the most efficient query execution plan. The statistics collect table-level and column-level information. + +- The statistics of a table include the total number of rows and the number of updated rows. +- The statistics of a column include the number of different values, the number of `NULL`, the histogram, and the Count-Min Sketch of the column. + +## Collect statistics + +### Manual collection + +You can run the `ANALYZE` statement to collect statistics. + +Syntax: + +```sql +ANALYZE TABLE TableNameList +> The statement collects statistics of all the tables in `TableNameList`. + +ANALYZE TABLE TableName INDEX [IndexNameList] +> The statement collects statistics of the index columns on all `IndexNameList` in `TableName`. +> The statement collects statistics of all index columns when `IndexNameList` is empty. +``` + +### Automatic update + +For the `INSERT`, `DELETE`, or `UPDATE` statements, TiDB automatically updates the number of rows and updated rows. TiDB persists this information regularly and the update cycle is 5 * `stats-lease`. The default value of `stats-lease` is `3s`. If you specify the value as `0`, it does not update automatically. + +When the ratio of the number of modified rows to the total number of rows is greater than `auto-analyze-ratio`, TiDB automatically starts the `Analyze` statement. You can modify the value of `auto-analyze-ratio` in the configuration file. The default value is `0`, which means that this function is not enabled. + +When the query is executed, TiDB collects feedback with the probability of `feedback-probability` and uses it to update the histogram and Count-Min Sketch. You can modify the value of `feedback-probability` in the configuration file. The default value is `0`. 
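As a concrete sketch of the manual collection syntax described above (assuming a hypothetical table `t` with an index `idx`):

```sql
ANALYZE TABLE t;            -- collects statistics for all columns and indexes of t
ANALYZE TABLE t INDEX idx;  -- collects statistics for the index idx only
ANALYZE TABLE t INDEX;      -- collects statistics for all index columns of t
```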
+
+### Control `ANALYZE` concurrency
+
+When you run the `ANALYZE` statement, you can adjust the concurrency using the following parameters, to control its effect on the system.
+
+#### `tidb_build_stats_concurrency`
+
+Currently, when you run the `ANALYZE` statement, the task is divided into multiple small tasks. Each task only works on one column or index. You can use the `tidb_build_stats_concurrency` parameter to control the number of simultaneous tasks. The default value is `4`.
+
+#### `tidb_distsql_scan_concurrency`
+
+When you analyze regular columns, you can use the `tidb_distsql_scan_concurrency` parameter to control the number of Regions to be read at one time. The default value is `10`.
+
+#### `tidb_index_serial_scan_concurrency`
+
+When you analyze index columns, you can use the `tidb_index_serial_scan_concurrency` parameter to control the number of Regions to be read at one time. The default value is `1`.
+
+## View statistics
+
+You can view the statistics status using the following statements.
+
+### Metadata of tables
+
+You can use the `SHOW STATS_META` statement to view the total number of rows and the number of updated rows.
+
+Syntax:
+
+```sql
+SHOW STATS_META [ShowLikeOrWhere]
+> The statement returns the total number of rows and the number of updated rows. You can use `ShowLikeOrWhere` to filter the information you need.
+```
+
+Currently, the `SHOW STATS_META` statement returns the following 5 columns:
+
+| Syntax Element | Description |
+| :-------- | :------------- |
+| `db_name` | database name |
+| `table_name` | table name |
+| `update_time` | the time of the update |
+| `modify_count` | the number of modified rows |
+| `row_count` | the total number of rows |
+
+### Metadata of columns
+
+You can use the `SHOW STATS_HISTOGRAMS` statement to view the number of different values and the number of `NULL` values in all the columns.
+
+Syntax:
+
+```sql
+SHOW STATS_HISTOGRAMS [ShowLikeOrWhere]
+> The statement returns the number of different values and the number of `NULL` values in all the columns. You can use `ShowLikeOrWhere` to filter the information you need.
+```
+
+Currently, the `SHOW STATS_HISTOGRAMS` statement returns the following 8 columns:
+
+| Syntax Element | Description |
+| :-------- | :------------- |
+| `db_name` | database name |
+| `table_name` | table name |
+| `column_name` | column name |
+| `is_index` | whether it is an index column or not |
+| `update_time` | the time of the update |
+| `distinct_count` | the number of different values |
+| `null_count` | the number of `NULL` values |
+| `avg_col_size` | the average length of columns |
+
+### Buckets of histogram
+
+You can use the `SHOW STATS_BUCKETS` statement to view each bucket of the histogram.
+
+Syntax:
+
+```sql
+SHOW STATS_BUCKETS [ShowLikeOrWhere]
+> The statement returns information about all the buckets. You can use `ShowLikeOrWhere` to filter the information you need.
+```
+
+Currently, the `SHOW STATS_BUCKETS` statement returns the following 9 columns:
+
+| Syntax Element | Description |
+| :-------- | :------------- |
+| `db_name` | database name |
+| `table_name` | table name |
+| `column_name` | column name |
+| `is_index` | whether it is an index column or not |
+| `bucket_id` | the ID of a bucket |
+| `count` | the number of all the values that fall in this bucket and the previous buckets |
+| `repeats` | the occurrence number of the maximum value |
+| `lower_bound` | the minimum value |
+| `upper_bound` | the maximum value |
+
+## Delete statistics
+
+You can run the `DROP STATS` statement to delete statistics.
+
+Syntax:
+
+```sql
+DROP STATS TableName
+> The statement deletes the statistics of the table `TableName`.
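+
+-- For example, assuming a table `t` (the name is illustrative),
+-- the following deletes all the statistics collected for it:
+DROP STATS t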
+``` + +## Import and export statistics + +### Export statistics + +The interface to export statistics: + +``` +http://${tidb-server-ip}:${tidb-server-status-port}/stats/dump/${db_name}/${table_name} +> Use this interface to obtain the JSON format statistics of the `${table_name}` table in the `${db_name}` database. +``` + +### Import statistics + +Generally, the imported statistics refer to the JSON file obtained using the export interface. + +Syntax: + +``` +LOAD STATS 'file_name' +> `file_name` is the file name of the statistics to be imported. +``` \ No newline at end of file diff --git a/v2.0/sql/string-functions.md b/v2.0/sql/string-functions.md new file mode 100755 index 0000000000000..35e3eefa886ab --- /dev/null +++ b/v2.0/sql/string-functions.md @@ -0,0 +1,75 @@ +--- +title: String Functions +summary: Learn about the string functions in TiDB. +category: user guide +--- + +# String Functions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------| +| [`ASCII()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ascii) | Return numeric value of left-most character | +| [`CHAR()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_char) | Return the character for each integer passed | +| [`BIN()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_bin) | Return a string containing binary representation of a number | +| [`HEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_hex) | Return a hexadecimal representation of a decimal or string value | +| [`OCT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_oct) | Return a string containing octal representation of a number | +| 
[`UNHEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_unhex) | Convert each pair of hexadecimal digits in the argument to a character |
+| [`TO_BASE64()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_to-base64) | Return the argument converted to a base-64 string |
+| [`FROM_BASE64()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_from-base64) | Decode a base-64 encoded string and return the result |
+| [`LOWER()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lower) | Return the argument in lowercase |
+| [`LCASE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lcase) | Synonym for LOWER() |
+| [`UPPER()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_upper) | Convert to uppercase |
+| [`UCASE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ucase) | Synonym for UPPER() |
+| [`LPAD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lpad) | Return the string argument, left-padded with the specified string |
+| [`RPAD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_rpad) | Append string the specified number of times |
+| [`TRIM()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_trim) | Remove leading and trailing spaces |
+| [`LTRIM()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ltrim) | Remove leading spaces |
+| [`RTRIM()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_rtrim) | Remove trailing spaces |
+| [`BIT_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_bit-length) | Return length of argument in bits |
+| [`CHAR_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_char-length) | Return number of characters in argument |
+| 
[`CHARACTER_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_character-length) | Synonym for CHAR_LENGTH() | +| [`LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_length) | Return the length of a string in bytes | +| [`OCTET_LENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_octet-length) | Synonym for LENGTH() | +| [`INSERT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_insert) | Insert a substring at the specified position up to the specified number of characters | +| [`REPLACE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_replace) | Replace occurrences of a specified string | +| [`SUBSTR()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substr) | Return the substring as specified | +| [`SUBSTRING()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substring) | Return the substring as specified | +| [`SUBSTRING_INDEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substring-index) | Return a substring from a string before the specified number of occurrences of the delimiter | +| [`MID()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_mid) | Return a substring starting from the specified position | +| [`LEFT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_left) | Return the leftmost number of characters as specified | +| [`RIGHT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_right) | Return the specified rightmost number of characters | +| [`INSTR()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_instr) | Return the index of the first occurrence of substring | +| [`LOCATE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_locate) | Return the position of the first occurrence of substring | +| 
[`POSITION()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_position) | Synonym for LOCATE() | +| [`REPEAT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_repeat) | Repeat a string the specified number of times | +| [`CONCAT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat) | Return concatenated string | +| [`CONCAT_WS()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws) | Return concatenate with separator | +| [`REVERSE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_reverse) | Reverse the characters in a string | +| [`SPACE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_space) | Return a string of the specified number of spaces | +| [`FIELD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_field) | Return the index (position) of the first argument in the subsequent arguments | +| [`ELT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_elt) | Return string at index number | +| [`EXPORT_SET()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_export-set) | Return a string such that for every bit set in the value bits, you get an on string and for every unset bit, you get an off string | +| [`MAKE_SET()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_make-set) | Return a set of comma-separated strings that have the corresponding bit in bits set | +| [`FIND_IN_SET()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_find-in-set) | Return the index position of the first argument within the second argument | +| [`FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_format) | Return a number formatted to specified number of decimal places | +| [`ORD()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_ord) | Return character code 
for leftmost character of the argument | +| [`QUOTE()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_quote) | Escape the argument for use in an SQL statement | +| [`SOUNDEX()`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_soundex) | Return a soundex string | +| [`SOUNDS LIKE`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#operator_sounds-like) | Compare sounds | + +## String comparison functions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------| +| [`LIKE`](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_like) | Simple pattern matching | +| [`NOT LIKE`](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_not-like) | Negation of simple pattern matching | +| [`STRCMP()`](https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#function_strcmp) | Compare two strings | +| [`MATCH`](https://dev.mysql.com/doc/refman/5.7/en/fulltext-search.html#function_match) | Perform full-text search | + +## Regular expressions + +| Name | Description | +|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------| +| [`REGEXP`](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Pattern matching using regular expressions | +| [`RLIKE`](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_regexp) | Synonym for REGEXP | +| [`NOT REGEXP`](https://dev.mysql.com/doc/refman/5.7/en/regexp.html#operator_not-regexp) | Negation of REGEXP | diff --git a/v2.0/sql/system-database.md 
b/v2.0/sql/system-database.md
new file mode 100755
index 0000000000000..37154ad56c5f6
--- /dev/null
+++ b/v2.0/sql/system-database.md
@@ -0,0 +1,256 @@
+---
+title: The TiDB System Database
+summary: Learn about the tables contained in the TiDB System Database.
+category: user guide
+---
+
+# The TiDB System Database
+
+Similar to MySQL, the TiDB System Database contains tables that store the information required by the server when it runs.
+
+## Grant system tables
+
+These system tables contain grant information about user accounts and their privileges:
+
+- `user`: user accounts, global privileges, and other non-privilege columns
+- `db`: database-level privileges
+- `tables_priv`: table-level privileges
+- `columns_priv`: column-level privileges
+
+## Server-side help system tables
+
+Currently, the `help_topic` table is empty.
+
+## Statistics system tables
+
+- `stats_buckets`: the buckets of statistics
+- `stats_histograms`: the histograms of statistics
+- `stats_meta`: the meta information of tables, such as the total number of rows and updated rows
+
+## GC worker system tables
+
+- `gc_delete_range`: to record the data to be deleted
+
+## Miscellaneous system tables
+
+- `GLOBAL_VARIABLES`: global system variable table
+- `tidb`: to record the version information when TiDB executes `bootstrap`
+
+## INFORMATION\_SCHEMA tables
+
+To be compatible with MySQL, TiDB supports INFORMATION\_SCHEMA tables. Some third-party software queries information in these tables. Currently, most INFORMATION\_SCHEMA tables in TiDB are NULL.
+
+### CHARACTER\_SETS table
+
+The CHARACTER\_SETS table provides information about character sets, but it contains only dummy data. By default, TiDB only supports utf8mb4.
+ +```sql +mysql> select * from CHARACTER_SETS; ++--------------------|----------------------|-----------------------|--------+ +| CHARACTER_SET_NAME | DEFAULT_COLLATE_NAME | DESCRIPTION | MAXLEN | ++--------------------|----------------------|-----------------------|--------+ +| ascii | ascii_general_ci | US ASCII | 1 | +| binary | binary | Binary pseudo charset | 1 | +| latin1 | latin1_swedish_ci | cp1252 West European | 1 | +| utf8 | utf8_general_ci | UTF-8 Unicode | 3 | +| utf8mb4 | utf8mb4_general_ci | UTF-8 Unicode | 4 | ++--------------------|----------------------|-----------------------|--------+ +5 rows in set (0.00 sec) +``` + +### COLLATIONS table + +The COLLATIONS table is similar to the CHARACTER\_SETS table. + +### COLLATION\_CHARACTER\_SET\_APPLICABILITY table + +NULL. + +### COLUMNS table + +The COLUMNS table provides information about columns in tables. The information in this table is not accurate. To query information, it is recommended to use the `SHOW` statement: + +```sql +SHOW COLUMNS FROM table_name [FROM db_name] [LIKE 'wild'] +``` + +### COLUMN\_PRIVILEGES table + +NULL. + +### ENGINES table + +The ENGINES table provides information about storage engines. But it contains dummy data only. In the production environment, use the TiKV engine for TiDB. + +### EVENTS table + +NULL. + +### FILES table + +NULL. + +### GLOBAL\_STATUS table + +NULL. + +### GLOBAL\_VARIABLES table + +NULL. + +### KEY\_COLUMN\_USAGE table + +The KEY_COLUMN_USAGE table describes the key constraints of the columns, such as the primary key constraint. + +### OPTIMIZER\_TRACE table + +NULL. + +### PARAMETERS table + +NULL. + +### PARTITIONS table + +NULL. + +### PLUGINS table + +NULL. + +### PROFILING table + +NULL. + +### REFERENTIAL\_CONSTRAINTS table + +NULL. + +### ROUTINES table + +NULL. + +### SCHEMATA table + +The SCHEMATA table provides information about databases. The table data is equivalent to the result of the `SHOW DATABASES` statement. 
+ +```sql +mysql> select * from SCHEMATA; ++--------------|--------------------|----------------------------|------------------------|----------+ +| CATALOG_NAME | SCHEMA_NAME | DEFAULT_CHARACTER_SET_NAME | DEFAULT_COLLATION_NAME | SQL_PATH | ++--------------|--------------------|----------------------------|------------------------|----------+ +| def | INFORMATION_SCHEMA | utf8 | utf8_bin | NULL | +| def | mysql | utf8 | utf8_bin | NULL | +| def | PERFORMANCE_SCHEMA | utf8 | utf8_bin | NULL | +| def | test | utf8 | utf8_bin | NULL | ++--------------|--------------------|----------------------------|------------------------|----------+ +4 rows in set (0.00 sec) +``` + +### SCHEMA\_PRIVILEGES table + +NULL. + +### SESSION\_STATUS table + +NULL. + +### SESSION\_VARIABLES table + +The SESSION\_VARIABLES table provides information about session variables. The table data is similar to the result of the `SHOW SESSION VARIABLES` statement. + +### STATISTICS table + +The STATISTICS table provides information about table indexes. 
+ +```sql +mysql> desc statistics; ++---------------|---------------------|------|------|---------|-------+ +| Field | Type | Null | Key | Default | Extra | ++---------------|---------------------|------|------|---------|-------+ +| TABLE_CATALOG | varchar(512) | YES | | NULL | | +| TABLE_SCHEMA | varchar(64) | YES | | NULL | | +| TABLE_NAME | varchar(64) | YES | | NULL | | +| NON_UNIQUE | varchar(1) | YES | | NULL | | +| INDEX_SCHEMA | varchar(64) | YES | | NULL | | +| INDEX_NAME | varchar(64) | YES | | NULL | | +| SEQ_IN_INDEX | bigint(2) UNSIGNED | YES | | NULL | | +| COLUMN_NAME | varchar(21) | YES | | NULL | | +| COLLATION | varchar(1) | YES | | NULL | | +| CARDINALITY | bigint(21) UNSIGNED | YES | | NULL | | +| SUB_PART | bigint(3) UNSIGNED | YES | | NULL | | +| PACKED | varchar(10) | YES | | NULL | | +| NULLABLE | varchar(3) | YES | | NULL | | +| INDEX_TYPE | varchar(16) | YES | | NULL | | +| COMMENT | varchar(16) | YES | | NULL | | +| INDEX_COMMENT | varchar(1024) | YES | | NULL | | ++---------------|---------------------|------|------|---------|-------+ +``` + +The following statements are equivalent: + +```sql +SELECT * FROM INFORMATION_SCHEMA.STATISTICS + WHERE table_name = 'tbl_name' + AND table_schema = 'db_name' + +SHOW INDEX + FROM tbl_name + FROM db_name +``` + +### TABLES table + +The TABLES table provides information about tables in databases. + +The following statements are equivalent: + +```sql +SELECT table_name FROM INFORMATION_SCHEMA.TABLES + WHERE table_schema = 'db_name' + [AND table_name LIKE 'wild'] + +SHOW TABLES + FROM db_name + [LIKE 'wild'] +``` + +### TABLESPACES table + +NULL. + +### TABLE\_CONSTRAINTS table + +The TABLE_CONSTRAINTS table describes which tables have constraints. + +- The `CONSTRAINT_TYPE` value can be UNIQUE, PRIMARY KEY, or FOREIGN KEY. +- The UNIQUE and PRIMARY KEY information is similar to the result of the `SHOW INDEX` statement. + +### TABLE\_PRIVILEGES table + +NULL. + +### TRIGGERS table + +NULL. 
+
+### USER\_PRIVILEGES table
+
+The USER_PRIVILEGES table provides information about global privileges. This information comes from the mysql.user grant table.
+
+```sql
+mysql> desc USER_PRIVILEGES;
++----------------|--------------|------|------|---------|-------+
+| Field          | Type         | Null | Key  | Default | Extra |
++----------------|--------------|------|------|---------|-------+
+| GRANTEE        | varchar(81)  | YES  |      | NULL    |       |
+| TABLE_CATALOG  | varchar(512) | YES  |      | NULL    |       |
+| PRIVILEGE_TYPE | varchar(64)  | YES  |      | NULL    |       |
+| IS_GRANTABLE   | varchar(3)   | YES  |      | NULL    |       |
++----------------|--------------|------|------|---------|-------+
+4 rows in set (0.00 sec)
+```
+
+### VIEWS table
+
+NULL. Currently, TiDB does not support views.
diff --git a/v2.0/sql/tidb-memory-control.md b/v2.0/sql/tidb-memory-control.md
new file mode 100755
index 0000000000000..1a88bc5097cf1
--- /dev/null
+++ b/v2.0/sql/tidb-memory-control.md
@@ -0,0 +1,45 @@
+---
+title: TiDB Memory Control
+summary: Learn how to configure the memory quota of a query and avoid OOM (out of memory).
+category: user guide
+---
+
+# TiDB Memory Control
+
+Currently, TiDB can track the memory quota of a single SQL query and take actions to prevent OOM (out of memory) or troubleshoot OOM when the memory usage exceeds a specific threshold value. In the TiDB configuration file, you can configure the options as below to control TiDB behaviors when the memory quota exceeds the threshold value:
+
+```
+# Valid options: ["log", "cancel"]
+oom-action = "log"
+```
+
+- If the configuration item above uses "log", when the memory quota of a single SQL query exceeds the threshold value, which is controlled by the `tidb_mem_quota_query` variable, TiDB prints a log entry. Then the SQL query continues to be executed. If OOM occurs, you can find the corresponding SQL query in the log.
+- If the configuration item above uses "cancel", when the memory quota of a single SQL query exceeds the threshold value, TiDB stops executing the SQL query immediately and returns an error to the client. The error information clearly shows the memory usage of each physical execution operator that consumes much memory in the SQL execution process. + +## Configure the memory quota of a query + +You can control the memory quota of a query using the following session variables. Generally, you only need to configure `tidb_mem_quota_query`. Other variables are used for advanced configuration which most users do not need to care about. + +| Variable Name | Description | Unit | Default Value | +|-----------------------------------|---------------------------------------------------|-------|-----------| +| tidb_mem_quota_query | Control the memory quota of a query | Byte | 32 << 30 | +| tidb_mem_quota_hashjoin | Control the memory quota of "HashJoinExec" | Byte | 32 << 30 | +| tidb_mem_quota_mergejoin | Control the memory quota of "MergeJoinExec" | Byte | 32 << 30 | +| tidb_mem_quota_sort | Control the memory quota of "SortExec" | Byte | 32 << 30 | +| tidb_mem_quota_topn | Control the memory quota of "TopNExec" | Byte | 32 << 30 | +| tidb_mem_quota_indexlookupreader | Control the memory quota of "IndexLookUpExecutor" | Byte | 32 << 30 | +| tidb_mem_quota_indexlookupjoin | Control the memory quota of "IndexLookUpJoin" | Byte | 32 << 30 | +| tidb_mem_quota_nestedloopapply | Control the memory quota of "NestedLoopApplyExec" | Byte | 32 << 30 | + +Some usage examples: + +```sql +-- Set the threshold value of memory quota for a single SQL query to 8GB: +set @@tidb_mem_quota_query = 8 << 30; + +-- Set the threshold value of memory quota for a single SQL query to 8MB: +set @@tidb_mem_quota_query = 8 << 20; + +-- Set the threshold value of memory quota for a single SQL query to 8KB: +set @@tidb_mem_quota_query = 8 << 10; +``` diff --git a/v2.0/sql/tidb-server.md 
b/v2.0/sql/tidb-server.md new file mode 100755 index 0000000000000..4882dcf06acf9 --- /dev/null +++ b/v2.0/sql/tidb-server.md @@ -0,0 +1,35 @@ +--- +title: The TiDB Server +summary: Learn about the basic management functions of the TiDB cluster. +category: user guide +--- + +# The TiDB Server + +TiDB refers to the TiDB database management system. This document describes the basic management functions of the TiDB cluster. + +## TiDB cluster startup configuration + +You can set the service parameters using the command line or the configuration file, or both. The priority of the command line parameters is higher than the configuration file. If the same parameter is set in both ways, TiDB uses the value set using command line parameters. For more information, see [The TiDB Command Options](server-command-option.md). + +## TiDB system variable + +TiDB is compatible with MySQL system variables, and defines some unique system variables to adjust the database behavior. For more information, see [The Proprietary System Variables and Syntaxes in TiDB](tidb-specific.md). + +## TiDB system table + +Similar to MySQL, TiDB also has system tables that store the information needed when TiDB runs. For more information, see [The TiDB System Database](system-database.md). + +## TiDB data directory + +The TiDB data is stored in the storage engine and the data directory depends on the storage engine used. For more information about how to choose the storage engine, see the [TiDB startup parameters document](../op-guide/configuration.md#store). + +When you use the local storage engine, the data is stored on the local hard disk and the directory location is controlled by the [`path`](../op-guide/configuration.md#path) parameter. + +When you use the TiKV storage engine, the data is stored on the TiKV node and the directory location is controlled by the [`data-dir`](../op-guide/configuration.md#data-dir-1) parameter. 
+
+## TiDB server logs
+
+The three components of the TiDB cluster (`tidb-server`, `tikv-server` and `pd-server`) output their logs to standard error by default. In each of the three components, you can set the [`--log-file`](../op-guide/configuration.md#--log-file) parameter (or the configuration item in the configuration file) to output the log to a file.
+
+You can adjust the log behavior using the configuration file. For more details, see the configuration file description of each component. For example, the [`tidb-server` log configuration item](https://github.com/pingcap/tidb/blob/master/config/config.toml.example#L46).
diff --git a/v2.0/sql/tidb-specific.md b/v2.0/sql/tidb-specific.md
new file mode 100755
index 0000000000000..eb21d17ce9512
--- /dev/null
+++ b/v2.0/sql/tidb-specific.md
@@ -0,0 +1,333 @@
+---
+title: The Proprietary System Variables and Syntaxes in TiDB
+summary: Use the proprietary system variables and syntaxes in TiDB to optimize performance.
+category: user guide
+---
+
+# The Proprietary System Variables and Syntaxes in TiDB
+
+On the basis of MySQL variables and syntaxes, TiDB has defined some specific system variables and syntaxes to optimize performance.
+
+## System variable
+
+Variables can be set with the `SET` statement, for example:
+
+```
+set @@tidb_distsql_scan_concurrency = 10
+```
+
+If you need to set the global variable, run:
+
+```
+set @@global.tidb_distsql_scan_concurrency = 10
+```
+
+### tidb_snapshot
+
+- Scope: SESSION
+- Default value: ""
+- This variable is used to set the time point at which the data is read by the session. For example, when you set the variable to "2017-11-11 20:20:20" or a TSO number like "400036290571534337", the current session reads the data of this moment.
+
+### tidb_import_data
+
+- Scope: SESSION
+- Default value: 0
+- This variable indicates whether TiDB is currently importing data from a dump file.
+- To speed up importing, the unique index constraint is not checked when the variable is set to 1.
+- This variable is only used by Lightning. Do not modify it.
+
+### tidb_opt_agg_push_down
+
+- Scope: SESSION
+- Default value: 0
+- This variable is used to set whether the optimizer pushes the aggregate functions down to the position before Join.
+- When the aggregate operation is slow in a query, you can set the variable value to 1.
+
+### tidb_opt_insubquery_unfold
+
+- Scope: SESSION
+- Default value: 0
+- This variable is used to set whether the optimizer unfolds `IN` subqueries.
+
+### tidb_build_stats_concurrency
+
+- Scope: SESSION
+- Default value: 4
+- This variable is used to set the concurrency of executing the `ANALYZE` statement.
+- When the variable is set to a larger value, the execution performance of other queries is affected.
+
+### tidb_checksum_table_concurrency
+
+- Scope: SESSION
+- Default value: 4
+- This variable is used to set the scan index concurrency of executing the `ADMIN CHECKSUM TABLE` statement.
+- When the variable is set to a larger value, the execution performance of other queries is affected.
+
+### tidb_current_ts
+
+- Scope: SESSION
+- Default value: 0
+- This variable is read-only. It is used to obtain the timestamp of the current transaction.
+
+### tidb_config
+
+- Scope: SESSION
+- Default value: ""
+- This variable is read-only. It is used to obtain the configuration information of the current TiDB server.
+
+### tidb_distsql_scan_concurrency
+
+- Scope: SESSION | GLOBAL
+- Default value: 15
+- This variable is used to set the concurrency of the `scan` operation.
+- Use a bigger value in OLAP scenarios, and a smaller value in OLTP scenarios.
+- For OLAP scenarios, the maximum value cannot exceed the number of CPU cores of all the TiKV nodes.
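+
+As a sketch (the values are illustrative, not recommendations), an OLAP session might raise the scan concurrency while an OLTP session might lower it:
+
+```sql
+-- In an OLAP session, allow more concurrent Region scans:
+set @@tidb_distsql_scan_concurrency = 30;
+
+-- In an OLTP session, reduce the concurrency to limit resource usage:
+set @@tidb_distsql_scan_concurrency = 5;
+```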
+ +### tidb_index_lookup_size + +- Scope: SESSION | GLOBAL +- Default value: 20000 +- This variable is used to set the batch size of the `index lookup` operation. +- Use a bigger value in OLAP scenarios, and a smaller value in OLTP scenarios. + +### tidb_index_lookup_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 4 +- This variable is used to set the concurrency of the `index lookup` operation. +- Use a bigger value in OLAP scenarios, and a smaller value in OLTP scenarios. + +### tidb_index_lookup_join_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 4 +- This variable is used to set the concurrency of the `index lookup join` algorithm. + +### tidb_hash_join_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 5 +- This variable is used to set the concurrency of the `hash join` algorithm. + +### tidb_index_serial_scan_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 1 +- This variable is used to set the concurrency of the `serial scan` operation. +- Use a bigger value in OLAP scenarios, and a smaller value in OLTP scenarios. + +### tidb_projection_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 4 +- This variable is used to set the concurrency of the `Projection` operator. + +### tidb_hashagg_partial_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 4 +- This variable is used to set the concurrency of executing the concurrent `hash aggregation` algorithm in the `partial` phase. +- When the parameter of the aggregate function is not distinct, `HashAgg` is run concurrently and respectively in two phases - the `partial` phase and the `final` phase. + +### tidb_hashagg_final_concurrency + +- Scope: SESSION | GLOBAL +- Default value: 4 +- This variable is used to set the concurrency of executing the concurrent `hash aggregation` algorithm in the `final` phase. 
+- When the parameter of the aggregate function is not distinct, `HashAgg` is run concurrently and respectively in two phases - the `partial` phase and the `final` phase. + +### tidb_index_join_batch_size + +- Scope: SESSION | GLOBAL +- Default value: 25000 +- This variable is used to set the batch size of the `index lookup join` operation. +- Use a bigger value in OLAP scenarios, and a smaller value in OLTP scenarios. + +### tidb_skip_utf8_check + +- Scope: SESSION | GLOBAL +- Default value: 0 +- This variable is used to set whether to skip UTF-8 validation. +- Validating UTF-8 characters affects the performance. When you are sure that the input characters are valid UTF-8 characters, you can set the variable value to 1. + +### tidb_batch_insert + +- Scope: SESSION +- Default value: 0 +- This variable is used to set whether to divide the inserted data automatically. It is valid only when `autocommit` is enabled. +- When inserting a large amount of data, you can set the variable value to true. Then the inserted data is automatically divided into multiple batches and each batch is inserted by a single transaction. + +### tidb_batch_delete + +- Scope: SESSION +- Default value: 0 +- This variable is used to set whether to divide the data for deletion automatically. It is valid only when `autocommit` is enabled. +- When deleting a large amount of data, you can set the variable value to true. Then the data for deletion is automatically divided into multiple batches and each batch is deleted by a single transaction. + +### tidb_dml_batch_size + +- Scope: SESSION +- Default value: 20000 +- This variable is used to set the automatically divided batch size of the data for insertion/deletion. It is only valid when `tidb_batch_insert` or `tidb_batch_delete` is enabled. +- When the data size of a single row is very large, the overall data size of 20 thousand rows exceeds the size limit for a single transaction. In this case, set the variable to a smaller value. 
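+
+For example, to delete a large amount of data in automatically divided batches within an autocommit session (the table `t`, the column `created_at`, and the values are illustrative):
+
+```sql
+-- Enable automatic batching for deletion and shrink the batch size:
+set @@tidb_batch_delete = 1;
+set @@tidb_dml_batch_size = 5000;
+
+-- The deletion is now split into batches of 5000 rows, each committed by its own transaction:
+delete from t where created_at < '2018-01-01';
+```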
+ +### tidb_max_chunk_size + +- Scope: SESSION | GLOBAL +- Default value: 1024 +- This variable is used to set the maximum number of rows in a chunk during the execution process. + +### tidb_mem_quota_query + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for a query. +- If the memory quota of a query during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. + +### tidb_mem_quota_hashjoin + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for the `HashJoin` operator. +- If the memory quota of the `HashJoin` operator during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. + +### tidb_mem_quota_mergejoin + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for the `MergeJoin` operator. +- If the memory quota of the `MergeJoin` operator during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. + +### tidb_mem_quota_sort + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for the `Sort` operator. +- If the memory quota of the `Sort` operator during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. + +### tidb_mem_quota_topn + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for the `TopN` operator. +- If the memory quota of the `TopN` operator during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. 
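+
+The memory quota variables in this section are session-scoped and specified in bytes; a minimal sketch of tightening the per-query quota (the value is illustrative):
+
+```sql
+-- Illustrative: lower the per-query memory quota to 8 GB for this session.
+SET tidb_mem_quota_query = 8589934592;  -- 8 * 1024 * 1024 * 1024 bytes
+```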
+ +### tidb_mem_quota_indexlookupreader + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for the `IndexLookupReader` operator. +- If the memory quota of the `IndexLookupReader` operator during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. + +### tidb_mem_quota_indexlookupjoin + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for the `IndexLookupJoin` operator. +- If the memory quota of the `IndexLookupJoin` operator during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. + +### tidb_mem_quota_nestedloopapply + +- Scope: SESSION +- Default value: 32 GB +- This variable is used to set the threshold value of memory quota for the `NestedLoopApply` operator. +- If the memory quota of the `NestedLoopApply` operator during execution exceeds the threshold value, TiDB performs the operation designated by the OOMAction option in the configuration file. + +### tidb_general_log + +- Scope: SERVER +- Default value: 0 +- This variable is used to set whether to record all the SQL statements in the log. + +### tidb_enable_streaming + +- Scope: SERVER +- Default value: 0 +- This variable is used to set whether to enable Streaming. + +### tidb_retry_limit + +- Scope: SESSION | GLOBAL +- Default value: 10 +- When a transaction encounters retriable errors, such as transaction conflicts and TiKV busy, this transaction can be re-executed. This variable is used to set the maximum number of the retries. + +### tidb_disable_txn_auto_retry + +- Scope: SESSION | GLOBAL +- Default: 0 +- This variable is used to set whether to disable automatic retry of explicit transactions. If you set this variable to 1, the transaction does not retry automatically. 
If there is a conflict, the transaction needs to be retried at the application layer. To decide whether you need to disable automatic retry, see [description of optimistic transactions](transaction-isolation.md#description-of-optimistic-transactions).
+
+### tidb_enable_table_partition
+
+- Scope: SESSION
+- Default value: 0
+- This variable is used to set whether to enable the `TABLE PARTITION` feature.
+
+### tidb_backoff_lock_fast
+
+- Scope: SESSION | GLOBAL
+- Default value: 100
+- This variable is used to set the `backoff` time when a read request meets a lock.
+
+### tidb_ddl_reorg_worker_cnt
+
+- Scope: SESSION | GLOBAL
+- Default value: 16
+- This variable is used to set the concurrency of the DDL operation in the `re-organize` phase.
+
+### tidb_ddl_reorg_priority
+
+- Scope: SESSION | GLOBAL
+- Default value: `PRIORITY_NORMAL`
+- This variable is used to set the priority of executing the `ADD INDEX` operation in the `re-organize` phase.
+- You can set the value of this variable to `PRIORITY_LOW`, `PRIORITY_NORMAL` or `PRIORITY_HIGH`.
+
+## Optimizer Hint
+
+Based on MySQL's `Optimizer Hint` syntax, TiDB adds some proprietary `Hint` syntaxes. When you use the `Hint` syntax, the TiDB optimizer tries to use the specified algorithm, which performs better than the default algorithm in some scenarios.
+
+The `Hint` syntax is included in comments like `/*+ xxx */`, and in MySQL client versions earlier than 5.7.7, the comment is removed by default. If you want to use the `Hint` syntax in these earlier versions, add the `--comments` option when starting the client. For example: `mysql -h 127.0.0.1 -P 4000 -uroot --comments`.
+
+### TIDB_SMJ(t1, t2)
+
+```sql
+SELECT /*+ TIDB_SMJ(t1, t2) */ * FROM t1, t2 WHERE t1.id = t2.id;
+```
+
+This hint tells the optimizer to use the `Sort Merge Join` algorithm. This algorithm takes up less memory, but takes longer to execute.
It is recommended when the data size is large or the system memory is insufficient.
+
+### TIDB_INLJ(t1, t2)
+
+```sql
+SELECT /*+ TIDB_INLJ(t1, t2) */ * FROM t1, t2 WHERE t1.id = t2.id;
+```
+
+This hint tells the optimizer to use the `Index Nested Loop Join` algorithm. In some scenarios, this algorithm runs faster and takes up fewer system resources, but it might be slower and take up more system resources in other scenarios. You can try this algorithm in scenarios where the result set is less than 10,000 rows after the outer table is filtered by the `WHERE` condition. The parameter in `TIDB_INLJ()` is the candidate table for the driving table (outer table) when generating the query plan. That means `TIDB_INLJ(t1)` only considers using t1 as the driving table to create a query plan.
+
+### TIDB_HJ(t1, t2)
+
+```sql
+SELECT /*+ TIDB_HJ(t1, t2) */ * FROM t1, t2 WHERE t1.id = t2.id;
+```
+
+This hint tells the optimizer to use the `Hash Join` algorithm. This algorithm executes multiple threads concurrently. It runs faster but takes up more memory.
+
+## _tidb_rowid
+
+This is a hidden column of TiDB, which represents the column name of the implicit ROW ID. It only exists on tables with a non-integer primary key or without a primary key. You can execute the `SELECT`, `INSERT`, `UPDATE` and `DELETE` statements on this column. The usage of these statements is as follows:
+
+- `SELECT`: `SELECT *, _tidb_rowid from t;`
+- `INSERT`: `INSERT t (c, _tidb_rowid) VALUES (1, 1);`
+- `UPDATE`: `UPDATE t SET c = c + 1 WHERE _tidb_rowid = 1;`
+- `DELETE`: `DELETE FROM t WHERE _tidb_rowid = 1;`
+
+## SHARD_ROW_ID_BITS
+
+You can use this table option to set the number of bits for the implicit `_tidb_rowid` shards.
+
+For tables with a non-integer primary key or without a primary key, TiDB uses an implicit auto-increment ROW ID. When a large number of `INSERT` operations occur, the data is written into a single Region, causing a write hot spot.
+
+To mitigate the hot spot issue, you can configure `SHARD_ROW_ID_BITS`. The ROW ID is scattered and the data is written into multiple different Regions. However, setting the value too large might lead to an excessively large number of RPC requests, which increases the CPU and network overheads.
+
+- `SHARD_ROW_ID_BITS = 4` indicates 16 shards
+- `SHARD_ROW_ID_BITS = 6` indicates 64 shards
+- `SHARD_ROW_ID_BITS = 0` indicates the default 1 shard
+
+Usage of statements:
+
+- `CREATE TABLE`: `CREATE TABLE t (c int) SHARD_ROW_ID_BITS = 4;`
+- `ALTER TABLE`: `ALTER TABLE t SHARD_ROW_ID_BITS = 4;`
diff --git a/v2.0/sql/time-zone.md b/v2.0/sql/time-zone.md
new file mode 100755
index 0000000000000..baa83244a9672
--- /dev/null
+++ b/v2.0/sql/time-zone.md
@@ -0,0 +1,66 @@
+---
+title: Time Zone
+summary: Learn how to set the time zone and its format.
+category: user guide
+---
+
+# Time Zone
+
+The time zone in TiDB is decided by the global `time_zone` system variable and the session `time_zone` system variable. The initial value of `time_zone` is 'SYSTEM', which indicates that the server time zone is the same as the system time zone.
+
+You can use the following statement to set the global server `time_zone` value at runtime:
+
+```sql
+mysql> SET GLOBAL time_zone = timezone;
+```
+
+Each client has its own time zone setting, given by the session `time_zone` variable. Initially, the session variable takes its value from the global `time_zone` variable, but the client can change its own time zone with this statement:
+
+```sql
+mysql> SET time_zone = timezone;
+```
+
+You can use the following statement to view the current values of the global and client-specific time zones:
+
+```sql
+mysql> SELECT @@global.time_zone, @@session.time_zone;
+```
+
+The value of `time_zone` can be given in the following formats:
+
+- The value 'SYSTEM' indicates that the time zone should be the same as the system time zone.
+- The value can be given as a string indicating an offset from UTC, such as '+10:00' or '-6:00'.
+- The value can be given as a named time zone, such as 'Europe/Helsinki', 'US/Eastern', or 'MET'.
+
+The current session time zone setting affects the display and storage of time values that are zone-sensitive. This includes the values displayed by functions such as `NOW()` or `CURTIME()`, and the values stored in and retrieved from `TIMESTAMP` columns.
+
+> **Note**: Only the values of the Timestamp data type are affected by time zone. This is because the Timestamp data type uses the literal value + time zone information. Other data types, such as Datetime/Date/Time, do not have time zone information, thus their values are not affected by the changes of time zone.
+
+```sql
+mysql> create table t (ts timestamp, dt datetime);
+Query OK, 0 rows affected (0.02 sec)
+
+mysql> set @@time_zone = 'UTC';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> insert into t values ('2017-09-30 11:11:11', '2017-09-30 11:11:11');
+Query OK, 1 row affected (0.00 sec)
+
+mysql> set @@time_zone = '+8:00';
+Query OK, 0 rows affected (0.00 sec)
+
+mysql> select * from t;
++---------------------+---------------------+
+| ts                  | dt                  |
++---------------------+---------------------+
+| 2017-09-30 19:11:11 | 2017-09-30 11:11:11 |
++---------------------+---------------------+
+1 row in set (0.00 sec)
+```
+
+In this example, no matter how you adjust the value of the time zone, the value of the Datetime data type is not affected. But the displayed value of the Timestamp data type changes if the time zone information changes. In fact, the value that is stored in the storage does not change; it is just displayed differently according to the time zone setting.
+
+> **Note**:
+>
+> - Time zone is involved during the conversion of the value of Timestamp and Datetime, which is handled based on the current `time_zone` of the session.
+> - For data migration, you need to pay special attention to the time zone setting of the master database and the slave database.
\ No newline at end of file diff --git a/v2.0/sql/transaction-isolation.md b/v2.0/sql/transaction-isolation.md new file mode 100755 index 0000000000000..e459d1df14f64 --- /dev/null +++ b/v2.0/sql/transaction-isolation.md @@ -0,0 +1,150 @@ +--- +title: TiDB Transaction Isolation Levels +summary: Learn about the transaction isolation levels in TiDB. +category: user guide +--- + +# TiDB Transaction Isolation Levels + +Transaction isolation is one of the foundations of database transaction processing. Isolation is the I in the acronym ACID (Atomicity, Consistency, Isolation, Durability), which represents the isolation property of database transactions. + +The SQL-92 standard defines four levels of transaction isolation: Read Uncommitted, Read Committed, Repeatable Read and Serializable. See the following table for details: + +| Isolation Level | Dirty Read | Nonrepeatable Read | Phantom Read | Serialization Anomaly | +| ---------------- | ------------ | ------------------ | --------------------- | --------------------- | +| Read Uncommitted | Possible | Possible | Possible | Possible | +| Read Committed | Not possible | Possible | Possible | Possible | +| Repeatable Read | Not possible | Not possible | Not possible in TiDB | Possible | +| Serializable | Not possible | Not possible | Not possible | Not possible | + +TiDB offers two transaction isolation levels: Read Committed and Repeatable Read. + +TiDB uses the [Percolator transaction model](https://research.google.com/pubs/pub36726.html). A global read timestamp is obtained when the transaction is started, and a global commit timestamp is obtained when the transaction is committed. The execution order of transactions is confirmed based on the timestamps. To know more about the implementation of TiDB transaction model, see [MVCC in TiKV](https://pingcap.com/blog/2016-11-17-mvcc-in-tikv/). 
+
+Use the following command to set the isolation level of the Session or Global transaction:
+
+```
+SET [SESSION | GLOBAL] TRANSACTION ISOLATION LEVEL [read committed|repeatable read]
+```
+
+If you do not use the Session or Global keyword, this statement takes effect only for the next transaction to be executed, not for the entire session or globally.
+
+```
+SET TRANSACTION ISOLATION LEVEL [read committed|repeatable read]
+```
+
+## Repeatable Read
+
+Repeatable Read is the default transaction isolation level in TiDB. A transaction at the Repeatable Read isolation level only sees data committed before the transaction begins, and it never sees either uncommitted data or changes committed during transaction execution by concurrent transactions. However, the transaction does see the effects of previous updates executed within its own transaction, even though they are not yet committed.
+
+For transactions running on different nodes, the start and commit order depends on the order in which the timestamps are obtained from PD.
+
+Transactions at the Repeatable Read isolation level cannot concurrently update the same row. When committing, if a transaction finds that the row has been updated by another transaction after it started, the transaction rolls back and retries automatically. For example:
+
+```
+create table t1(id int);
+insert into t1 values(0);
+
+start transaction;     |  start transaction;
+select * from t1;      |  select * from t1;
+update t1 set id=id+1; |  update t1 set id=id+1;
+commit;                |
+                       |  commit; -- roll back and retry automatically
+```
+
+### Difference between TiDB and ANSI Repeatable Read
+
+The Repeatable Read isolation level in TiDB differs from the ANSI Repeatable Read isolation level, though they share the same name.
According to the standard described in the [A Critique of ANSI SQL Isolation Levels](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf) paper, TiDB implements the snapshot isolation level, which does not allow phantom reads but allows write skews. In contrast, the ANSI Repeatable Read isolation level allows phantom reads but does not allow write skews.
+
+### Difference between TiDB and MySQL Repeatable Read
+
+The Repeatable Read isolation level in TiDB differs from that in MySQL. The MySQL Repeatable Read isolation level does not check whether the current version is visible when updating, which means it can continue to update even if the row has been updated after the transaction starts. In contrast, if the row has been updated after the transaction starts, the TiDB transaction is rolled back and retried. Transaction retries in TiDB might fail, leading to a final failure of the transaction, while in MySQL the updating transaction can be successful.
+
+The MySQL Repeatable Read isolation level is not the snapshot isolation level. The consistency of the MySQL Repeatable Read isolation level is weaker than both the snapshot isolation level and the TiDB Repeatable Read isolation level.
+
+## Read Committed
+
+The Read Committed isolation level differs from the Repeatable Read isolation level: Read Committed only guarantees that uncommitted data cannot be read.
+
+**Note:** Because the transaction commit is a dynamic process, the Read Committed isolation level might read the data committed by part of a transaction. It is not recommended to use the Read Committed isolation level in a database that requires strict consistency.
+
+## Transaction retry
+
+For `insert`/`delete`/`update` operations, if the transaction fails with an error that the system considers retriable, the transaction is automatically retried within the system.
+
+You can control the number of retries by configuring the `retry-limit` parameter:
+
+```
+[performance]
+...
+# The maximum number of retries when committing a transaction.
+retry-limit = 10
+```
+
+## Description of optimistic transactions
+
+Because TiDB uses the optimistic transaction model, the final result might not be as expected if the transactions created by the explicit `BEGIN` statement automatically retry after meeting a conflict.
+
+Example 1:
+
+| Session1 | Session2 |
+| ---------------- | ------------ |
+| `begin;` | `begin;` |
+| `select balance from t where id = 1;` | `update t set balance = balance - 100 where id = 1;` |
+| | `update t set balance = balance - 100 where id = 2;` |
+| // the subsequent logic depends on the result of `select` | `commit;` |
+| `if balance > 100 {` | |
+| `update t set balance = balance + 100 where id = 2;` | |
+| `}` | |
+| `commit;` // automatic retry | |
+
+Example 2:
+
+| Session1 | Session2 |
+| ---------------- | ------------ |
+| `begin;` | `begin;` |
+| `update t set balance = balance - 100 where id = 1;` | `delete from t where id = 1;` |
+| | `commit;` |
+| // the subsequent logic depends on the result of `affected_rows` | |
+| `if affected_rows > 0 {` | |
+| `update t set balance = balance + 100 where id = 2;` | |
+| `}` | |
+| `commit;` // automatic retry | |
+
+Under the automatic retry mechanism of TiDB, all of the statements executed the first time are re-executed. When the execution of subsequent statements depends on the results of previous statements, automatic retry cannot guarantee that the final result is as expected.
+
+To disable the automatic retry of explicit transactions, configure the `tidb_disable_txn_auto_retry` global variable:
+
+```
+set @@global.tidb_disable_txn_auto_retry = 1;
+```
+
+This variable does not affect implicit single statements with `auto_commit = 1`; this type of statement still automatically retries.
+
+After the automatic retry of explicit transactions is disabled, if a transaction conflict occurs, the `commit` statement returns an error that includes the `try again later` string. The application layer can use this string to judge whether the error can be retried.
+
+If application layer logic is included in the process of transaction execution, it is recommended to disable the automatic retry of explicit transactions and implement the retry at the application layer.
+
+## Statement rollback
+
+If a statement within a transaction fails, that statement does not take effect, but the rest of the transaction can still be committed.
+
+```
+begin;
+insert into test values (1);
+insert into tset values (2); // This statement does not take effect because "test" is misspelled as "tset".
+insert into test values (3);
+commit;
+```
+
+In the above example, the second `insert` statement fails, while the other two `insert` statements (1 & 3) can be successfully committed.
+
+```
+begin;
+insert into test values (1);
+insert into tset values (2); // This statement does not take effect because "test" is misspelled as "tset".
+insert into test values (3);
+rollback;
+```
+
+In the above example, the second `insert` statement fails, and this transaction does not insert any data into the database because `rollback` is called.
\ No newline at end of file
diff --git a/v2.0/sql/transaction.md b/v2.0/sql/transaction.md
new file mode 100755
index 0000000000000..b95d5c798ae4f
--- /dev/null
+++ b/v2.0/sql/transaction.md
@@ -0,0 +1,78 @@
+---
+title: Transactions
+summary: Learn how to use the distributed transaction statements.
+category: user guide
+---
+
+# Transactions
+
+TiDB supports distributed transactions. The statements that relate to transactions include the `Autocommit` variable, `START TRANSACTION`/`BEGIN`, `COMMIT` and `ROLLBACK`.
+
+## Autocommit
+
+Syntax:
+
+```sql
+SET autocommit = {0 | 1}
+```
+
+If you set the value of `autocommit` to 1, the status of the current Session is autocommit.
If you set the value of `autocommit` to 0, the status of the current Session is non-autocommit. The value of `autocommit` is 1 by default.
+
+In the autocommit status, the updates are automatically committed to the database after you run each statement. Otherwise, the updates are only committed when you run the `COMMIT` or `BEGIN` statement.
+
+`autocommit` is also a System Variable. You can update the current Session value or the Global value using the following variable assignment statements:
+
+```sql
+SET @@SESSION.autocommit = {0 | 1};
+SET @@GLOBAL.autocommit = {0 | 1};
+```
+
+## START TRANSACTION, BEGIN
+
+Syntax:
+
+```sql
+BEGIN;
+
+START TRANSACTION;
+
+START TRANSACTION WITH CONSISTENT SNAPSHOT;
+```
+
+All three statements above explicitly start a new transaction. If the current Session is already in the process of a transaction when one of these statements is run, the current transaction is committed before the new transaction starts.
+
+## COMMIT
+
+Syntax:
+
+```sql
+COMMIT;
+```
+
+This statement is used to commit the current transaction, including all the updates between `BEGIN` and `COMMIT`.
+
+## ROLLBACK
+
+Syntax:
+
+```sql
+ROLLBACK;
+```
+
+This statement is used to roll back the current transaction and cancel all the updates made since `BEGIN`.
+
+## Explicit and implicit transaction
+
+TiDB supports explicit transactions (`BEGIN`/`COMMIT`) and implicit transactions (`SET autocommit = 1`).
+
+If you set the value of `autocommit` to 1 and start a new transaction through `BEGIN`, autocommit is suspended until `COMMIT`/`ROLLBACK`, which makes the transaction explicit.
+
+For DDL statements, the transaction is committed automatically and rollback is not supported. If you run a DDL statement while the current Session is in the process of a transaction, the DDL statement is run after the current transaction is committed.
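+
+Putting the statements above together, a minimal explicit-transaction sketch (the table name and values are illustrative):
+
+```sql
+SET autocommit = 1;                        -- autocommit on (the default)
+BEGIN;                                     -- starts an explicit transaction
+INSERT INTO accounts VALUES (1, 100);      -- hypothetical table (id, balance)
+UPDATE accounts SET balance = balance - 10 WHERE id = 1;
+COMMIT;                                    -- both updates become visible atomically
+```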
+
+## Transaction isolation level
+
+TiDB uses `SNAPSHOT ISOLATION` by default. You can set the isolation level of the current Session to `READ COMMITTED` using the following statement:
+
+```sql
+SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
+```
diff --git a/v2.0/sql/type-conversion-in-expression-evaluation.md b/v2.0/sql/type-conversion-in-expression-evaluation.md
new file mode 100755
index 0000000000000..e17bae0935b9b
--- /dev/null
+++ b/v2.0/sql/type-conversion-in-expression-evaluation.md
@@ -0,0 +1,9 @@
+---
+title: Type Conversion in Expression Evaluation
+summary: Learn about the type conversion in expression evaluation.
+category: user guide
+---
+
+# Type Conversion in Expression Evaluation
+
+TiDB behaves the same as MySQL: [https://dev.mysql.com/doc/refman/5.7/en/type-conversion.html](https://dev.mysql.com/doc/refman/5.7/en/type-conversion.html)
diff --git a/v2.0/sql/understanding-the-query-execution-plan.md b/v2.0/sql/understanding-the-query-execution-plan.md
new file mode 100755
index 0000000000000..39a29d93e2a18
--- /dev/null
+++ b/v2.0/sql/understanding-the-query-execution-plan.md
@@ -0,0 +1,86 @@
+---
+title: Understand the Query Execution Plan
+summary: Learn about the execution plan information returned by the `EXPLAIN` statement in TiDB.
+category: user guide
+---
+
+# Understand the Query Execution Plan
+
+Based on the details of your tables, the TiDB optimizer chooses the most efficient query execution plan, which consists of a series of operators. This document details the execution plan information returned by the `EXPLAIN` statement in TiDB.
+
+## Optimize SQL statements using `EXPLAIN`
+
+The result of the `EXPLAIN` statement provides information about how TiDB executes SQL queries:
+
+- `EXPLAIN` works together with `SELECT`, `DELETE`, `INSERT`, `REPLACE`, and `UPDATE`.
+- When you run the `EXPLAIN` statement, TiDB returns the final optimized physical execution plan for the SQL statement that follows `EXPLAIN`.
In other words, `EXPLAIN` displays the complete information about how TiDB executes the SQL statement, such as in which order it executes, how tables are joined, and what the expression tree looks like. For more information, see [`EXPLAIN` output format](#explain-output-format).
+- Currently, TiDB does not support `EXPLAIN [options] FOR CONNECTION connection_id`. It will be supported in a future release. For more information, see [#4351](https://github.com/pingcap/tidb/issues/4351).
+
+The results of `EXPLAIN` shed light on how to index the data tables so that the execution plan can use the index to speed up the execution of SQL statements. You can also use `EXPLAIN` to check if the optimizer chooses the optimal order to join tables.
+
+## `EXPLAIN` output format
+
+Currently, the `EXPLAIN` statement returns the following four columns: id, count, task, operator info. Each operator in the execution plan is described by these four properties. In the results returned by `EXPLAIN`, each row describes an operator. See the following table for details:
+
+| Property Name | Description |
+| -----| ------------- |
+| id | The id of an operator, which identifies the uniqueness of an operator in the entire execution plan. As of TiDB 2.1, the id includes formatting to show a tree structure of operators. The data flows from a child to its parent, and each operator has one and only one parent. |
+| count | An estimation of the number of data items that the current operator outputs, based on the statistics and the execution logic of the operator |
+| task | The task that the current operator belongs to. The current execution plan contains two types of tasks: 1) the **root** task that runs on the TiDB server; 2) the **cop** task that runs concurrently on the TiKV server. The topological relation of the current execution plan at the task level is that a root task can be followed by many cop tasks. The root task uses the output of the cop tasks as its input. The cop task executes the tasks that TiDB pushes to TiKV.
Each cop task scatters in the TiKV cluster and is executed by multiple processes. |
+| operator info | The details about each operator. The information of each operator differs from others. See [Operator Info](#operator-info). |
+
+## Overview
+
+### Introduction to task
+
+Currently, the calculation tasks of TiDB fall into two different types: the cop task and the root task. The cop task refers to the computing task that is pushed to the KV side and executed in a distributed manner. The root task refers to the computing task that is executed at a single point in TiDB. One of the goals of SQL optimization is to push the calculation down to the KV side as much as possible.
+
+### Table data and index data
+
+The table data in TiDB refers to the raw data of a table, which is stored in TiKV. For each row of the table data, its key is a 64-bit integer called Handle ID. If a table has an int type primary key, the value of the primary key is taken as the Handle ID of the table data; otherwise the system automatically generates the Handle ID. The value of the table data is encoded from all the data in this row. When the table data is read, the results are returned in the order in which the Handle ID increases.
+
+Similar to the table data, the index data in TiDB is also stored in TiKV. The key of index data is ordered bytes encoded from the index columns. The value is the Handle ID of each row of index data. You can use the Handle ID to read the non-index columns in this row. When the index data is read, the results are returned in the increasing order of the index columns. In the case of multiple index columns, the results are sorted by the first column, and rows with equal values in the i-th column are sorted by the (i+1)-th column.
+
+### Range query
+
+TiDB analyzes the conditions in the WHERE/HAVING/ON clauses to determine the query range on the primary key or index key.
The conditions that can be used include the comparison operators for number and date types (greater than, less than, equal to, greater than or equal to, and less than or equal to) and the `LIKE` operator for character types.
+
+TiDB only supports comparisons in which one side is a column and the other side is a constant or an expression that can be calculated as a constant. Query conditions like `year(birth_day) < 1992` cannot use the index. Try to compare values of the same type: additional cast operations prevent the index from being used. For example, in `user_id = 123456`, if `user_id` is a string, you need to write `123456` as a string constant.
+
+Using an `AND` or `OR` combination on the range query conditions of the same column is equivalent to getting the intersection or union set. For multidimensional combined indexes, you can write the conditions for multiple columns. For example, in the `(a, b, c)` combined index, when `a` is an equivalent query, you can continue to calculate the query range of `b`; when `b` is also an equivalent query, you can continue to calculate the query range of `c`; otherwise, if `a` is a non-equivalent query, you can only calculate the query range of `a`.
+
+## Operator info
+
+### TableReader and TableScan
+
+TableScan refers to scanning the table data on the KV side. TableReader refers to reading the table data from TiKV on the TiDB side. TableReader and TableScan are two operators that together perform one function. The `table` represents the table name in SQL statements. If the table is renamed, it displays the new name. The `range` represents the range of scanned data. If no WHERE/HAVING/ON condition is specified in the query, a full table scan is executed. If a range query condition is specified on an int type primary key, a range query is executed. The `keep order` indicates whether the table scan returns results in order.
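+
+As a minimal illustration of the above (the table is hypothetical, and the exact `EXPLAIN` output depends on the TiDB version):
+
+```sql
+-- Illustrative: a range condition on an int type primary key produces a
+-- TableReader/TableScan plan whose `range` is bounded instead of a full scan.
+CREATE TABLE t (id INT PRIMARY KEY, c1 INT);
+EXPLAIN SELECT * FROM t WHERE id > 10 AND id < 100;
+```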
+
+### IndexReader and IndexLookUp
+
+The index data in TiDB is read in two ways: 1) IndexReader represents reading the index columns directly from the index, which is used when only index-related columns or primary keys are referenced in SQL statements; 2) IndexLookUp represents filtering part of the data from the index, returning only the Handle IDs, and then retrieving the table data using the Handle IDs. In the second way, data is retrieved twice from TiKV. The way of reading index data is automatically selected by the optimizer.
+
+Similar to TableScan, IndexScan is the operator that reads index data on the KV side. The `table` represents the table name in SQL statements. If the table is renamed, it displays the new name. The `index` represents the index name. The `range` represents the range of scanned data. The `out of order` indicates whether the index scan returns results in order. In TiDB, a primary key composed of multiple columns or non-int columns is treated as a unique index.
+
+### Selection
+
+Selection represents the selection conditions in SQL statements, usually used in the WHERE/HAVING/ON clauses.
+
+### Projection
+
+Projection corresponds to the `SELECT` list in SQL statements, used to map the input data into new output data.
+
+### Aggregation
+
+Aggregation corresponds to `Group By` in SQL statements, or to the aggregate functions if no `Group By` clause exists, such as the `COUNT` or `SUM` function. TiDB supports two aggregation algorithms: Hash Aggregation and Stream Aggregation. Hash Aggregation is a hash-based aggregation algorithm. If Hash Aggregation is adjacent to the read operator of a Table or Index, the aggregation operator pre-aggregates in TiKV to improve the concurrency and reduce the network load.
+
+### Join
+
+TiDB supports Inner Join and Left/Right Outer Join, and automatically converts outer joins that can be simplified into Inner Joins.
+
+TiDB supports three Join algorithms: Hash Join, Sort Merge Join and Index Lookup Join.
The principle of Hash Join is to pre-load the smaller table involved in the join into memory, then read all the data of the larger table and probe the in-memory hash table to perform the join. The principle of Sort Merge Join is to read the data of the two tables at the same time and compare them row by row, using the order information of the input data. Index Lookup Join reads rows from the outer table and executes primary key or index key lookups on the inner table. + +### Apply + +Apply is an operator used to describe subqueries in TiDB. Its behavior is similar to Nested Loop: the Apply operator retrieves one row from the outer table, puts its value into the correlated column of the inner table, and then executes the join according to the inner Join algorithm in Apply. + +Generally, the Apply operator is automatically converted to a Join operation by the query optimizer. Therefore, try to avoid the Apply operator when you write SQL statements. diff --git a/v2.0/sql/user-account-management.md b/v2.0/sql/user-account-management.md new file mode 100755 index 0000000000000..c0af9fdd20421 --- /dev/null +++ b/v2.0/sql/user-account-management.md @@ -0,0 +1,95 @@ +--- +title: TiDB User Account Management +summary: Learn how to manage a TiDB user account. +category: user guide +--- + +# TiDB User Account Management + +This document describes how to manage a TiDB user account. + +## User names and passwords + +TiDB stores the user accounts in the `mysql.user` system table. Each account is identified by a user name and the client host. Each account may have a password. 
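Because the accounts live in an ordinary system table, you can inspect them with a query. A minimal sketch, assuming you have read access to the `mysql` database (column details may vary between TiDB versions):

```sql
-- List each account and whether a password hash is stored for it.
-- The Password column stores only a hash, never the plain text.
SELECT User, Host, Password <> '' AS has_password
FROM mysql.user
ORDER BY User, Host;
```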
+ +You can connect to the TiDB server using the MySQL client, and use the specified account and password to log in: + +```bash +shell> mysql --port 4000 --user xxx --password +``` + +Or use the abbreviated command-line options: + +```bash +shell> mysql -P 4000 -u xxx -p +``` + +## Add user accounts + +You can create TiDB accounts in two ways: + +- By using the standard account-management SQL statements intended for creating accounts and establishing their privileges, such as `CREATE USER` and `GRANT`. +- By manipulating the grant tables directly with statements such as `INSERT`, `UPDATE`, or `DELETE`. + +It is recommended to use the account-management statements, because manipulating the grant tables directly can lead to incomplete updates. You can also create accounts by using third-party GUI tools. + +The following example uses the `CREATE USER` and `GRANT` statements to set up four accounts: + +```sql +mysql> CREATE USER 'finley'@'localhost' IDENTIFIED BY 'some_pass'; +mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'localhost' WITH GRANT OPTION; +mysql> CREATE USER 'finley'@'%' IDENTIFIED BY 'some_pass'; +mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'%' WITH GRANT OPTION; +mysql> CREATE USER 'admin'@'localhost' IDENTIFIED BY 'admin_pass'; +mysql> GRANT RELOAD,PROCESS ON *.* TO 'admin'@'localhost'; +mysql> CREATE USER 'dummy'@'localhost'; +``` + +To see the privileges for an account, use `SHOW GRANTS`: + +```sql +mysql> SHOW GRANTS FOR 'admin'@'localhost'; ++-----------------------------------------------------+ +| Grants for admin@localhost | ++-----------------------------------------------------+ +| GRANT RELOAD, PROCESS ON *.* TO 'admin'@'localhost' | ++-----------------------------------------------------+ +``` + +## Remove user accounts + +To remove a user account, use the `DROP USER` statement: + +```sql +mysql> DROP USER 'jeffrey'@'localhost'; +``` + +## Reserved user accounts + +TiDB creates the `'root'@'%'` default account during the database 
initialization. + +## Set account resource limits + +Currently, TiDB does not support setting account resource limits. + +## Assign account passwords + +TiDB stores passwords in the `mysql.user` system table. Operations that assign or update passwords are permitted only to users with the `CREATE USER` privilege, or, alternatively, privileges for the `mysql` database (the `INSERT` privilege to create new accounts, or the `UPDATE` privilege to update existing accounts). + +To assign a password when you create a new account, use `CREATE USER` and include an `IDENTIFIED BY` clause: + +```sql +CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass'; +``` + +To assign or change a password for an existing account, use `SET PASSWORD FOR` or `ALTER USER`: + +```sql +SET PASSWORD FOR 'root'@'%' = 'xxx'; +``` + +Or: + +```sql +ALTER USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass'; +``` diff --git a/v2.0/sql/user-defined-variables.md b/v2.0/sql/user-defined-variables.md new file mode 100755 index 0000000000000..ae2cc0ba27766 --- /dev/null +++ b/v2.0/sql/user-defined-variables.md @@ -0,0 +1,132 @@ +--- +title: User-Defined Variables +summary: Learn how to use user-defined variables. +category: user guide +--- + +# User-Defined Variables + +The format of a user-defined variable is `@var_name`, where `var_name` consists of alphanumeric characters, `_`, and `$`. User-defined variable names are case-insensitive. + +User-defined variables are session-specific, which means a user variable defined by one client cannot be seen or used by other clients. + +You can use the `SET` statement to set a user variable: + +```sql +SET @var_name = expr [, @var_name = expr] ... +``` + +or + +```sql +SET @var_name := expr +``` + +For `SET`, you can use `=` or `:=` as the assignment operator. 
+ +For example: + +```sql +mysql> SET @a1=1, @a2=2, @a3:=4; +mysql> SELECT @a1, @a2, @a3, @a4 := @a1+@a2+@a3; ++------+------+------+--------------------+ +| @a1 | @a2 | @a3 | @a4 := @a1+@a2+@a3 | ++------+------+------+--------------------+ +| 1 | 2 | 4 | 7 | ++------+------+------+--------------------+ +``` + +Hexadecimal or bit values assigned to user variables are treated as binary strings in TiDB. To assign a hexadecimal or bit value as a number, use it in a numeric context. For example, add `0` or use `CAST(... AS UNSIGNED)`: + +```sql +mysql> SET @v1 = b'1000001'; +Query OK, 0 rows affected (0.00 sec) + +mysql> SET @v2 = b'1000001'+0; +Query OK, 0 rows affected (0.00 sec) + +mysql> SET @v3 = CAST(b'1000001' AS UNSIGNED); +Query OK, 0 rows affected (0.00 sec) + +mysql> SELECT @v1, @v2, @v3; ++------+------+------+ +| @v1 | @v2 | @v3 | ++------+------+------+ +| A | 65 | 65 | ++------+------+------+ +1 row in set (0.00 sec) +``` + +If you refer to a user-defined variable that has not been initialized, it has a value of NULL and a type of string. + +```sql +mysql> select @not_exist; ++------------+ +| @not_exist | ++------------+ +| NULL | ++------------+ +1 row in set (0.00 sec) +``` + +User-defined variables cannot be used as identifiers in SQL statements. 
For example: + +```sql +mysql> select * from t; ++------+ +| a | ++------+ +| 1 | ++------+ +1 row in set (0.00 sec) + +mysql> SET @col = "a"; +Query OK, 0 rows affected (0.00 sec) + +mysql> SELECT @col FROM t; ++------+ +| @col | ++------+ +| a | ++------+ +1 row in set (0.00 sec) + +mysql> SELECT `@col` FROM t; +ERROR 1054 (42S22): Unknown column '@col' in 'field list' + +mysql> SET @col = "`a`"; +Query OK, 0 rows affected (0.00 sec) + +mysql> SELECT @col FROM t; ++------+ +| @col | ++------+ +| `a` | ++------+ +1 row in set (0.01 sec) +``` + +An exception is when you are constructing a string for use as a prepared statement to execute later: + +```sql +mysql> PREPARE stmt FROM "SELECT @c FROM t"; +Query OK, 0 rows affected (0.00 sec) + +mysql> EXECUTE stmt; ++------+ +| @c | ++------+ +| a | ++------+ +1 row in set (0.01 sec) + +mysql> DEALLOCATE PREPARE stmt; +Query OK, 0 rows affected (0.00 sec) +``` + +For more information, see [User-Defined Variables in MySQL](https://dev.mysql.com/doc/refman/5.7/en/user-variables.html). \ No newline at end of file diff --git a/v2.0/sql/user-manual.md b/v2.0/sql/user-manual.md new file mode 100755 index 0000000000000..fa69f1d449366 --- /dev/null +++ b/v2.0/sql/user-manual.md @@ -0,0 +1,94 @@ +--- +title: TiDB User Guide +summary: Learn about the user guide of TiDB. +category: user guide +--- + +# TiDB User Guide + +TiDB supports the SQL-92 standard and is compatible with MySQL. To help you easily get started with TiDB, the TiDB user guide largely follows the MySQL documentation structure, with some TiDB-specific changes. 
+ +## TiDB server administration + +- [The TiDB Server](tidb-server.md) +- [The TiDB Command Options](server-command-option.md) +- [The TiDB Data Directory](tidb-server.md#tidb-data-directory) +- [The TiDB System Database](system-database.md) +- [The TiDB System Variables](variable.md) +- [The Proprietary System Variables and Syntax in TiDB](tidb-specific.md) +- [The TiDB Server Logs](tidb-server.md#tidb-server-logs) +- [The TiDB Access Privilege System](privilege.md) +- [TiDB User Account Management](user-account-management.md) +- [Use Encrypted Connections](encrypted-connections.md) + +## SQL optimization + +- [Understand the Query Execution Plan](understanding-the-query-execution-plan.md) +- [Introduction to Statistics](statistics.md) + +## Language structure + +- [Literal Values](literal-values.md) +- [Schema Object Names](schema-object-names.md) +- [Keywords and Reserved Words](keywords-and-reserved-words.md) +- [User-Defined Variables](user-defined-variables.md) +- [Expression Syntax](expression-syntax.md) +- [Comment Syntax](comment-syntax.md) + +## Globalization + +- [Character Set Support](character-set-support.md) +- [Character Set Configuration](character-set-configuration.md) +- [Time Zone](time-zone.md) + +## Data types + +- [Numeric Types](datatype.md#numeric-types) +- [Date and Time Types](datatype.md#date-and-time-types) +- [String Types](datatype.md#string-types) +- [JSON Types](datatype.md#json-types) +- [The ENUM data type](datatype.md#the-enum-data-type) +- [The SET Type](datatype.md#the-set-type) +- [Data Type Default Values](datatype.md#data-type-default-values) + +## Functions and operators + +- [Function and Operator Reference](functions-and-operators-reference.md) +- [Type Conversion in Expression Evaluation](type-conversion-in-expression-evaluation.md) +- [Operators](operators.md) +- [Control Flow Functions](control-flow-functions.md) +- [String Functions](string-functions.md) +- [Numeric Functions and 
Operators](numeric-functions-and-operators.md) +- [Date and Time Functions](date-and-time-functions.md) +- [Bit Functions and Operators](bit-functions-and-operators.md) +- [Cast Functions and Operators](cast-functions-and-operators.md) +- [Encryption and Compression Functions](encryption-and-compression-functions.md) +- [Information Functions](information-functions.md) +- [JSON Functions](json-functions.md) +- Functions Used with Global Transaction IDs [TBD] +- [Aggregate (GROUP BY) Functions](aggregate-group-by-functions.md) +- [Miscellaneous Functions](miscellaneous-functions.md) +- [Precision Math](precision-math.md) + +## SQL statement syntax + +- [Data Definition Statements](ddl.md) +- [Data Manipulation Statements](dml.md) +- [Transactions](transaction.md) +- [Database Administration Statements](admin.md) +- [Prepared SQL Statement Syntax](prepare.md) +- [Utility Statements](util.md) +- [TiDB SQL Syntax Diagram](https://pingcap.github.io/sqlgram/) + +## JSON functions and generated column + +- [JSON Functions and Generated Column](json-functions-generated-column.md) + +## Connectors and APIs + +- [Connectors and APIs](connection-and-APIs.md) + +## Compatibility with MySQL + +- [Compatibility with MySQL](mysql-compatibility.md) \ No newline at end of file diff --git a/v2.0/sql/util.md b/v2.0/sql/util.md new file mode 100755 index 0000000000000..f819183887dae --- /dev/null +++ b/v2.0/sql/util.md @@ -0,0 +1,99 @@ +--- +title: Utility Statements +summary: Learn how to use the utility statements, including the `DESCRIBE`, `EXPLAIN`, and `USE` statements. +category: user guide +--- + +# Utility Statements + +This document describes the utility statements, including the `DESCRIBE`, `EXPLAIN`, and `USE` statements. + +## `DESCRIBE` statement + +`DESCRIBE` is a synonym for `EXPLAIN`, and both can be abbreviated as `DESC`. For usage, see the `EXPLAIN` statement. 
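As a quick illustration of the two synonym forms (the table `t` here is hypothetical, not defined in this document), `DESC` accepts either a table name or an explainable statement:

```sql
-- Describe the columns of a table:
DESC t;

-- Show the execution plan of a statement (same as EXPLAIN):
DESC SELECT * FROM t WHERE a = 1;
```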
+ +## `EXPLAIN` statement + +```sql +{EXPLAIN | DESCRIBE | DESC} + tbl_name [col_name] + +{EXPLAIN | DESCRIBE | DESC} + [explain_type] + explainable_stmt + +explain_type: + FORMAT = format_name + +format_name: + "DOT" + +explainable_stmt: { + SELECT statement + | DELETE statement + | INSERT statement + | REPLACE statement + | UPDATE statement +} +``` + +For more information about the `EXPLAIN` statement, see [Understand the Query Execution Plan](understanding-the-query-execution-plan.md). + +In addition to the MySQL standard result format, TiDB also supports DotGraph output, for which you need to specify `FORMAT = "dot"` as in the following example: + +```sql +TiDB > create table t(a bigint, b bigint); +TiDB > desc format = "dot" select A.a, B.b from t A join t B on A.a > B.b where A.a < 10; ++--------------------------------------------------------------+ +| dot contents | ++--------------------------------------------------------------+ +| +digraph HashRightJoin_7 { +subgraph cluster7{ +node [style=filled, 
color=lightgrey] +color=black +label = "root" +"HashRightJoin_7" -> "TableReader_10" +"HashRightJoin_7" -> "TableReader_12" +} +subgraph cluster9{ +node [style=filled, color=lightgrey] +color=black +label = "cop" +"Selection_9" -> "TableScan_8" +} +subgraph cluster11{ +node [style=filled, color=lightgrey] +color=black +label = "cop" +"TableScan_11" +} +"TableReader_10" -> "Selection_9" +"TableReader_12" -> "TableScan_11" +} + | ++--------------------------------------------------------------+ +1 row in set (0.00 sec) +``` + +If the `dot` program (in the `graphviz` package) is installed on your computer, you can generate a PNG file using the following command, where `xx.dot` is the result returned by the above statement: + +```bash +dot xx.dot -T png -O +``` + +If the `dot` program is not installed on your computer, copy the result to [this website](http://www.webgraphviz.com/) to get a tree diagram: + +![Explain Dot](../media/explain_dot.png) + +## `USE` statement + +```sql +USE db_name +``` + +The `USE` statement switches the default database. If a table name in a SQL statement does not explicitly specify a database, the default database is used. diff --git a/v2.0/sql/variable.md b/v2.0/sql/variable.md new file mode 100755 index 0000000000000..0cdc3414f07d9 --- /dev/null +++ b/v2.0/sql/variable.md @@ -0,0 +1,49 @@ +--- +title: The System Variables +summary: Learn how to use the system variables in TiDB. 
+category: user guide +--- + +# The System Variables + +The system variables in MySQL are system parameters that modify the runtime operation of the database. These variables have two types of scope, Global Scope and Session Scope. TiDB supports all the system variables in MySQL 5.7. Most of the variables are supported only for compatibility and do not affect runtime behaviors. + +## Set the system variables + +You can use the [`SET`](admin.md#the-set-statement) statement to change the value of the system variables. Before you change a variable, consider its scope. For more information, see [MySQL Dynamic System Variables](https://dev.mysql.com/doc/refman/5.7/en/dynamic-system-variables.html). + +### Set Global variables + +Add the `GLOBAL` keyword before the variable or use `@@global.` as the modifier: + +```sql +SET GLOBAL autocommit = 1; +SET @@global.autocommit = 1; +``` + +### Set Session variables + +Add the `SESSION` keyword before the variable, use `@@session.` as the modifier, or use no modifier: + +```sql +SET SESSION autocommit = 1; +SET @@session.autocommit = 1; +SET @@autocommit = 1; +``` + +> **Note:** `LOCAL` and `@@local.` are synonyms for `SESSION` and `@@session.`. + +## The fully supported MySQL system variables in TiDB + +The following MySQL system variables are fully supported in TiDB and have the same behaviors as in MySQL. + +| Name | Scope | Description | +| ---------------- | -------- | -------------------------------------------------- | +| autocommit | GLOBAL \| SESSION | whether to automatically commit a transaction | +| sql_mode | GLOBAL \| SESSION | supports a subset of the MySQL SQL modes | +| time_zone | GLOBAL \| SESSION | the time zone of the database | +| tx_isolation | GLOBAL \| SESSION | the isolation level of a transaction | + +## The proprietary system variables and syntaxes in TiDB + +See [The Proprietary System Variables and Syntax in TiDB](tidb-specific.md). 
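To verify a change at each scope, you can read a variable back with `SELECT` or `SHOW`. A brief sketch (the values returned depend on your configuration):

```sql
-- Read the global and session values of a variable:
SELECT @@global.autocommit, @@session.autocommit;

-- Equivalent SHOW forms:
SHOW GLOBAL VARIABLES LIKE 'autocommit';
SHOW VARIABLES LIKE 'time_zone';
```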
\ No newline at end of file diff --git a/v2.0/templates/copyright.tex b/v2.0/templates/copyright.tex new file mode 100755 index 0000000000000..95df173bc06c7 --- /dev/null +++ b/v2.0/templates/copyright.tex @@ -0,0 +1,4 @@ + +\noindent \rule{\textwidth}{1pt} + +©2017 PingCAP All Rights Reserved. \ No newline at end of file diff --git a/v2.0/templates/template.tex b/v2.0/templates/template.tex new file mode 100755 index 0000000000000..014fb4c38f16d --- /dev/null +++ b/v2.0/templates/template.tex @@ -0,0 +1,278 @@ +\documentclass[$if(fontsize)$$fontsize$,$endif$$if(lang)$$lang$,$endif$$if(papersize)$$papersize$,$endif$$for(classoption)$$classoption$$sep$,$endfor$]{$documentclass$} +$if(fontfamily)$ +\usepackage{$fontfamily$} +$else$ +\usepackage{lmodern} +$endif$ +$if(linestretch)$ +\usepackage{setspace} +\setstretch{$linestretch$} +$endif$ +\usepackage{amssymb,amsmath} +\usepackage{ifxetex,ifluatex} +\usepackage{fixltx2e} % provides \textsubscript +\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex + \usepackage[T1]{fontenc} + \usepackage[utf8]{inputenc} +$if(euro)$ + \usepackage{eurosym} +$endif$ +\else % if luatex or xelatex + \ifxetex + \usepackage{mathspec} + \usepackage{xltxtra,xunicode} + $if(CJKmainfont)$ + \usepackage{xeCJK} + \setCJKmainfont{$CJKmainfont$} + $endif$ + \else + \usepackage{fontspec} + \fi + \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} + \newcommand{\euro}{€} +$if(mainfont)$ + \setmainfont{$mainfont$} +$endif$ +$if(sansfont)$ + \setsansfont{$sansfont$} +$endif$ +$if(monofont)$ + \setmonofont[Mapping=tex-ansi]{$monofont$} +$endif$ +$if(mathfont)$ + \setmathfont(Digits,Latin,Greek){$mathfont$} +$endif$ +\fi +% use upquote if available, for straight quotes in verbatim environments +\IfFileExists{upquote.sty}{\usepackage{upquote}}{} +% use microtype if available +\IfFileExists{microtype.sty}{% +\usepackage{microtype} +\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts +}{} +$if(geometry)$ 
+\usepackage[$for(geometry)$$geometry$$sep$,$endfor$]{geometry} +$endif$ +$if(natbib)$ +\usepackage{natbib} +\bibliographystyle{$if(biblio-style)$$biblio-style$$else$plainnat$endif$} +$endif$ +$if(biblatex)$ +\usepackage{biblatex} +$if(biblio-files)$ +\bibliography{$biblio-files$} +$endif$ +$endif$ +$if(listings)$ + +\usepackage{xcolor} +\usepackage{listings} +\lstset{ + basicstyle=\ttfamily, + keywordstyle=\color[rgb]{0.13,0.29,0.53}\bfseries, + stringstyle=\color[rgb]{0.31,0.60,0.02}, + commentstyle=\color[rgb]{0.56,0.35,0.01}\itshape, + numberstyle=\footnotesize, + frame=single, + showspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces' + showstringspaces=false, % underline spaces within strings only + breaklines=true, + postbreak=\raisebox{0ex}[0ex][0ex]{\ensuremath{\color{gray}\hookrightarrow\space}} +} + +$endif$ +$if(lhs)$ +\lstnewenvironment{code}{\lstset{language=Haskell,basicstyle=\small\ttfamily}}{} +$endif$ +$if(highlighting-macros)$ +$highlighting-macros$ +$endif$ +$if(verbatim-in-note)$ +\usepackage{fancyvrb} +$endif$ +$if(tables)$ +\usepackage{longtable,booktabs} +$endif$ +$if(graphics)$ +\usepackage{graphicx} +\makeatletter +\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} +\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} +\makeatother +% Scale images if necessary, so that they will not overflow the page +% margins by default, and it is still possible to overwrite the defaults +% using explicit options in \includegraphics[width, height, ...]{} +\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} +$endif$ +\ifxetex + \usepackage[setpagesize=false, % page size defined by xetex + unicode=false, % unicode breaks when used with xetex + xetex]{hyperref} +\else + \usepackage[unicode=true]{hyperref} +\fi +\hypersetup{breaklinks=true, + bookmarks=true, + pdfauthor={$author-meta$}, + pdftitle={$title-meta$}, + colorlinks=true, + 
citecolor=$if(citecolor)$$citecolor$$else$blue$endif$, + urlcolor=$if(urlcolor)$$urlcolor$$else$blue$endif$, + linkcolor=$if(linkcolor)$$linkcolor$$else$magenta$endif$, + pdfborder={0 0 0}} +\urlstyle{same} % don't use monospace font for urls +$if(links-as-notes)$ +% Make links footnotes instead of hotlinks: +\renewcommand{\href}[2]{#2\footnote{\url{#1}}} +$endif$ +$if(strikeout)$ +\usepackage[normalem]{ulem} +% avoid problems with \sout in headers with hyperref: +\pdfstringdefDisableCommands{\renewcommand{\sout}{}} +$endif$ +\setlength{\parindent}{0pt} +\setlength{\parskip}{6pt plus 2pt minus 1pt} +\setlength{\emergencystretch}{3em} % prevent overfull lines +\providecommand{\tightlist}{% + \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} +$if(numbersections)$ +\setcounter{secnumdepth}{5} +$else$ +\setcounter{secnumdepth}{0} +$endif$ +$if(verbatim-in-note)$ +\VerbatimFootnotes % allows verbatim text in footnotes +$endif$ +$if(lang)$ +\ifxetex + \usepackage{polyglossia} + \setmainlanguage{$mainlang$} +\else + \usepackage[$lang$]{babel} +\fi +$endif$ + +$if(title)$ +\title{$title$$if(subtitle)$\\\vspace{0.5em}{\large $subtitle$}$endif$} +$endif$ +$if(author)$ +\author{$for(author)$$author$$sep$ \and $endfor$} +$endif$ +\date{$date$} +$for(header-includes)$ +$header-includes$ +$endfor$ + +% quote style +% http://tex.stackexchange.com/questions/179982/add-a-black-border-to-block-quotations +\usepackage{framed} +% \usepackage{xcolor} +\let\oldquote=\quote +\let\endoldquote=\endquote +\colorlet{shadecolor}{orange!15} +\renewenvironment{quote}{\begin{shaded*}\begin{oldquote}}{\end{oldquote}\end{shaded*}} + +% https://www.zhihu.com/question/25082703/answer/30038248 +% no cross chapter +\usepackage[section]{placeins} +% no float everywhere +\usepackage{float} +\floatplacement{figure}{H} + +% indent the first paragraph of each section, as is customary in Chinese typesetting +\usepackage{indentfirst} +\setlength{\parindent}{2em} + +\renewcommand{\contentsname}{Table of Contents} +\renewcommand\figurename{Figure} + +% fix 
overlap toc number and title +% http://blog.csdn.net/golden1314521/article/details/39926135 +\usepackage{titlesec} +\usepackage{titletoc} +% \titlecontents{section name}[left margin]{title format}{title label}{unnumbered title format}{leader and page number}[bottom margin] +% fix overlap +\titlecontents{subsection} + [4em] + {}% + {\contentslabel{3em}}% + {}% + {\titlerule*[0.5pc]{$$\cdot$$}\contentspage\hspace*{0em}}% + +\titlecontents{subsubsection} + [7em] + {}% + {\contentslabel{3.5em}}% + {}% + {\titlerule*[0.5pc]{$$\cdot$$}\contentspage\hspace*{0em}}% + +\usepackage[all]{background} +% \backgroundsetup{contents=PingCAP Inc.,color=blue,opacity=0.2} +\backgroundsetup{contents=\includegraphics{media/pingcap-logo}, + placement=top,scale=0.2,hshift=1000pt,vshift=-150pt, + opacity=0.9,angle=0} + +% avoid level-4, 5 heading to be connected with following content +% https://github.com/jgm/pandoc/issues/1658 +\let\oldparagraph\paragraph +\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} +\let\oldsubparagraph\subparagraph +\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} + +\begin{document} + +% no bg at title page +\NoBgThispage +$if(title)$ +\maketitle +$endif$ +$if(abstract)$ +\begin{abstract} +$abstract$ +\end{abstract} +$endif$ + +$for(include-before)$ +$include-before$ + +$endfor$ +$if(toc)$ +{ +\hypersetup{linkcolor=black} +\setcounter{tocdepth}{$toc-depth$} +\tableofcontents +} +$endif$ +$if(lot)$ +\listoftables +$endif$ +$if(lof)$ +\listoffigures +$endif$ + +\newpage + +$body$ + +$if(natbib)$ +$if(biblio-files)$ +$if(biblio-title)$ +$if(book-class)$ +\renewcommand\bibname{$biblio-title$} +$else$ +\renewcommand\refname{$biblio-title$} +$endif$ +$endif$ +\bibliography{$biblio-files$} + +$endif$ +$endif$ +$if(biblatex)$ +\printbibliography$if(biblio-title)$[title=$biblio-title$]$endif$ + +$endif$ +$for(include-after)$ +$include-after$ + +$endfor$ +\end{document} diff --git a/v2.0/tikv/deploy-tikv-docker-compose.md b/v2.0/tikv/deploy-tikv-docker-compose.md new file mode 100755 index 0000000000000..b534f4ce92067 --- 
/dev/null +++ b/v2.0/tikv/deploy-tikv-docker-compose.md @@ -0,0 +1,73 @@ +--- +title: Install and Deploy TiKV Using Docker Compose +summary: Use Docker Compose to quickly deploy a TiKV testing cluster on a single machine. +category: operations +--- + +# Install and Deploy TiKV Using Docker Compose + +This guide describes how to quickly deploy a TiKV testing cluster using [Docker Compose](https://github.com/pingcap/tidb-docker-compose/) on a single machine. + +> **Note:** Currently, this installation method only supports the Linux system. + +## Prerequisites + +Make sure you have installed the following items on your machine: + +- Docker (17.06.0 or later) and Docker Compose + + ```bash + sudo yum install docker docker-compose + ``` + +- Helm + + ```bash + curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash + ``` + +- Git + + ```bash + sudo yum install git + ``` + +## Install and deploy + +1. Download `tidb-docker-compose`. + + ```bash + git clone https://github.com/pingcap/tidb-docker-compose.git + ``` + +2. Edit the `compose/values.yaml` file to set `networkMode` to `host` and comment out the TiDB section. + + ```bash + cd tidb-docker-compose/compose + ``` + + In `values.yaml`: + + ```yaml + networkMode: host + ``` + +3. Generate the `generated-docker-compose.yml` file. + + ```bash + helm template compose > generated-docker-compose.yml + ``` + +4. Create and start the cluster using the `generated-docker-compose.yml` file. + + ```bash + docker-compose -f generated-docker-compose.yml pull # Get the latest Docker images + docker-compose -f generated-docker-compose.yml up -d + ``` + +You can check whether the TiKV cluster has been successfully deployed using the following command: + +```bash +curl localhost:2379/pd/api/v1/stores +``` + +If the state of all the TiKV instances is "Up", you have successfully deployed a TiKV cluster. + +## What's next? + +If you want to try the Go client, see [Try Two Types of APIs](go-client-api.md). 
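If you prefer a quick one-liner check, the following sketch filters the store states out of the PD response (this assumes the `grep` utility is available and that the PD stores API returns a `state_name` field per store):

```bash
# Print only the state of each store; expect every line to read "Up".
curl -s localhost:2379/pd/api/v1/stores | grep -o '"state_name": *"[^"]*"'
```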
\ No newline at end of file diff --git a/v2.0/tikv/deploy-tikv-using-ansible.md b/v2.0/tikv/deploy-tikv-using-ansible.md new file mode 100755 index 0000000000000..e7b705a1cd92d --- /dev/null +++ b/v2.0/tikv/deploy-tikv-using-ansible.md @@ -0,0 +1,565 @@ +--- +title: Install and Deploy TiKV Using Ansible +summary: Use TiDB-Ansible to deploy a TiKV cluster on multiple nodes. +category: user guide +--- + +# Install and Deploy TiKV Using Ansible + +This guide describes how to install and deploy TiKV using Ansible. Ansible is an IT automation tool that can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. + +[TiDB-Ansible](https://github.com/pingcap/tidb-ansible) is a TiDB cluster deployment tool developed by PingCAP, based on Ansible playbook. TiDB-Ansible enables you to quickly deploy a new TiKV cluster which includes PD, TiKV, and the cluster monitoring modules. + +> **Note:** For the production environment, it is recommended to use TiDB-Ansible to deploy your TiDB cluster. If you only want to try TiKV out and explore the features, see [Install and Deploy TiKV using Docker Compose](deploy-tikv-docker-compose.md) on a single machine. + +## Prepare + +Before you start, make sure you have: + +1. Several target machines that meet the following requirements: + + - 4 or more machines + + A standard TiKV cluster contains 6 machines. You can use 4 machines for testing. + + - CentOS 7.3 (64 bit) or later with Python 2.7 installed, x86_64 architecture (AMD64) + - Network between machines + + > **Note:** When you deploy TiKV using Ansible, use SSD disks for the data directory of TiKV and PD nodes. Otherwise, it cannot pass the check. For more details, see [Software and Hardware Requirements](../op-guide/recommendation.md). + +2. A Control Machine that meets the following requirements: + + > **Note:** The Control Machine can be one of the target machines. 
+ + - CentOS 7.3 (64 bit) or later with Python 2.7 installed + - Access to the Internet + - Git installed + +## Step 1: Install system dependencies on the Control Machine + +Log in to the Control Machine using the `root` user account, and run the corresponding command according to your operating system. + +- If you use a Control Machine installed with CentOS 7, run the following command: + + ``` + # yum -y install epel-release git curl sshpass + # yum -y install python-pip + ``` + +- If you use a Control Machine installed with Ubuntu, run the following command: + + ``` + # apt-get -y install git curl sshpass python-pip + ``` + +## Step 2: Create the `tidb` user on the Control Machine and generate the SSH key + +Make sure you have logged in to the Control Machine using the `root` user account, and then run the following command. + +1. Create the `tidb` user. + + ``` + # useradd -m -d /home/tidb tidb + ``` + +2. Set a password for the `tidb` user account. + + ``` + # passwd tidb + ``` + +3. Configure sudo without password for the `tidb` user account by adding `tidb ALL=(ALL) NOPASSWD: ALL` to the end of the sudo file: + + ``` + # visudo + tidb ALL=(ALL) NOPASSWD: ALL + ``` +4. Generate the SSH key. + + Execute the `su` command to switch the user from `root` to `tidb`. Create the SSH key for the `tidb` user account and hit the Enter key when `Enter passphrase` is prompted. After successful execution, the SSH private key file is `/home/tidb/.ssh/id_rsa`, and the SSH public key file is `/home/tidb/.ssh/id_rsa.pub`. + + ``` + # su - tidb + $ ssh-keygen -t rsa + Generating public/private rsa key pair. + Enter file in which to save the key (/home/tidb/.ssh/id_rsa): + Created directory '/home/tidb/.ssh'. + Enter passphrase (empty for no passphrase): + Enter same passphrase again: + Your identification has been saved in /home/tidb/.ssh/id_rsa. + Your public key has been saved in /home/tidb/.ssh/id_rsa.pub. 
+ The key fingerprint is: + SHA256:eIBykszR1KyECA/h0d7PRKz4fhAeli7IrVphhte7/So tidb@172.16.10.49 + The key's randomart image is: + +---[RSA 2048]----+ + |=+o+.o. | + |o=o+o.oo | + | .O.=.= | + | . B.B + | + |o B * B S | + | * + * + | + | o + . | + | o E+ . | + |o ..+o. | + +----[SHA256]-----+ + ``` + +## Step 3: Download TiDB-Ansible to the Control Machine + +1. Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory. + +2. Download the corresponding TiDB-Ansible version from the [TiDB-Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`. + + - Download the 2.0 GA version: + + ```bash + $ git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git + ``` + + - Download the master version: + + ```bash + $ git clone https://github.com/pingcap/tidb-ansible.git + ``` + + > **Note:** It is required to download `tidb-ansible` to the `/home/tidb` directory using the `tidb` user account. If you download it to the `/root` directory, a privilege issue occurs. + + If you have questions regarding which version to use, email to info@pingcap.com for more information or [file an issue](https://github.com/pingcap/tidb-ansible/issues/new). + +## Step 4: Install Ansible and its dependencies on the Control Machine + +Make sure you have logged in to the Control Machine using the `tidb` user account. + +It is required to use `pip` to install Ansible and its dependencies, otherwise a compatibility issue occurs. Currently, the TiDB 2.0 GA version and the master version are compatible with Ansible 2.4 and Ansible 2.5. + +1. Install Ansible and the dependencies on the Control Machine: + + ```bash + $ cd /home/tidb/tidb-ansible + $ sudo pip install -r ./requirements.txt + ``` + + Ansible and the related dependencies are in the `tidb-ansible/requirements.txt` file. + +2. 
View the version of Ansible:
+
+    ```bash
+    $ ansible --version
+    ansible 2.5.0
+    ```
+
+## Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine
+
+Make sure you have logged in to the Control Machine using the `tidb` user account.
+
+1. Add the IPs of your target machines to the `[servers]` section of the `hosts.ini` file.
+
+    ```bash
+    $ cd /home/tidb/tidb-ansible
+    $ vi hosts.ini
+    [servers]
+    172.16.10.1
+    172.16.10.2
+    172.16.10.3
+    172.16.10.4
+    172.16.10.5
+    172.16.10.6
+
+    [all:vars]
+    username = tidb
+    ntp_server = pool.ntp.org
+    ```
+
+2. Run the following command and enter the `root` user password of your target machines.
+
+    ```bash
+    $ ansible-playbook -i hosts.ini create_users.yml -u root -k
+    ```
+
+    This step creates the `tidb` user account on the target machines, and configures the sudo rules and the SSH mutual trust between the Control Machine and the target machines.
+
+> **Note:** To configure the SSH mutual trust and sudo without password manually, see [How to manually configure the SSH mutual trust and sudo without password](../op-guide/ansible-deployment.md#how-to-manually-configure-the-ssh-mutual-trust-and-sudo-without-password).
+
+## Step 6: Install the NTP service on the target machines
+
+> **Note:** If the time and time zone of all your target machines are the same, and the NTP service is running and synchronizing time normally, you can skip this step. See [How to check whether the NTP service is normal](../op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal).
+
+Make sure you have logged in to the Control Machine using the `tidb` user account, and run the following command:
+
+```bash
+$ cd /home/tidb/tidb-ansible
+$ ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b
+```
+
+The NTP service is installed and started using the software repository that comes with the system on the target machines. The default NTP server list in the installation package is used.
The related `server` parameter is in the `/etc/ntp.conf` configuration file.
+
+To make the NTP service start synchronizing as soon as possible, the system executes the `ntpdate` command to set the local date and time by polling `ntp_server` in the `hosts.ini` file. The default server is `pool.ntp.org`, and you can also replace it with your own NTP server.
+
+## Step 7: Configure the CPUfreq governor mode on the target machine
+
+For details about CPUfreq, see [the CPUfreq Governor documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/power_management_guide/cpufreq_governors).
+
+Set the CPUfreq governor mode to `performance` to make full use of CPU performance.
+
+### Check the governor modes supported by the system
+
+You can run the `cpupower frequency-info --governors` command to check the governor modes that the system supports:
+
+```
+# cpupower frequency-info --governors
+analyzing CPU 0:
+  available cpufreq governors: performance powersave
+```
+
+In the above example, the system supports the `performance` and `powersave` modes.
+
+> **Note:** If the command returns "Not Available", as shown below, the current system does not support CPUfreq configuration and you can skip this step.
+
+```
+# cpupower frequency-info --governors
+analyzing CPU 0:
+  available cpufreq governors: Not Available
+```
+
+### Check the current governor mode
+
+You can run the `cpupower frequency-info --policy` command to check the current CPUfreq governor mode:
+
+```
+# cpupower frequency-info --policy
+analyzing CPU 0:
+  current policy: frequency should be within 1.20 GHz and 3.20 GHz.
+                  The governor "powersave" may decide which speed to use
+                  within this range.
+```
+
+In this example, the current mode is `powersave`.
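If you manage many target machines, you can script this check; the following is a minimal sketch that parses the governor name out of the `cpupower` output shown above (the quoting pattern it matches is an assumption about your `cpupower` version's exact output wording):

```shell
# Extract the governor name from sample `cpupower frequency-info --policy` output.
# The `The governor "..."` pattern is an assumption; verify it against your cpupower version.
policy_output='analyzing CPU 0:
  current policy: frequency should be within 1.20 GHz and 3.20 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.'

governor=$(printf '%s\n' "$policy_output" | sed -n 's/.*The governor "\([^"]*\)".*/\1/p')
echo "$governor"
```

On a live machine you would pipe `cpupower frequency-info --policy` directly into the same `sed` filter, and change the mode only when the result is not `performance`.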
+
+### Change the governor mode
+
+- You can run the following command to change the current mode to `performance`:
+
+    ```
+    # cpupower frequency-set --governor performance
+    ```
+
+- You can also run the following command to set the mode on the target machines in batches:
+
+    ```
+    $ ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -u tidb -b
+    ```
+
+## Step 8: Mount the data disk ext4 filesystem with options on the target machines
+
+Log in to the target machines using the `root` user account.
+
+Format your data disks to the ext4 filesystem and mount the filesystem with the `nodelalloc` and `noatime` options. It is required to mount the filesystem with the `nodelalloc` option, or else the Ansible deployment cannot pass the check. The `noatime` option is optional.
+
+> **Note:** If your data disks have already been formatted to ext4 and mounted, but without these options, unmount them by running the `# umount /dev/nvme0n1` command, then skip directly to editing the `/etc/fstab` file and remount the filesystem with the required options.
+
+Take the `/dev/nvme0n1` data disk as an example:
+
+1. View the data disk.
+
+    ```
+    # fdisk -l
+    Disk /dev/nvme0n1: 1000 GB
+    ```
+
+2. Create the partition table.
+
+    ```
+    # parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
+    ```
+
+3. Format the data disk to the ext4 filesystem.
+
+    ```
+    # mkfs.ext4 /dev/nvme0n1
+    ```
+
+4. View the partition UUID of the data disk.
+
+    In this example, the UUID of `nvme0n1` is `c51eb23b-195c-4061-92a9-3fad812cc12f`.
+
+    ```
+    # lsblk -f
+    NAME    FSTYPE LABEL UUID                                 MOUNTPOINT
+    sda
+    ├─sda1  ext4         237b634b-a565-477b-8371-6dff0c41f5ab /boot
+    ├─sda2  swap         f414c5c0-f823-4bb1-8fdf-e531173a72ed
+    └─sda3  ext4         547909c1-398d-4696-94c6-03e43e317b60 /
+    sr0
+    nvme0n1 ext4         c51eb23b-195c-4061-92a9-3fad812cc12f
+    ```
+
+5. Edit the `/etc/fstab` file and add the mount options.
+
+    ```
+    # vi /etc/fstab
+    UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data1 ext4 defaults,nodelalloc,noatime 0 2
+    ```
+
+6. Mount the data disk.
+
+    ```
+    # mkdir /data1
+    # mount -a
+    ```
+
+7. Check using the following command.
+
+    ```
+    # mount -t ext4
+    /dev/nvme0n1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
+    ```
+
+    If the filesystem is ext4 and `nodelalloc` is included in the mount options, you have successfully mounted the data disk ext4 filesystem with options on the target machines.
+
+## Step 9: Edit the `inventory.ini` file to orchestrate the TiKV cluster
+
+Edit the `tidb-ansible/inventory.ini` file to orchestrate the TiKV cluster. The standard TiKV cluster contains 6 machines: 3 PD nodes and 3 TiKV nodes.
+
+- Deploy at least 3 instances for TiKV.
+- Do not deploy TiKV together with PD on the same machine.
+- Use the first PD machine as the monitoring machine.
+
+> **Note:**
+>
+> - Leave `[tidb_servers]` in the `inventory.ini` file empty, because this deployment is for the TiKV cluster, not the TiDB cluster.
+> - It is required to use the internal IP address to deploy. If the SSH port of the target machines is not the default port 22, you need to add the `ansible_port` variable. For example, `TiDB1 ansible_host=172.16.10.1 ansible_port=5555`.
+
+You can choose one of the following two types of cluster topology according to your scenario:
+
+- [The cluster topology of a single TiKV instance on each TiKV node](#option-1-use-the-cluster-topology-of-a-single-tikv-instance-on-each-tikv-node)
+
+    In most cases, it is recommended to deploy one TiKV instance on each TiKV node for better performance. However, if the CPU and memory of your TiKV machines are much better than required in [Hardware and Software Requirements](../op-guide/recommendation.md), and you have more than two disks in one node or the capacity of one SSD is larger than 2 TB, you can deploy no more than 2 TiKV instances on a single TiKV node.
+ +- [The cluster topology of multiple TiKV instances on each TiKV node](#option-2-use-the-cluster-topology-of-multiple-tikv-instances-on-each-tikv-node) + +### Option 1: Use the cluster topology of a single TiKV instance on each TiKV node + +| Name | Host IP | Services | +|-------|-------------|----------| +| node1 | 172.16.10.1 | PD1 | +| node2 | 172.16.10.2 | PD2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1 | +| node5 | 172.16.10.5 | TiKV2 | +| node6 | 172.16.10.6 | TiKV3 | + +Edit the `inventory.ini` file as follows: + +```ini +[tidb_servers] + +[pd_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 + +[tikv_servers] +172.16.10.4 +172.16.10.5 +172.16.10.6 + +[monitoring_servers] +172.16.10.1 + +[grafana_servers] +172.16.10.1 + +[monitored_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 +172.16.10.4 +172.16.10.5 +172.16.10.6 +``` + +### Option 2: Use the cluster topology of multiple TiKV instances on each TiKV node + +Take two TiKV instances on each TiKV node as an example: + +| Name | Host IP | Services | +|-------|-------------|------------------| +| node1 | 172.16.10.1 | PD1 | +| node2 | 172.16.10.2 | PD2 | +| node3 | 172.16.10.3 | PD3 | +| node4 | 172.16.10.4 | TiKV1-1, TiKV1-2 | +| node5 | 172.16.10.5 | TiKV2-1, TiKV2-2 | +| node6 | 172.16.10.6 | TiKV3-1, TiKV3-2 | + +```ini +[tidb_servers] + +[pd_servers] +172.16.10.1 +172.16.10.2 +172.16.10.3 + +[tikv_servers] +TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv1" +TiKV1-2 ansible_host=172.16.10.4 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv1" +TiKV2-1 ansible_host=172.16.10.5 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv2" +TiKV2-2 ansible_host=172.16.10.5 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv2" +TiKV3-1 ansible_host=172.16.10.6 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv3" +TiKV3-2 ansible_host=172.16.10.6 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv3" + 
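+# Each alias (TiKV1-1, TiKV1-2, ...) maps one TiKV instance to a host via
+# ansible_host, with a separate deploy_dir and tikv_port per instance. The
+# shared labels="host=tikvN" value identifies the physical machine, so that PD
+# does not place multiple replicas of one Region on the same host.
+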
+[monitoring_servers]
+172.16.10.1
+
+[grafana_servers]
+172.16.10.1
+
+[monitored_servers]
+172.16.10.1
+172.16.10.2
+172.16.10.3
+172.16.10.4
+172.16.10.5
+172.16.10.6
+
+...
+
+[pd_servers:vars]
+location_labels = ["host"]
+```
+
+Edit the parameters in the service configuration file:
+
+1. For the cluster topology of multiple TiKV instances on each TiKV node, you need to edit the `block-cache-size` parameter in `tidb-ansible/conf/tikv.yml`:
+
+    - `rocksdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 30%
+    - `rocksdb writecf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 45%
+    - `rocksdb lockcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum)
+    - `raftdb defaultcf block-cache-size(GB)`: MEM * 80% / TiKV instance number * 2.5% (128 MB at a minimum)
+
+    For example, on a machine with 128 GB of memory that runs 2 TiKV instances, the `rocksdb defaultcf block-cache-size` of each instance is 128 * 80% / 2 * 30% ≈ 15 GB.
+
+2. For the cluster topology of multiple TiKV instances on each TiKV node, you need to edit the `high-concurrency`, `normal-concurrency` and `low-concurrency` parameters in the `tidb-ansible/conf/tikv.yml` file:
+
+    ```
+    readpool:
+      coprocessor:
+        # Notice: if CPU_NUM > 8, default thread pool size for coprocessors
+        # will be set to CPU_NUM * 0.8.
+        # high-concurrency: 8
+        # normal-concurrency: 8
+        # low-concurrency: 8
+    ```
+
+    Recommended configuration: `number of instances * parameter value = CPU_Vcores * 0.8`. For example, if the node has 40 CPU vcores and runs 2 TiKV instances, set each of the three parameters to 40 * 0.8 / 2 = 16.
+
+3. If multiple TiKV instances are deployed on the same physical disk, edit the `capacity` parameter in `conf/tikv.yml`:
+
+    - `capacity`: total disk capacity / number of TiKV instances (the unit is GB; for example, 2000 / 2 = 1000 for two instances sharing a 2 TB disk)
+
+## Step 10: Edit variables in the `inventory.ini` file
+
+1. Edit the `deploy_dir` variable to configure the deployment directory.
+
+    The global variable is set to `/home/tidb/deploy` by default, and it applies to all services. If the data disk is mounted on the `/data1` directory, you can set it to `/data1/deploy`.
For example:
+
+    ```bash
+    ## Global variables
+    [all:vars]
+    deploy_dir = /data1/deploy
+    ```
+
+    **Note:** To set the deployment directory separately for a service, you can configure the host variable when configuring the service host list in the `inventory.ini` file. In this case, it is required to add an alias in the first column to avoid confusion when multiple services are deployed on the same machine.
+
+    ```bash
+    TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy
+    ```
+
+2. Set the `deploy_without_tidb` variable to `True`.
+
+    ```bash
+    deploy_without_tidb = True
+    ```
+
+> **Note:** If you need to edit other variables, see [the variable description table](../op-guide/ansible-deployment.md#edit-other-variables-optional).
+
+## Step 11: Deploy the TiKV cluster
+
+When `ansible-playbook` runs a Playbook, the default concurrency is 5. If you are deploying to many target machines, you can increase the concurrency by adding the `-f` parameter, for example, `ansible-playbook deploy.yml -f 10`.
+
+The following example uses `tidb` as the user who runs the service.
+
+1. Check the `tidb-ansible/inventory.ini` file to make sure `ansible_user = tidb`.
+
+    ```bash
+    ## Connection
+    # ssh via normal user
+    ansible_user = tidb
+    ```
+
+2. Make sure the SSH mutual trust and sudo without password are successfully configured.
+
+    - Run the following command. If all servers return `tidb`, the SSH mutual trust is successfully configured:
+
+        ```bash
+        ansible -i inventory.ini all -m shell -a 'whoami'
+        ```
+
+    - Run the following command. If all servers return `root`, sudo without password for the `tidb` user is successfully configured:
+
+        ```bash
+        ansible -i inventory.ini all -m shell -a 'whoami' -b
+        ```
+
+3. Download the TiKV binary to the Control Machine.
+
+    ```bash
+    ansible-playbook local_prepare.yml
+    ```
+
+4. Initialize the system environment and modify the kernel parameters.
+
+    ```bash
+    ansible-playbook bootstrap.yml
+    ```
+
+5. Deploy the TiKV cluster.
+ + ```bash + ansible-playbook deploy.yml + ``` + +6. Start the TiKV cluster. + + ```bash + ansible-playbook start.yml + ``` + +You can check whether the TiKV cluster has been successfully deployed using the following command: + +```bash +curl 172.16.10.1:2379/pd/api/v1/stores +``` + +## Stop the TiKV cluster + +If you want to stop the TiKV cluster, run the following command: + +```bash +ansible-playbook stop.yml +``` + +## Destroy the TiKV cluster + +> **Warning:** Before you clean the cluster data or destroy the TiKV cluster, make sure you do not need it any more. + +- If you do not need the data any more, you can clean up the data for test using the following command: + + ``` + ansible-playbook unsafe_cleanup_data.yml + ``` + +- If you do not need the TiKV cluster any more, you can destroy it using the following command: + + ```bash + ansible-playbook unsafe_cleanup.yml + ``` + + > **Note:** If the deployment directory is a mount point, an error might be reported, but the implementation result remains unaffected. You can just ignore the error. \ No newline at end of file diff --git a/v2.0/tikv/deploy-tikv-using-binary.md b/v2.0/tikv/deploy-tikv-using-binary.md new file mode 100755 index 0000000000000..98cad512e2eda --- /dev/null +++ b/v2.0/tikv/deploy-tikv-using-binary.md @@ -0,0 +1,149 @@ +--- +title: Install and Deploy TiKV Using Binary Files +summary: Use binary files to deploy a TiKV cluster on a single machine or on multiple nodes for testing. +category: user guide +--- + +# Install and Deploy TiKV Using Binary Files + +This guide describes how to deploy a TiKV cluster using binary files. + +- To quickly understand and try TiKV, see [Deploy the TiKV cluster on a single machine](#deploy-the-tikv-cluster-on-a-single-machine). +- To try TiKV out and explore the features, see [Deploy the TiKV cluster on multiple nodes for testing](#deploy-the-tikv-cluster-on-multiple-nodes-for-testing). 
+
+## Deploy the TiKV cluster on a single machine
+
+This section describes how to deploy TiKV on a single machine running Linux. Take the following steps:
+
+1. Download the official binary package.
+
+    ```bash
+    # Download the package.
+    wget https://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
+    wget https://download.pingcap.org/tidb-latest-linux-amd64.sha256
+
+    # Check the file integrity. If the result is OK, the file is correct.
+    sha256sum -c tidb-latest-linux-amd64.sha256
+
+    # Extract the package.
+    tar -xzf tidb-latest-linux-amd64.tar.gz
+    cd tidb-latest-linux-amd64
+    ```
+
+2. Start PD.
+
+    ```bash
+    ./bin/pd-server --name=pd1 \
+                    --data-dir=pd1 \
+                    --client-urls="http://127.0.0.1:2379" \
+                    --peer-urls="http://127.0.0.1:2380" \
+                    --initial-cluster="pd1=http://127.0.0.1:2380" \
+                    --log-file=pd1.log
+    ```
+
+3. Start TiKV.
+
+    To start the 3 TiKV instances, open a new terminal tab or window for each instance, go to the `tidb-latest-linux-amd64` directory, and start each instance using the corresponding command:
+
+    ```bash
+    ./bin/tikv-server --pd-endpoints="127.0.0.1:2379" \
+                      --addr="127.0.0.1:20160" \
+                      --data-dir=tikv1 \
+                      --log-file=tikv1.log
+
+    ./bin/tikv-server --pd-endpoints="127.0.0.1:2379" \
+                      --addr="127.0.0.1:20161" \
+                      --data-dir=tikv2 \
+                      --log-file=tikv2.log
+
+    ./bin/tikv-server --pd-endpoints="127.0.0.1:2379" \
+                      --addr="127.0.0.1:20162" \
+                      --data-dir=tikv3 \
+                      --log-file=tikv3.log
+    ```
+
+You can use the [pd-ctl](https://github.com/pingcap/pd/tree/master/pdctl) tool to verify whether PD and TiKV are successfully deployed:
+
+```
+./bin/pd-ctl store -d -u http://127.0.0.1:2379
+```
+
+If the state of all the TiKV instances is "Up", you have successfully deployed a TiKV cluster.
+
+## Deploy the TiKV cluster on multiple nodes for testing
+
+This section describes how to deploy TiKV on multiple nodes. If you want to test TiKV with a limited number of nodes, you can use one PD instance to test the entire cluster.
+
+Assuming that you have four nodes, you can deploy 1 PD instance and 3 TiKV instances. For details, see the following table:
+
+| Name | Host IP | Services |
+| :-- | :-- | :------------------- |
+| Node1 | 192.168.199.113 | PD1 |
+| Node2 | 192.168.199.114 | TiKV1 |
+| Node3 | 192.168.199.115 | TiKV2 |
+| Node4 | 192.168.199.116 | TiKV3 |
+
+To deploy a TiKV cluster with multiple nodes for testing, take the following steps:
+
+1. Download the official binary package on each node.
+
+    ```bash
+    # Download the package.
+    wget https://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
+    wget https://download.pingcap.org/tidb-latest-linux-amd64.sha256
+
+    # Check the file integrity. If the result is OK, the file is correct.
+    sha256sum -c tidb-latest-linux-amd64.sha256
+
+    # Extract the package.
+    tar -xzf tidb-latest-linux-amd64.tar.gz
+    cd tidb-latest-linux-amd64
+    ```
+
+2. Start PD on Node1.
+
+    ```bash
+    ./bin/pd-server --name=pd1 \
+                    --data-dir=pd1 \
+                    --client-urls="http://192.168.199.113:2379" \
+                    --peer-urls="http://192.168.199.113:2380" \
+                    --initial-cluster="pd1=http://192.168.199.113:2380" \
+                    --log-file=pd1.log
+    ```
+
+3. Log in and start TiKV on the other nodes: Node2, Node3 and Node4.
+
+    Node2:
+
+    ```bash
+    ./bin/tikv-server --pd-endpoints="192.168.199.113:2379" \
+                      --addr="192.168.199.114:20160" \
+                      --data-dir=tikv1 \
+                      --log-file=tikv1.log
+    ```
+
+    Node3:
+
+    ```bash
+    ./bin/tikv-server --pd-endpoints="192.168.199.113:2379" \
+                      --addr="192.168.199.115:20160" \
+                      --data-dir=tikv2 \
+                      --log-file=tikv2.log
+    ```
+
+    Node4:
+
+    ```bash
+    ./bin/tikv-server --pd-endpoints="192.168.199.113:2379" \
+                      --addr="192.168.199.116:20160" \
+                      --data-dir=tikv3 \
+                      --log-file=tikv3.log
+    ```
+
+You can use the [pd-ctl](https://github.com/pingcap/pd/tree/master/pdctl) tool to verify whether PD and TiKV are successfully deployed:
+
+```
+./bin/pd-ctl store -d -u http://192.168.199.113:2379
+```
+
+The result displays the store count and detailed information regarding each store. If the state of all the TiKV instances is "Up", you have successfully deployed a TiKV cluster.
\ No newline at end of file
diff --git a/v2.0/tikv/deploy-tikv-using-docker.md b/v2.0/tikv/deploy-tikv-using-docker.md
new file mode 100755
index 0000000000000..ba32b9154977b
--- /dev/null
+++ b/v2.0/tikv/deploy-tikv-using-docker.md
@@ -0,0 +1,155 @@
+---
+title: Install and Deploy TiKV Using Docker
+summary: Use Docker to deploy a TiKV cluster on multiple nodes.
+category: user guide
+---
+
+# Install and Deploy TiKV Using Docker
+
+This guide describes how to deploy a multi-node TiKV cluster using Docker.
+
+## Prerequisites
+
+Make sure that Docker is installed on each machine.
+
+For more details about prerequisites, see [Hardware and Software Requirements](../op-guide/recommendation.md).
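A quick way to confirm this prerequisite is to check that the `docker` CLI is on the `PATH` of each machine; a minimal sketch (how you fan it out across machines, for example with `ansible -m shell`, is up to your own tooling):

```shell
# Report whether the docker CLI is installed on this machine.
if command -v docker >/dev/null 2>&1; then
  status="docker found: $(docker --version)"
else
  status="docker missing"
fi
echo "$status"
```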
+ +## Deploy the TiKV cluster on multiple nodes + +Assume that you have 6 machines with the following details: + +| Name | Host IP | Services | Data Path | +| --------- | ------------- | ---------- | --------- | +| Node1 | 192.168.1.101 | PD1 | /data | +| Node2 | 192.168.1.102 | PD2 | /data | +| Node3 | 192.168.1.103 | PD3 | /data | +| Node4 | 192.168.1.104 | TiKV1 | /data | +| Node5 | 192.168.1.105 | TiKV2 | /data | +| Node6 | 192.168.1.106 | TiKV3 | /data | + +If you want to test TiKV with a limited number of nodes, you can also use one PD instance to test the entire cluster. + +### Step 1: Pull the latest images of TiKV and PD from Docker Hub + +Start Docker and pull the latest images of TiKV and PD from [Docker Hub](https://hub.docker.com) using the following command: + +```bash +docker pull pingcap/tikv:latest +docker pull pingcap/pd:latest +``` + +### Step 2: Log in and start PD + +Log in to the three PD machines and start PD respectively: + +1. Start PD1 on Node1: + + ```bash + docker run -d --name pd1 \ + -p 2379:2379 \ + -p 2380:2380 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/pd:latest \ + --name="pd1" \ + --data-dir="/data/pd1" \ + --client-urls="http://0.0.0.0:2379" \ + --advertise-client-urls="http://192.168.1.101:2379" \ + --peer-urls="http://0.0.0.0:2380" \ + --advertise-peer-urls="http://192.168.1.101:2380" \ + --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" + ``` + +2. 
Start PD2 on Node2: + + ```bash + docker run -d --name pd2 \ + -p 2379:2379 \ + -p 2380:2380 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/pd:latest \ + --name="pd2" \ + --data-dir="/data/pd2" \ + --client-urls="http://0.0.0.0:2379" \ + --advertise-client-urls="http://192.168.1.102:2379" \ + --peer-urls="http://0.0.0.0:2380" \ + --advertise-peer-urls="http://192.168.1.102:2380" \ + --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" + ``` + +3. Start PD3 on Node3: + + ```bash + docker run -d --name pd3 \ + -p 2379:2379 \ + -p 2380:2380 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/pd:latest \ + --name="pd3" \ + --data-dir="/data/pd3" \ + --client-urls="http://0.0.0.0:2379" \ + --advertise-client-urls="http://192.168.1.103:2379" \ + --peer-urls="http://0.0.0.0:2380" \ + --advertise-peer-urls="http://192.168.1.103:2380" \ + --initial-cluster="pd1=http://192.168.1.101:2380,pd2=http://192.168.1.102:2380,pd3=http://192.168.1.103:2380" + ``` + +### Step 3: Log in and start TiKV + +Log in to the three TiKV machines and start TiKV respectively: + +1. Start TiKV1 on Node4: + + ```bash + docker run -d --name tikv1 \ + -p 20160:20160 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/tikv:latest \ + --addr="0.0.0.0:20160" \ + --advertise-addr="192.168.1.104:20160" \ + --data-dir="/data/tikv1" \ + --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" + ``` + +2. Start TiKV2 on Node5: + + ```bash + docker run -d --name tikv2 \ + -p 20160:20160 \ + -v /etc/localtime:/etc/localtime:ro \ + -v /data:/data \ + pingcap/tikv:latest \ + --addr="0.0.0.0:20160" \ + --advertise-addr="192.168.1.105:20160" \ + --data-dir="/data/tikv2" \ + --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" + ``` + +3. 
Start TiKV3 on Node6:
+
+    ```bash
+    docker run -d --name tikv3 \
+           -p 20160:20160 \
+           -v /etc/localtime:/etc/localtime:ro \
+           -v /data:/data \
+           pingcap/tikv:latest \
+           --addr="0.0.0.0:20160" \
+           --advertise-addr="192.168.1.106:20160" \
+           --data-dir="/data/tikv3" \
+           --pd="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379"
+    ```
+
+You can check whether the TiKV cluster has been successfully deployed using the following command:
+
+```
+curl 192.168.1.101:2379/pd/api/v1/stores
+```
+
+If the state of all the TiKV instances is "Up", you have successfully deployed a TiKV cluster.
+
+## What's next?
+
+If you want to try the Go client, see [Try Two Types of APIs](go-client-api.md).
\ No newline at end of file
diff --git a/v2.0/tikv/go-client-api.md b/v2.0/tikv/go-client-api.md
new file mode 100755
index 0000000000000..e7017a6382b33
--- /dev/null
+++ b/v2.0/tikv/go-client-api.md
@@ -0,0 +1,339 @@
+---
+title: Try Two Types of APIs
+summary: Learn how to use the Raw Key-Value API and the Transactional Key-Value API in TiKV.
+category: user guide
+---
+
+# Try Two Types of APIs
+
+To apply to different scenarios, TiKV provides [two types of APIs](tikv-overview.md#two-types-of-apis) for developers: the Raw Key-Value API and the Transactional Key-Value API. This document uses two examples to guide you through how to use the two APIs in TiKV.
+
+The usage examples are based on the [deployment of TiKV using binary files on multiple nodes for testing](deploy-tikv-using-binary.md#deploy-the-tikv-cluster-on-multiple-nodes-for-testing). You can also quickly try the two types of APIs on a single machine.
+
+## Try the Raw Key-Value API
+
+To use the Raw Key-Value API in applications developed in Go, take the following steps:
+
+1. Install the necessary packages.
+
+    ```bash
+    go get -v -u github.com/pingcap/tidb/store/tikv
+    ```
+
+2. Import the dependency packages.
+ + ```go + import ( + "fmt" + "github.com/pingcap/tidb/config" + "github.com/pingcap/tidb/store/tikv" + ) + ``` + +3. Create a Raw Key-Value client. + + ```go + cli, err := tikv.NewRawKVClient([]string{"192.168.199.113:2379"}, config.Security{}) + ``` + + Description of two parameters in the above command: + + - `string`: a list of PD servers’ addresses + - `config.Security`: used for establishing TLS connections, usually left empty when you do not need TLS + +4. Call the Raw Key-Value client methods to access the data on TiKV. The Raw Key-Value API contains the following methods, and you can also find them at [GoDoc](https://godoc.org/github.com/pingcap/tidb/store/tikv#RawKVClient). + + ```go + type RawKVClient struct + func (c *RawKVClient) Close() error + func (c *RawKVClient) ClusterID() uint64 + func (c *RawKVClient) Delete(key []byte) error + func (c *RawKVClient) Get(key []byte) ([]byte, error) + func (c *RawKVClient) Put(key, value []byte) error + func (c *RawKVClient) Scan(startKey []byte, limit int) (keys [][]byte, values [][]byte, err error) + ``` + +### Usage example of the Raw Key-Value API + +```go +package main + +import ( + "fmt" + + "github.com/pingcap/tidb/config" + "github.com/pingcap/tidb/store/tikv" +) + +func main() { + cli, err := tikv.NewRawKVClient([]string{"192.168.199.113:2379"}, config.Security{}) + if err != nil { + panic(err) + } + defer cli.Close() + + fmt.Printf("cluster ID: %d\n", cli.ClusterID()) + + key := []byte("Company") + val := []byte("PingCAP") + + // put key into tikv + err = cli.Put(key, val) + if err != nil { + panic(err) + } + fmt.Printf("Successfully put %s:%s to tikv\n", key, val) + + // get key from tikv + val, err = cli.Get(key) + if err != nil { + panic(err) + } + fmt.Printf("found val: %s for key: %s\n", val, key) + + // delete key from tikv + err = cli.Delete(key) + if err != nil { + panic(err) + } + fmt.Printf("key: %s deleted\n", key) + + // get key again from tikv + val, err = cli.Get(key) + if err != nil { + 
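+        // Note: getting a key that does not exist is not an error for the Raw
+        // Key-Value API: Get returns a nil value with no error, which is why
+        // the sample output below shows an empty value for "Company" after the
+        // delete and the program does not panic here.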
panic(err)
+    }
+    fmt.Printf("found val: %s for key: %s\n", val, key)
+}
+```
+
+The result is like:
+
+```bash
+INFO[0000] [pd] create pd client with endpoints [192.168.199.113:2379]
+INFO[0000] [pd] leader switches to: http://127.0.0.1:2379, previous:
+INFO[0000] [pd] init cluster id 6554145799874853483
+cluster ID: 6554145799874853483
+Successfully put Company:PingCAP to tikv
+found val: PingCAP for key: Company
+key: Company deleted
+found val:  for key: Company
+```
+
+RawKVClient is a client of the TiKV server and only supports the GET/PUT/DELETE/SCAN commands. The RawKVClient can be safely and concurrently accessed by multiple goroutines, as long as it is not closed. Therefore, for one process, one client is generally enough.
+
+## Try the Transactional Key-Value API
+
+The Transactional Key-Value API is more complicated than the Raw Key-Value API. Some transaction-related concepts are listed as follows. For more details, see the [KV package](https://github.com/pingcap/tidb/tree/master/kv).
+
+- Storage
+
+    Like the RawKVClient, a Storage is an abstraction of the TiKV cluster.
+
+- Snapshot
+
+    A Snapshot is the state of a Storage at a particular point of time, which provides some readonly methods. Multiple reads from the same Snapshot are guaranteed to be consistent.
+
+- Transaction
+
+    Like a transaction in SQL, a Transaction symbolizes a series of read and write operations performed within the Storage. Internally, a Transaction consists of a Snapshot for reads, and a MemBuffer for all writes. The default isolation level of a Transaction is Snapshot Isolation.
+
+To use the Transactional Key-Value API in applications developed in Go, take the following steps:
+
+1. Install the necessary packages.
+
+    ```bash
+    go get -v -u github.com/juju/errors
+    go get -v -u github.com/pingcap/tidb/kv
+    go get -v -u github.com/pingcap/tidb/store/tikv
+    go get -v -u golang.org/x/net/context
+    ```
+
+2. Import the dependency packages.
+ + ```go + import ( + "flag" + "fmt" + "os" + + "github.com/juju/errors" + "github.com/pingcap/tidb/kv" + "github.com/pingcap/tidb/store/tikv" + "github.com/pingcap/tidb/terror" + + goctx "golang.org/x/net/context" + ) + ``` + +3. Create Storage using a URL scheme. + + ```go + driver := tikv.Driver{} + storage, err := driver.Open("tikv://192.168.199.113:2379") + ``` + +4. (Optional) Modify the Storage using a Transaction. + + The lifecycle of a Transaction is: _begin → {get, set, delete, scan} → {commit, rollback}_. + +5. Call the Transactional Key-Value API's methods to access the data on TiKV. The Transactional Key-Value API contains the following methods: + + ```go + Begin() -> Txn + Txn.Get(key []byte) -> (value []byte) + Txn.Set(key []byte, value []byte) + Txn.Seek(begin []byte) -> Iterator + Txn.Delete(key []byte) + Txn.Commit() + ``` + +### Usage example of the Transactional Key-Value API + +```go +package main + +import ( + "flag" + "fmt" + "os" + + "github.com/juju/errors" + "github.com/pingcap/tidb/kv" + "github.com/pingcap/tidb/store/tikv" + "github.com/pingcap/tidb/terror" + + goctx "golang.org/x/net/context" +) + +type KV struct { + K, V []byte +} + +func (kv KV) String() string { + return fmt.Sprintf("%s => %s (%v)", kv.K, kv.V, kv.V) +} + +var ( + store kv.Storage + pdAddr = flag.String("pd", "192.168.199.113:2379", "pd address:192.168.199.113:2379") +) + +// Init initializes information. +func initStore() { + driver := tikv.Driver{} + var err error + store, err = driver.Open(fmt.Sprintf("tikv://%s", *pdAddr)) + terror.MustNil(err) +} + +// key1 val1 key2 val2 ... 
+func puts(args ...[]byte) error { + tx, err := store.Begin() + if err != nil { + return errors.Trace(err) + } + + for i := 0; i < len(args); i += 2 { + key, val := args[i], args[i+1] + err := tx.Set(key, val) + if err != nil { + return errors.Trace(err) + } + } + err = tx.Commit(goctx.Background()) + if err != nil { + return errors.Trace(err) + } + + return nil +} + +func get(k []byte) (KV, error) { + tx, err := store.Begin() + if err != nil { + return KV{}, errors.Trace(err) + } + v, err := tx.Get(k) + if err != nil { + return KV{}, errors.Trace(err) + } + return KV{K: k, V: v}, nil +} + +func dels(keys ...[]byte) error { + tx, err := store.Begin() + if err != nil { + return errors.Trace(err) + } + for _, key := range keys { + err := tx.Delete(key) + if err != nil { + return errors.Trace(err) + } + } + err = tx.Commit(goctx.Background()) + if err != nil { + return errors.Trace(err) + } + return nil +} + +func scan(keyPrefix []byte, limit int) ([]KV, error) { + tx, err := store.Begin() + if err != nil { + return nil, errors.Trace(err) + } + it, err := tx.Seek(keyPrefix) + if err != nil { + return nil, errors.Trace(err) + } + defer it.Close() + var ret []KV + for it.Valid() && limit > 0 { + ret = append(ret, KV{K: it.Key()[:], V: it.Value()[:]}) + limit-- + it.Next() + } + return ret, nil +} + +func main() { + pdAddr := os.Getenv("PD_ADDR") + if pdAddr != "" { + os.Args = append(os.Args, "-pd", pdAddr) + } + flag.Parse() + initStore() + + // set + err := puts([]byte("key1"), []byte("value1"), []byte("key2"), []byte("value2")) + terror.MustNil(err) + + // get + kv, err := get([]byte("key1")) + terror.MustNil(err) + fmt.Println(kv) + + // scan + ret, err := scan([]byte("key"), 10) + for _, kv := range ret { + fmt.Println(kv) + } + + // delete + err = dels([]byte("key1"), []byte("key2")) + terror.MustNil(err) +} +``` + +The result is like: + +```bash +INFO[0000] [pd] create pd client with endpoints [192.168.199.113:2379] +INFO[0000] [pd] leader switches to: 
http://192.168.199.113:2379, previous: +INFO[0000] [pd] init cluster id 6563858376412119197 +key1 => value1 ([118 97 108 117 101 49]) +key1 => value1 ([118 97 108 117 101 49]) +key2 => value2 ([118 97 108 117 101 50]) +``` diff --git a/v2.0/tikv/tikv-overview.md b/v2.0/tikv/tikv-overview.md new file mode 100755 index 0000000000000..2e10566fb3055 --- /dev/null +++ b/v2.0/tikv/tikv-overview.md @@ -0,0 +1,60 @@ +--- +title: Overview of TiKV +summary: Learn about the key features, architecture, and two types of APIs of TiKV. +category: overview +--- + +# Overview of TiKV + +TiKV (The pronunciation is: /'taɪkeɪvi:/ tai-K-V, etymology: titanium) is a distributed Key-Value database which is based on the design of Google Spanner and HBase, but it is much simpler without dependency on any distributed file system. + +As the storage layer of TiDB, TiKV can work separately and does not depend on the SQL layer of TiDB. To apply to different scenarios, TiKV provides [two types of APIs](#two-types-of-apis) for developers: the Raw Key-Value API and the Transactional Key-Value API. + +The key features of TiKV are as follows: + +- **Geo-Replication** + + TiKV uses [Raft](http://raft.github.io/) and the [Placement Driver](https://github.com/pingcap/pd/) to support Geo-Replication. + +- **Horizontal scalability** + + With Placement Driver and carefully designed Raft groups, TiKV excels in horizontal scalability and can easily scale to 100+ TBs of data. + +- **Consistent distributed transactions** + + Similar to Google's Spanner, TiKV supports externally-consistent distributed transactions. + +- **Coprocessor support** + + Similar to HBase, TiKV implements a Coprocessor framework to support distributed computing. 
+
+- **Cooperates with [TiDB](https://github.com/pingcap/tidb)**
+
+    Thanks to the internal optimization, TiKV and TiDB can work together to be a compelling database solution with high horizontal scalability, externally-consistent transactions, and support for RDBMS and NoSQL design patterns.
+
+## Architecture
+
+The TiKV server software stack is as follows:
+
+![The TiKV software stack](../media/tikv_stack.png)
+
+- **Placement Driver:** Placement Driver (PD) is the cluster manager of TiKV. PD periodically checks replication constraints to balance load and data automatically.
+- **Store:** There is a RocksDB within each Store and it stores data on the local disk.
+- **Region:** Region is the basic unit of Key-Value data movement. Each Region is replicated to multiple Nodes. These multiple replicas form a Raft group.
+- **Node:** A physical node in the cluster. Within each node, there are one or more Stores. Within each Store, there are many Regions.
+
+When a node starts, the metadata of the Node, Store and Region is recorded into PD. The status of each Region and Store is reported to PD regularly.
+
+## Two types of APIs
+
+TiKV provides two types of APIs for developers:
+
+- [The Raw Key-Value API](go-client-api.md#try-the-raw-key-value-api)
+
+    If your application scenario does not need distributed transactions or MVCC (Multi-Version Concurrency Control) and only needs to guarantee atomicity for a single key, you can use the Raw Key-Value API.
+
+- [The Transactional Key-Value API](go-client-api.md#try-the-transactional-key-value-api)
+
+    If your application scenario requires distributed ACID transactions and atomicity across multiple keys within a transaction, you can use the Transactional Key-Value API.
+
+Compared to the Transactional Key-Value API, the Raw Key-Value API offers better performance with lower latency and is easier to use.
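The Region concept above can be pictured as a routing table of contiguous key ranges. The following toy Go sketch is illustrative only (the `region` type and `locate` helper are not TiKV's actual code); it shows how a key is mapped to the Region that owns it by binary search over ranges sorted by start key:

```go
package main

import (
	"fmt"
	"sort"
)

// region is a toy stand-in for a TiKV Region: a contiguous key range
// [StartKey, EndKey) served by one Raft group. An empty EndKey means
// "up to positive infinity".
type region struct {
	StartKey, EndKey string
	ID               int
}

// regions simulates the routing metadata that PD maintains: Regions
// sorted by StartKey, together covering the whole key space.
var regions = []region{
	{StartKey: "", EndKey: "g", ID: 1},
	{StartKey: "g", EndKey: "p", ID: 2},
	{StartKey: "p", EndKey: "", ID: 3},
}

// locate returns the Region whose range contains key, using a binary
// search for the first Region whose EndKey is past the key.
func locate(rs []region, key string) region {
	i := sort.Search(len(rs), func(i int) bool {
		return rs[i].EndKey == "" || key < rs[i].EndKey
	})
	return rs[i]
}

func main() {
	for _, key := range []string{"apple", "melon", "zebra"} {
		fmt.Printf("key %q -> Region %d\n", key, locate(regions, key).ID)
	}
}
```

Splitting an overloaded Region or moving a replica to another Node only changes entries in this routing metadata, which is why Regions are the unit of data movement.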
\ No newline at end of file
diff --git a/v2.0/tispark/tispark-quick-start-guide.md b/v2.0/tispark/tispark-quick-start-guide.md
new file mode 100755
index 0000000000000..fc19378e63c76
--- /dev/null
+++ b/v2.0/tispark/tispark-quick-start-guide.md
@@ -0,0 +1,192 @@
+---
+title: TiSpark Quick Start Guide
+summary: Learn how to use TiSpark quickly.
+category: User Guide
+---
+
+# TiSpark Quick Start Guide
+
+To make it easy to [try TiSpark](tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, the TiSpark jar package and the TiSpark sample data by default.
+
+## Deployment information
+
+- Spark is deployed by default in the `spark` folder in the TiDB instance deployment directory.
+- The TiSpark jar package is deployed by default in the `jars` folder in the Spark deployment directory.
+
+    ```
+    spark/jars/tispark-SNAPSHOT-jar-with-dependencies.jar
+    ```
+
+- The TiSpark sample data and import scripts are deployed by default in the TiDB-Ansible directory.
+
+    ```
+    tidb-ansible/resources/bin/tispark-sample-data
+    ```
+
+## Prepare the environment
+
+### Install JDK on the TiDB instance
+
+Download the latest version of JDK 1.8 from the [Oracle JDK official download page](http://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8-2177648.html). The version used in the following example is `jdk-8u144-linux-x64.tar.gz`.
+
+Extract the package and set the environment variables based on your JDK deployment directory.
+
+Edit the `~/.bashrc` file. For example:
+
+```bash
+export JAVA_HOME=/home/pingcap/jdk1.8.0_144
+export PATH=$JAVA_HOME/bin:$PATH
+```
+
+Verify that the JDK is installed correctly:
+
+```
+$ java -version
+java version "1.8.0_144"
+Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
+Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
+```
+
+### Import the sample data
+
+Assume that the TiDB cluster is started.
The service IP of one TiDB instance is `192.168.0.2`, the port is `4000`, the user name is `root`, and the password is null. + +``` +cd tidb-ansible/resources/bin/tispark-sample-data +``` + +Edit the TiDB login information in `sample_data.sh`. For example: + +``` +mysql -h 192.168.0.2 -P 4000 -u root < dss.ddl +``` + +Run the script: + +``` +./sample_data.sh +``` + +> **Note**: You need to install the MySQL client on the machine that runs the script. If you are a CentOS user, you can install it through the command `yum -y install mysql`. + +Log into TiDB and verify that the `TPCH_001` database and the following tables are included. + +``` +$ mysql -uroot -P4000 -h192.168.0.2 +MySQL [(none)]> show databases; ++--------------------+ +| Database | ++--------------------+ +| INFORMATION_SCHEMA | +| PERFORMANCE_SCHEMA | +| TPCH_001 | +| mysql | +| test | ++--------------------+ +5 rows in set (0.00 sec) + +MySQL [(none)]> use TPCH_001 +Reading table information for completion of table and column names +You can turn off this feature to get a quicker startup with -A + +Database changed +MySQL [TPCH_001]> show tables; ++--------------------+ +| Tables_in_TPCH_001 | ++--------------------+ +| CUSTOMER | +| LINEITEM | +| NATION | +| ORDERS | +| PART | +| PARTSUPP | +| REGION | +| SUPPLIER | ++--------------------+ +8 rows in set (0.00 sec) +``` + +## Use example + +First start the spark-shell in the spark deployment directory: + +``` +$ cd spark +$ bin/spark-shell +``` + +```scala +import org.apache.spark.sql.TiContext +val ti = new TiContext(spark) + +// Mapping all TiDB tables from `TPCH_001` database as Spark SQL tables +ti.tidbMapDatabase("TPCH_001") +``` + +Then you can call Spark SQL directly: + +```scala +scala> spark.sql("select count(*) from lineitem").show +``` + +The result is: + +``` ++--------+ +|count(1)| ++--------+ +| 60175| ++--------+ +``` + +Now run a more complex Spark SQL: + +```scala +scala> spark.sql( + """select + | l_returnflag, + | l_linestatus, + | 
sum(l_quantity) as sum_qty, + | sum(l_extendedprice) as sum_base_price, + | sum(l_extendedprice * (1 - l_discount)) as sum_disc_price, + | sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, + | avg(l_quantity) as avg_qty, + | avg(l_extendedprice) as avg_price, + | avg(l_discount) as avg_disc, + | count(*) as count_order + |from + | lineitem + |where + | l_shipdate <= date '1998-12-01' - interval '90' day + |group by + | l_returnflag, + | l_linestatus + |order by + | l_returnflag, + | l_linestatus + """.stripMargin).show +``` + +The result is: + +``` ++------------+------------+---------+--------------+--------------+ +|l_returnflag|l_linestatus| sum_qty|sum_base_price|sum_disc_price| ++------------+------------+---------+--------------+--------------+ +| A| F|380456.00| 532348211.65|505822441.4861| +| N| F| 8971.00| 12384801.37| 11798257.2080| +| N| O|742802.00| 1041502841.45|989737518.6346| +| R| F|381449.00| 534594445.35|507996454.4067| ++------------+------------+---------+--------------+--------------+ +(Continued) +-----------------+---------+------------+--------+-----------+ + sum_charge| avg_qty| avg_price|avg_disc|count_order| +-----------------+---------+------------+--------+-----------+ + 526165934.000839|25.575155|35785.709307|0.050081| 14876| + 12282485.056933|25.778736|35588.509684|0.047759| 348| +1029418531.523350|25.454988|35691.129209|0.049931| 29181| + 528524219.358903|25.597168|35874.006533|0.049828| 14902| +-----------------+---------+------------+--------+-----------+ +``` + +See [more examples](https://github.com/ilovesoup/tpch/tree/master/sparksql). diff --git a/v2.0/tispark/tispark-user-guide.md b/v2.0/tispark/tispark-user-guide.md new file mode 100755 index 0000000000000..68408bd22fb33 --- /dev/null +++ b/v2.0/tispark/tispark-user-guide.md @@ -0,0 +1,250 @@ +--- +title: TiSpark User Guide +summary: Use TiSpark to provide an HTAP solution to serve as a one-stop solution for both online transactions and analysis. 
+category: user guide
+---
+
+# TiSpark User Guide
+
+[TiSpark](https://github.com/pingcap/tispark) is a thin layer built for running Apache Spark on top of TiDB/TiKV to answer complex OLAP queries. It takes advantage of both the Spark platform and the distributed TiKV cluster and seamlessly glues to TiDB, the distributed OLTP database, to provide a Hybrid Transactional/Analytical Processing (HTAP) solution that serves as a one-stop solution for both online transactions and analysis.
+
+TiSpark depends on the TiKV cluster and the PD cluster. You also need to set up a Spark cluster. This document provides a brief introduction to how to set up and use TiSpark. It requires some basic knowledge of Apache Spark. For more information, see the [Spark website](https://spark.apache.org/docs/latest/index.html).
+
+## Overview
+
+TiSpark is an OLAP solution that runs Spark SQL directly on TiKV, the distributed storage engine.
+
+![TiSpark architecture](../media/tispark-architecture.png)
+
++ TiSpark integrates deeply with the Spark Catalyst Engine. It provides precise control of computing, which allows Spark to read data from TiKV efficiently. It also supports index seek, which significantly improves the performance of point queries.
++ It utilizes several strategies to push down computing to reduce the size of the dataset handled by Spark SQL, which accelerates query execution. It also uses the TiDB built-in statistical information for query plan optimization.
++ From the data integration point of view, TiSpark and TiDB serve as one solution that runs both transactions and analysis directly on the same platform without building and maintaining any ETL pipelines. This simplifies the system architecture and reduces the cost of maintenance.
++ Also, you can deploy and utilize tools from the Spark ecosystem for further data processing and manipulation on TiDB.
For example, using TiSpark for data analysis and ETL; retrieving data from TiKV as a machine learning data source; generating reports from the scheduling system and so on.
+
+## Environment setup
+
++ The current version of TiSpark supports Spark 2.1. Spark 2.0 and Spark 2.2 have not been fully tested yet. TiSpark does not support any versions earlier than 2.0.
++ TiSpark requires JDK 1.8+ and Scala 2.11 (the default Scala version for Spark 2.0+).
++ TiSpark runs in any Spark mode such as YARN, Mesos, and Standalone.
+
+## Recommended configuration
+
+This section describes the configuration of independent deployment of TiKV and TiSpark, independent deployment of Spark and TiSpark, and hybrid deployment of TiKV and TiSpark.
+
+### Configuration of independent deployment of TiKV and TiSpark
+
+For independent deployment of TiKV and TiSpark, refer to the following recommendations:
+
++ Hardware configuration
+    - For general purposes, refer to the TiDB and TiKV hardware configuration [recommendations](../op-guide/recommendation.md#deployment-recommendations).
+    - If the usage is more focused on analysis scenarios, you can increase the memory of the TiKV nodes to at least 64G.
+
++ TiKV parameters (default)
+
+    ```
+    [server]
+    end-point-concurrency = 8 # For OLAP scenarios, consider increasing this parameter
+
+    [raftstore]
+    sync-log = false
+
+    [rocksdb]
+    max-background-compactions = 6
+    max-background-flushes = 2
+
+    [rocksdb.defaultcf]
+    block-cache-size = "10GB"
+
+    [rocksdb.writecf]
+    block-cache-size = "4GB"
+
+    [rocksdb.raftcf]
+    block-cache-size = "1GB"
+
+    [rocksdb.lockcf]
+    block-cache-size = "1GB"
+
+    [storage]
+    scheduler-worker-pool-size = 4
+    ```
+
+### Configuration of independent deployment of Spark and TiSpark
+
+See the [Spark official website](https://spark.apache.org/docs/latest/hardware-provisioning.html) for detailed hardware recommendations.
+
+The following is a short overview of TiSpark configuration.
+
+It is recommended to allocate 32G of memory for Spark, and reserve at least 25% of the memory for the operating system and its buffer cache.
+
+It is recommended to provision at least 8 to 16 cores per machine for Spark. Initially, you can assign all the CPU cores to Spark.
+
+See the [official configuration](https://spark.apache.org/docs/latest/spark-standalone.html) on the Spark website. The following is an example based on the `spark-env.sh` configuration:
+
+```sh
+SPARK_EXECUTOR_MEMORY = 32g
+SPARK_WORKER_MEMORY = 32g
+SPARK_WORKER_CORES = 8
+```
+
+### Configuration of hybrid deployment of TiKV and TiSpark
+
+For the hybrid deployment of TiKV and TiSpark, add the resources required by TiSpark to the resources reserved in TiKV, and allocate 25% of the memory for the system.
+
+## Deploy the TiSpark cluster
+
+Download TiSpark's jar package [here](http://download.pingcap.org/tispark-0.1.0-SNAPSHOT-jar-with-dependencies.jar).
+
+### Deploy TiSpark on the existing Spark cluster
+
+Running TiSpark on an existing Spark cluster does not require a reboot of the cluster. You can use Spark's `--jars` parameter to introduce TiSpark as a dependency:
+
+```sh
+spark-shell --jars $PATH/tispark-0.1.0.jar
+```
+
+If you want to deploy TiSpark as a default component, simply place the TiSpark jar package into the jars path of each node of the Spark cluster and restart the Spark cluster:
+
+```sh
+${SPARK_INSTALL_PATH}/jars
+```
+
+In this way, you can use either `spark-submit` or `spark-shell` to use TiSpark directly.
+
+### Deploy TiSpark without the Spark cluster
+
+If you do not have a Spark cluster, we recommend using the Spark Standalone mode, by placing a compiled version of Spark on each node of the cluster. If you encounter problems, see the [official website](https://spark.apache.org/docs/latest/spark-standalone.html) of Spark Standalone. You are also welcome to [file an issue](https://github.com/pingcap/tispark/issues/new) on our GitHub.
+
+#### Download and install
+
+You can download [Apache Spark](https://spark.apache.org/downloads.html).
+
+For the Standalone mode without Hadoop support, use Spark 2.1.x and any version of Pre-built with Apache Hadoop 2.x with Hadoop dependencies. If you need to use the Hadoop cluster, choose the corresponding Hadoop version. You can also choose to build from the [source code](https://spark.apache.org/docs/2.1.0/building-spark.html) to match an earlier official Hadoop version such as Hadoop 2.6. Note that TiSpark currently only supports Spark 2.1.x.
+
+Suppose you already have the Spark binaries, and the current path is `SPARKPATH`. Copy the TiSpark jar package to the `${SPARKPATH}/jars` directory.
+
+#### Start a Master node
+
+Execute the following command on the selected Spark Master node:
+
+```sh
+cd $SPARKPATH
+
+./sbin/start-master.sh
+```
+
+After the above step is completed, the path of a log file is printed on the screen. Check the log file to confirm whether the Spark Master has started successfully. You can open [http://spark-master-hostname:8080](http://spark-master-hostname:8080) to view the cluster information (if you did not change the default port number of the Spark Master). When you start the Spark Slaves, you can also use this panel to confirm whether the Slaves have joined the cluster.
+
+#### Start a Slave node
+
+Similarly, you can start a Spark Slave node with the following command:
+
+```sh
+./sbin/start-slave.sh spark://spark-master-hostname:7077
+```
+
+After the command returns, you can also check from the panel whether the Slave node has joined the Spark cluster correctly. Repeat the above command on all Slave nodes. After all Slaves are connected to the Master, you have a Standalone mode Spark cluster.
+
+#### Spark SQL shell and JDBC server
+
+If you want to use the JDBC server and the interactive SQL shell, copy `start-tithriftserver.sh` and `stop-tithriftserver.sh` to your Spark's `sbin` folder and `tispark-sql` to the `bin` folder.
+
+To start the interactive shell:
+
+```sh
+./bin/tispark-sql
+```
+
+To use the Thrift Server, start it in a similar way as the default Spark Thrift Server:
+
+```sh
+./sbin/start-tithriftserver.sh
+```
+
+And stop it as follows:
+
+```sh
+./sbin/stop-tithriftserver.sh
+```
+
+## Demo
+
+Assuming that you have successfully started the TiSpark cluster as described above, here's a quick introduction to how to use Spark SQL for OLAP analysis. Here we use a table named `lineitem` in the `tpch` database as an example.
+
+Assuming that your PD node is located at `192.168.1.100`, port `2379`, add the following line to `$SPARK_HOME/conf/spark-defaults.conf`:
+
+```
+spark.tispark.pd.addresses 192.168.1.100:2379
+```
+
+Then enter the following commands in the Spark Shell:
+
+```scala
+import org.apache.spark.sql.TiContext
+val ti = new TiContext(spark)
+ti.tidbMapDatabase("tpch")
+```
+
+After that you can call Spark SQL directly:
+
+```scala
+spark.sql("select count(*) from lineitem").show
+```
+
+The result is:
+
+```
++-------------+
+|    count(1) |
++-------------+
+| 600000000   |
++-------------+
+```
+
+TiSpark's SQL interactive shell is almost the same as the Spark SQL shell.
+
+```sh
+tispark-sql> use tpch;
+Time taken: 0.015 seconds
+
+tispark-sql> select count(*) from lineitem;
+2000
+Time taken: 0.673 seconds, Fetched 1 row(s)
+```
+
+For a JDBC connection with the Thrift Server, you can try various JDBC-supported tools including SQuirreLSQL and hive-beeline.
+For example, to use it with beeline:
+
+```sh
+./beeline
+Beeline version 1.2.2 by Apache Hive
+beeline> !connect jdbc:hive2://localhost:10000
+
+1: jdbc:hive2://localhost:10000> use testdb;
++---------+--+
+| Result  |
++---------+--+
++---------+--+
+No rows selected (0.013 seconds)
+
+select count(*) from account;
++-----------+--+
+| count(1)  |
++-----------+--+
+| 1000000   |
++-----------+--+
+1 row selected (1.97 seconds)
+```
+
+## TiSparkR
+
+TiSparkR is a thin layer built to support the R language with TiSpark.
Refer to [this document](https://github.com/pingcap/tispark/blob/master/R/README.md) for usage.
+
+## TiSpark on PySpark
+
+TiSpark on PySpark is a Python package built to support the Python language with TiSpark. Refer to [this document](https://github.com/pingcap/tispark/blob/master/python/README.md) for usage.
+
+## FAQ
+
+Q: What are the pros/cons of independent deployment as opposed to a shared resource with an existing Spark / Hadoop cluster?
+
+A: You can use the existing Spark cluster without a separate deployment, but if the existing cluster is busy, TiSpark will not be able to achieve the desired speed.
+
+Q: Can I mix Spark with TiKV?
+
+A: If TiDB and TiKV are overloaded and run critical online tasks, consider deploying TiSpark separately. You also need to consider using different NICs to ensure that OLTP's network resources are not compromised, so that the online business is not affected. If the online business requirements are not high or the load is not heavy enough, you can consider mixing TiSpark with the TiKV deployment.
\ No newline at end of file
diff --git a/v2.0/tools/loader.md b/v2.0/tools/loader.md
new file mode 100755
index 0000000000000..a6ed18d416bdf
--- /dev/null
+++ b/v2.0/tools/loader.md
@@ -0,0 +1,147 @@
+---
+title: Loader Instructions
+summary: Use Loader to load data to TiDB.
+category: advanced
+---
+
+# Loader Instructions
+
+## What is Loader?
+
+Loader is a data import tool that loads data to TiDB.
+
+[Download the Binary](http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz).
+
+## Why did we develop Loader?
+
+Since tools like mysqldump would take days to migrate massive amounts of data, we used the [mydumper/myloader suite](https://github.com/maxbube/mydumper) to export and import data with multiple threads. During the process, we found that mydumper works well. However, as myloader lacks the error retry and savepoint functions, it is inconvenient for us to use. Therefore, we developed Loader, which reads the output data files of mydumper and imports data to TiDB through the MySQL protocol.
+
+## What can Loader do?
+
++ Import data with multiple threads
+
++ Support table level concurrent import and scattered hot spot writes
+
++ Support concurrent import of a single large table and scattered hot spot writes
+
++ Support the mydumper data format
+
++ Support error retry
+
++ Support savepoint
+
++ Improve the speed of importing data through system variables
+
+## Usage
+
+> **Note:**
+>
+> - Do not import the `mysql` system database from the MySQL instance to the downstream TiDB instance.
+> - If mydumper uses the `-m` parameter, the data is exported without the table structure and Loader cannot import the data.
+> - If you use the default `checkpoint-schema` parameter, after importing the data of a database, run `drop database tidb_loader` before you begin to import the next database.
+> - It is recommended to specify the `checkpoint-schema = "tidb_loader"` parameter when importing data.
+
+### Parameter description
+
+```
+  -L string: the log level setting, which can be set as debug, info, warn, error, fatal (default: "info")
+
+  -P int: the port of TiDB (default: 4000)
+
+  -V boolean: print the version and exit
+
+  -c string: config file
+
+  -checkpoint-schema string: the database name of checkpoint. In the execution process, Loader constantly updates this database. After recovering from an interruption, Loader gets the progress of the last run through this database. (default: "tidb_loader")
+
+  -d string: the storage directory of the data that needs to be imported (default: "./")
+
+  -h string: the host of TiDB (default: "127.0.0.1")
+
+  -p string: the password of TiDB
+
+  -pprof-addr string: the pprof address of Loader. It is used to tune the performance of Loader (default: ":10084")
+
+  -t int: the number of threads; increase this as the number of TiKV nodes increases (default: 16)
+
+  -u string: the user name of TiDB (default: "root")
+```
+
+### Configuration file
+
+Apart from command line parameters, you can also use configuration files. The format is shown as below:
+
+```toml
+# Loader log level, which can be set as "debug", "info", "warn", "error" and "fatal" (default: "info")
+log-level = "info"
+
+# Loader log file
+log-file = "loader.log"
+
+# Directory of the dump to import (default: "./")
+dir = "./"
+
+# Loader pprof address, used to tune the performance of Loader (default: "127.0.0.1:10084")
+pprof-addr = "127.0.0.1:10084"
+
+# The checkpoint data is saved to TiDB, and the schema name is defined here.
+checkpoint-schema = "tidb_loader"
+
+# Number of threads restoring concurrently for the worker pool (default: 16). Each worker restores one file at a time.
+pool-size = 16
+
+# The target database information
+[db]
+host = "127.0.0.1"
+user = "root"
+password = ""
+port = 4000
+
+# The sharding synchronising rules support wildcard characters.
+# 1. The asterisk character (*, also called "star") matches zero or more characters,
+#    for example, "doc*" matches "doc" and "document" but not "dodo";
+#    the asterisk character must be at the end of the wildcard word,
+#    and there is only one asterisk in one wildcard word.
+# 2. The question mark '?' matches exactly one character.
+# [[route-rules]]
+# pattern-schema = "shard_db_*"
+# pattern-table = "shard_table_*"
+# target-schema = "shard_db"
+# target-table = "shard_table"
+```
+
+### Usage example
+
+Command line parameters:
+
+```
+./bin/loader -d ./test -h 127.0.0.1 -u root -P 4000
+```
+
+Or use the configuration file `config.toml`:
+
+```
+./bin/loader -c=config.toml
+```
+
+## FAQ
+
+### The scenario of synchronising data from sharded tables
+
+Loader supports importing data from sharded tables into one table within one database according to the route-rules.
Before synchronising, check the following items:
+
+- Whether the sharding rules can be represented using the `route-rules` syntax.
+- Whether the sharded tables contain monotonically increasing primary keys, or whether there are conflicts in the unique indexes or the primary keys after the combination.
+
+To combine tables, enable the `route-rules` parameter in the configuration file of Loader:
+
+- To use the table combination function, it is required to fill in the `pattern-schema` and `target-schema`.
+- If the `pattern-table` and `target-table` are NULL, the table name is not combined or converted.
+
+```
+[[route-rules]]
+pattern-schema = "example_db"
+pattern-table = "table_*"
+target-schema = "example_db"
+target-table = "table"
+```
\ No newline at end of file
diff --git a/v2.0/tools/pd-control.md b/v2.0/tools/pd-control.md
new file mode 100755
index 0000000000000..ae5932b6092ec
--- /dev/null
+++ b/v2.0/tools/pd-control.md
@@ -0,0 +1,622 @@
+---
+title: PD Control User Guide
+summary: Use PD Control to obtain the state information of a cluster and tune a cluster.
+category: tools
+---
+
+# PD Control User Guide
+
+As a command line tool of PD, PD Control obtains the state information of the cluster and tunes the cluster.
+
+## Source code compiling
+
+1. [Go](https://golang.org/) Version 1.9 or later
+2. In the root directory of the [PD project](https://github.com/pingcap/pd), use the `make` command to compile and generate `bin/pd-ctl`
+
+> **Note:** Generally, you do not need to compile the source code as the PD Control tool already exists in the released binary or Docker image. However, dev users can refer to the instruction above for compiling the source code.
+
+## Usage
+
+Single-command mode:
+
+    ./pd-ctl store -d -u http://127.0.0.1:2379
+
+Interactive mode:
+
+    ./pd-ctl -u http://127.0.0.1:2379
+
+Use environment variables:
+
+```bash
+export PD_ADDR=http://127.0.0.1:2379
+./pd-ctl
+```
+
+Use TLS to encrypt:
+
+```bash
+./pd-ctl -u https://127.0.0.1:2379 --cacert="path/to/ca" --cert="path/to/cert" --key="path/to/key"
+```
+
+## Command line flags
+
+### \-\-pd,-u
+
++ PD address
++ Default address: http://127.0.0.1:2379
++ Environment variable: PD_ADDR
+
+### \-\-detach,-d
+
++ Use single command line mode (not entering readline)
++ Default: false
+
+### --cacert
+
++ Specify the path to the certificate file of the trusted CA in PEM format
++ Default: ""
+
+### --cert
+
++ Specify the path to the SSL certificate in PEM format
++ Default: ""
+
+### --key
+
++ Specify the path to the SSL certificate key file in PEM format, which is the private key of the certificate specified by `--cert`
++ Default: ""
+
+### --version,-V
+
++ Print the version information and exit
++ Default: false
+
+## Command
+
+### `cluster`
+
+Use this command to view the basic information of the cluster.
+
+Usage:
+
+```bash
+>> cluster // To show the cluster information
+{
+  "id": 6493707687106161130,
+  "max_peer_count": 3
+}
+```
+
+### `config [show | set <option> <value>]`
+
+Use this command to view or modify the configuration information.
+ +Usage: + +```bash +>> config show // Display the config information of the scheduler +{ + "max-snapshot-count": 3, + "max-pending-peer-count": 16, + "max-merge-region-size": 50, + "max-merge-region-rows": 200000, + "split-merge-interval": "1h", + "patrol-region-interval": "100ms", + "max-store-down-time": "1h0m0s", + "leader-schedule-limit": 4, + "region-schedule-limit": 4, + "replica-schedule-limit":8, + "merge-schedule-limit": 8, + "tolerant-size-ratio": 5, + "low-space-ratio": 0.8, + "high-space-ratio": 0.6, + "disable-raft-learner": "false", + "disable-remove-down-replica": "false", + "disable-replace-offline-replica": "false", + "disable-make-up-replica": "false", + "disable-remove-extra-replica": "false", + "disable-location-replacement": "false", + "disable-namespace-relocation": "false", + "schedulers-v2": [ + { + "type": "balance-region", + "args": null + }, + { + "type": "balance-leader", + "args": null + }, + { + "type": "hot-region", + "args": null + } + ] +} +>> config show all // Display all config information +>> config show namespace ts1 // Display the config information of the namespace named ts1 +{ + "leader-schedule-limit": 4, + "region-schedule-limit": 4, + "replica-schedule-limit": 8, + "max-replicas": 3, +} +>> config show replication // Display the config information of replication +{ + "max-replicas": 3, + "location-labels": "" +} +>> config show cluster-version // Display the current version of the cluster, which is the current minimum version of TiKV nodes in the cluster and does not correspond to the binary version. +"2.0.0" +``` + +- `max-snapshot-count` controls the maximum number of snapshots that a single store receives or sends out at the same time. The scheduler is restricted by this configuration to avoid taking up normal application resources. When you need to improve the speed of adding replicas or balancing, increase this value. 
+
+    ```bash
+    >> config set max-snapshot-count 16 // Set the maximum number of snapshots to 16
+    ```
+
+- `max-pending-peer-count` controls the maximum number of pending peers in a single store. The scheduler is restricted by this configuration to avoid producing a large number of Regions without the latest log in some nodes. When you need to improve the speed of adding replicas or balancing, increase this value. Setting it to 0 indicates no limit.
+
+    ```bash
+    >> config set max-pending-peer-count 64 // Set the maximum number of pending peers to 64
+    ```
+
+- `max-merge-region-size` controls the upper limit on the size of Region Merge (the unit is M). When `regionSize` exceeds the specified value, PD does not merge the Region with the adjacent Region. Setting it to 0 disables Region Merge.
+
+    ```bash
+    >> config set max-merge-region-size 16 // Set the upper limit on the size of Region Merge to 16M
+    ```
+
+- `max-merge-region-rows` controls the upper limit on the row count of Region Merge. When `regionRowCount` exceeds the specified value, PD does not merge the Region with the adjacent Region.
+
+    ```bash
+    >> config set max-merge-region-rows 50000 // Set the upper limit on rowCount to 50000
+    ```
+
+- `split-merge-interval` controls the interval between the `split` and `merge` operations on the same Region. This means a newly split Region is not merged within the specified period of time.
+
+    ```bash
+    >> config set split-merge-interval 24h // Set the interval between `split` and `merge` to one day
+    ```
+
+- `patrol-region-interval` controls the frequency at which `replicaChecker` checks the health status of Regions. A shorter interval indicates a higher frequency. Generally, you do not need to adjust it.
+
+    ```bash
+    >> config set patrol-region-interval 10ms // Set the frequency at which replicaChecker runs to 10ms
+    ```
+
+- `max-store-down-time` controls the time after which PD decides that a disconnected store cannot be recovered.
If PD does not receive heartbeats from a store within the specified period of time, PD adds replicas in other nodes.
+
+    ```bash
+    >> config set max-store-down-time 30m // Set the time within which PD receives no heartbeats and after which PD starts to add replicas to 30 minutes
+    ```
+
+- `leader-schedule-limit` controls the number of leader scheduling tasks running at the same time. This value affects the speed of leader balance. A larger value means a higher speed, and setting the value to 0 disables the scheduling. Usually leader scheduling has a small load, and you can increase the value as needed.
+
+    ```bash
+    >> config set leader-schedule-limit 4 // 4 tasks of leader scheduling at the same time at most
+    ```
+
+- `region-schedule-limit` controls the number of Region scheduling tasks running at the same time. This value affects the speed of Region balance. A larger value means a higher speed, and setting the value to 0 disables the scheduling. Usually Region scheduling has a large load, so do not set the value too large.
+
+    ```bash
+    >> config set region-schedule-limit 2 // 2 tasks of Region scheduling at the same time at most
+    ```
+
+- `replica-schedule-limit` controls the number of replica scheduling tasks running at the same time. This value affects the scheduling speed when a node is down or removed. A larger value means a higher speed, and setting the value to 0 disables the scheduling. Usually replica scheduling has a large load, so do not set the value too large.
+
+    ```bash
+    >> config set replica-schedule-limit 4 // 4 tasks of replica scheduling at the same time at most
+    ```
+
+- `merge-schedule-limit` controls the number of Region Merge scheduling tasks. Setting the value to 0 disables Region Merge. Usually Merge scheduling has a large load, so do not set the value too large.
+
+    ```bash
+    >> config set merge-schedule-limit 16 // 16 tasks of Merge scheduling at the same time at most
+    ```
+
+The configuration above is global.
You can also tune the configuration for each namespace separately. The global configuration is used if the corresponding configuration of the namespace is not set.
+
+> **Note:** The namespace-level configuration only supports setting `leader-schedule-limit`, `region-schedule-limit`, `replica-schedule-limit` and `max-replicas`.
+
+ ```bash
+ >> config set namespace ts1 leader-schedule-limit 4 // 4 tasks of leader scheduling at the same time at most for the namespace named ts1
+ >> config set namespace ts2 region-schedule-limit 2 // 2 tasks of Region scheduling at the same time at most for the namespace named ts2
+ ```
+
+- `tolerant-size-ratio` controls the size of the balance buffer area. When the difference between the leader scores or Region scores of two stores is less than the specified multiple of the Region size, PD considers the two stores balanced.
+
+ ```bash
+ >> config set tolerant-size-ratio 20 // Set the size of the buffer area to about 20 times the average regionSize
+ ```
+
+- `low-space-ratio` controls the threshold of space occupancy above which a store is considered to be running out of space. When the ratio of the space occupied by a node exceeds the specified value, PD tries to avoid migrating data to that node as much as possible. At the same time, PD schedules mainly based on the remaining space, to avoid exhausting the disk space of the node.
+
+ ```bash
+ >> config set low-space-ratio 0.9 // Set the threshold value of insufficient space to 0.9
+ ```
+
+- `high-space-ratio` controls the threshold of space occupancy below which a store is considered to have sufficient space. When the ratio of the space occupied by a node is less than the specified value, PD ignores the remaining space and schedules mainly based on the actual data volume.
+
+ ```bash
+ >> config set high-space-ratio 0.5 // Set the threshold value of sufficient space to 0.5
+ ```
+
+- `disable-raft-learner` is used to disable Raft learner.
By default, PD uses Raft learner when adding replicas to reduce the risk of unavailability due to downtime or network failure.
+
+ ```bash
+ >> config set disable-raft-learner true // Disable Raft learner
+ ```
+
+- `cluster-version` is the version of the cluster, which is used to enable or disable some features and to deal with compatibility issues. By default, it is the minimum version of all normally running TiKV nodes in the cluster. Set it manually only when you need to roll it back to an earlier version.
+
+ ```bash
+ >> config set cluster-version 1.0.8 // Set the version of the cluster to 1.0.8
+ ```
+
+- `disable-remove-down-replica` is used to disable the feature of automatically deleting DownReplica. When you set it to `true`, PD does not automatically clean up the replicas in the Down state.
+
+- `disable-replace-offline-replica` is used to disable the feature of migrating OfflineReplica. When you set it to `true`, PD does not migrate the offline replicas.
+
+- `disable-make-up-replica` is used to disable the feature of making up replicas. When you set it to `true`, PD does not add replicas for Regions without sufficient replicas.
+
+- `disable-remove-extra-replica` is used to disable the feature of removing extra replicas. When you set it to `true`, PD does not remove extra replicas for Regions with redundant replicas.
+
+- `disable-location-replacement` is used to disable the isolation level check. When you set it to `true`, PD does not improve the isolation level of Region replicas by scheduling.
+
+- `disable-namespace-relocation` is used to disable Region relocation to the store of its namespace. When you set it to `true`, PD does not move Regions to the stores of the namespace they belong to.
+
+### `config delete namespace <name> [<config_item>]`
+
+Use this command to delete the configuration of a namespace.
+
+Usage:
+
+After you configure a namespace, if you want it to use the global configuration again, delete the configuration information of the namespace using the following command:
+
+```bash
+>> config delete namespace ts1 // Delete the configuration of the namespace named ts1
+```
+
+If you want to fall back to the global configuration for only a certain configuration item of the namespace, use the following command:
+
+```bash
+>> config delete namespace region-schedule-limit ts2 // Delete the region-schedule-limit configuration of the namespace named ts2
+```
+
+### `health`
+
+Use this command to view the health information of the cluster.
+
+Usage:
+
+```bash
+>> health // Display the health information
+{"health": "true"}
+```
+
+### `hot [read | write | store]`
+
+Use this command to view the hot spot information of the cluster.
+
+Usage:
+
+```bash
+>> hot read // Display hot spots for the read operation
+>> hot write // Display hot spots for the write operation
+>> hot store // Display hot spots for all the read and write operations
+```
+
+### `label [store <name> <value>]`
+
+Use this command to view the label information of the cluster.
+
+Usage:
+
+```bash
+>> label // Display all labels
+>> label store zone cn // Display all stores that have the "zone":"cn" label
+```
+
+### `member [delete | leader_priority | leader [show | resign | transfer <member_name>]]`
+
+Use this command to view the PD members, remove a specified member, or configure the priority of the leader.
+
+Usage:
+
+```bash
+>> member // Display the information of all members
+{
+  "members": [......],
+  "leader": {......},
+  "etcd_leader": {......}
+}
+>> member delete name pd2 // Delete the member named "pd2"
+Success!
+>> member delete id 1319539429105371180 // Delete a member using its id
+Success!
+>> member leader show // Display the leader information
+{
+  "name": "pd",
+  "addr": "http://192.168.199.229:2379",
+  "id": 9724873857558226554
+}
+>> member leader resign // Move the leader away from the current member
+......
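+>> member leader_priority pd1 4 // Set the leader priority of the member named "pd1" to 4 (the member name and value are illustrative; leader_priority is the subcommand listed in this command's syntax)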
+
+>> member leader transfer pd3 // Migrate the leader to a specified member
+......
+```
+
+### `operator [show | add | remove]`
+
+Use this command to view and control the scheduling operations.
+
+Usage:
+
+```bash
+>> operator show // Display all operators
+>> operator show admin // Display all admin operators
+>> operator show leader // Display all leader operators
+>> operator show region // Display all Region operators
+>> operator add add-peer 1 2 // Add a replica of Region 1 on store 2
+>> operator remove remove-peer 1 2 // Remove the replica of Region 1 on store 2
+>> operator add transfer-leader 1 2 // Schedule the leader of Region 1 to store 2
+>> operator add transfer-region 1 2 3 4 // Schedule Region 1 to stores 2, 3 and 4
+>> operator add transfer-peer 1 2 3 // Schedule the replica of Region 1 on store 2 to store 3
+>> operator add merge-region 1 2 // Merge Region 1 with Region 2
+>> operator add split-region 1 --policy=approximate // Split Region 1 into two Regions in halves, based on the approximately estimated value
+>> operator add split-region 1 --policy=scan // Split Region 1 into two Regions in halves, based on the accurate scan value
+>> operator remove 1 // Remove the scheduling operation of Region 1
+```
+
+### `ping`
+
+Use this command to view the time that it takes to `ping` PD.
+
+Usage:
+
+```bash
+>> ping
+time: 43.12698ms
+```
+
+### `region <region_id> [--jq="<query string>"]`
+
+Use this command to view the Region information. For a jq formatted output, see [jq-formatted-json-output-usage](#jq-formatted-json-output-usage).
+
+Usage:
+
+```bash
+>> region // Display the information of all Regions
+{
+  "count": 1,
+  "regions": [......]
+}
+
+>> region 2 // Display the information of the Region with the id of 2
+{
+  "region": {
+    "id": 2,
+    ......
+  },
+  "leader": {
+    ......
+  }
+}
+```
+
+### `region key [--format=raw|pb|proto|protobuf] <key>`
+
+Use this command to query the Region that a specific key resides in. It supports the raw and protobuf formats.
+
+Raw format usage (default):
+
+```bash
+>> region key abc
+{
+  "region": {
+    "id": 2,
+    ......
+  }
+}
+```
+
+Protobuf format usage:
+
+```bash
+>> region key --format=pb t\200\000\000\000\000\000\000\377\035_r\200\000\000\000\000\377\017U\320\000\000\000\000\000\372
+{
+  "region": {
+    "id": 2,
+    ......
+  }
+}
+```
+
+### `region sibling <region_id>`
+
+Use this command to check the adjacent Regions of a specific Region.
+
+Usage:
+
+```bash
+>> region sibling 2
+{
+  "count": 2,
+  "regions": [......]
+}
+```
+
+### `region check [miss-peer | extra-peer | down-peer | pending-peer | incorrect-ns]`
+
+Use this command to check the Regions in abnormal conditions.
+
+Description of the types:
+
+- miss-peer: the Region without enough replicas
+- extra-peer: the Region with extra replicas
+- down-peer: the Region in which some replicas are Down
+- pending-peer: the Region in which some replicas are Pending
+- incorrect-ns: the Region in which some replicas deviate from the namespace constraints
+
+Usage:
+
+```bash
+>> region check miss-peer
+{
+  "count": 2,
+  "regions": [......]
+}
+```
+
+### `scheduler [show | add | remove]`
+
+Use this command to view and control the scheduling strategy.
+
+Usage:
+
+```bash
+>> scheduler show // Display all schedulers
+>> scheduler add grant-leader-scheduler 1 // Schedule all the leaders of the Regions on store 1 to store 1
+>> scheduler add evict-leader-scheduler 1 // Move all the Region leaders on store 1 out
+>> scheduler add shuffle-leader-scheduler // Randomly exchange the leaders among different stores
+>> scheduler add shuffle-region-scheduler // Randomly schedule the Regions among different stores
+>> scheduler remove grant-leader-scheduler-1 // Remove the corresponding scheduler
+```
+
+### `store [delete | label | weight] <store_id> [--jq="<query string>"]`
+
+Use this command to view the store information or remove a specified store. For a jq formatted output, see [jq-formatted-json-output-usage](#jq-formatted-json-output-usage).
+
+Usage:
+
+```bash
+>> store // Display the information of all stores
+{
+  "count": 3,
+  "stores": [...]
+}
+>> store 1 // Get the store with the store id of 1
+  ......
+>> store delete 1 // Delete the store with the store id of 1
+  ......
+>> store label 1 zone cn // Set the value of the label with the "zone" key to "cn" for the store with the store id of 1
+>> store weight 1 5 10 // Set the leader weight to 5 and the Region weight to 10 for the store with the store id of 1
+```
+
+### `table_ns [create | add | remove | set_store | rm_store | set_meta | rm_meta]`
+
+Use this command to view the namespace information of tables.
+
+Usage:
+
+```bash
+>> table_ns add ts1 1 // Add the table with the table id of 1 to the namespace named ts1
+>> table_ns create ts1 // Add the namespace named ts1
+>> table_ns remove ts1 1 // Remove the table with the table id of 1 from the namespace named ts1
+>> table_ns rm_meta ts1 // Remove the metadata from the namespace named ts1
+>> table_ns rm_store 1 ts1 // Remove the store with the store id of 1 from the namespace named ts1
+>> table_ns set_meta ts1 // Add the metadata to the namespace named ts1
+>> table_ns set_store 1 ts1 // Add the store with the store id of 1 to the namespace named ts1
+```
+
+### `tso`
+
+Use this command to parse the physical and logical time of a TSO.
+
+Usage:
+
+```bash
+>> tso 395181938313123110 // Parse TSO
+system: 2017-10-09 05:50:59 +0800 CST
+logic: 120102
+```
+
+## Jq formatted JSON output usage
+
+### Simplify the output of `store`
+
+```bash
+» store --jq=".stores[].store | { id, address, state_name}"
+{"id":1,"address":"127.0.0.1:20161","state_name":"Up"}
+{"id":30,"address":"127.0.0.1:20162","state_name":"Up"}
+...
+```
+
+### Query the remaining space of the node
+
+```bash
+» store --jq=".stores[] | {id: .store.id, available: .status.available}"
+{"id":1,"available":"10 GiB"}
+{"id":30,"available":"10 GiB"}
+...
+```
+
+### Query the distribution status of the Region replicas
+
+```bash
+» region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id]}"
+{"id":2,"peer_stores":[1,30,31]}
+{"id":4,"peer_stores":[1,31,34]}
+...
+```
+
+### Filter Regions according to the number of replicas
+
+For example, to filter out all Regions whose number of replicas is not 3:
+
+```bash
+» region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(length != 3)}"
+{"id":12,"peer_stores":[30,32]}
+{"id":2,"peer_stores":[1,30,31,32]}
+```
+
+### Filter Regions according to the store ID of replicas
+
+For example, to filter out all Regions that have a replica on store30:
+
+```bash
+» region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(any(.==30))}"
+{"id":6,"peer_stores":[1,30,31]}
+{"id":22,"peer_stores":[1,30,32]}
+...
+```
+
+You can also find out all Regions that have a replica on store30 or store31 in the same way:
+
+```bash
+» region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(any(.==(30,31)))}"
+{"id":16,"peer_stores":[1,30,34]}
+{"id":28,"peer_stores":[1,30,32]}
+{"id":12,"peer_stores":[30,32]}
+...
+```
+
+### Look for relevant Regions when restoring data
+
+For example, when stores 1, 30 and 31 are down and cannot be recovered, you can find all Regions whose Down replicas outnumber the normal replicas:
+
+```bash
+» region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(length as $total | map(if .==(1,30,31) then . else empty end) | length>=$total-length) }"
+{"id":2,"peer_stores":[1,30,31,32]}
+{"id":12,"peer_stores":[30,32]}
+{"id":14,"peer_stores":[1,30,32]}
+...
+```
+
+Or when stores 1, 30 and 31 fail to start, you can find the Regions whose data can be safely removed from store 1 manually.
In this way, you can filter out all Regions that have a replica on store1 but don't have other DownPeers:
+
+```bash
+» region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(length>1 and any(.==1) and all(.!=(30,31)))}"
+{"id":24,"peer_stores":[1,32,33]}
+```
+
+When stores 30 and 31 are down, find out all Regions that can be safely processed by creating the `remove-peer` operator, that is, Regions with one and only one DownPeer:
+
+```bash
+» region --jq=".regions[] | {id: .id, remove_peer: [.peers[].store_id] | select(length>1) | map(if .==(30,31) then . else empty end) | select(length==1)}"
+{"id":12,"remove_peer":[30]}
+{"id":4,"remove_peer":[31]}
+{"id":22,"remove_peer":[30]}
+...
+```
\ No newline at end of file
diff --git a/v2.0/tools/pd-recover.md b/v2.0/tools/pd-recover.md
new file mode 100755
index 0000000000000..0b24dec7837e6
--- /dev/null
+++ b/v2.0/tools/pd-recover.md
@@ -0,0 +1,47 @@
+---
+title: PD Recover User Guide
+summary: Use PD Recover to recover a PD cluster which cannot start or provide services normally.
+category: tools
+---
+
+# PD Recover User Guide
+
+PD Recover is a disaster recovery tool of PD, used to recover a PD cluster which cannot start or provide services normally.
+
+## Source code compiling
+
+1. Install [Go](https://golang.org/) version 1.9 or later.
+2. In the root directory of the [PD project](https://github.com/pingcap/pd), use the `make` command to compile and generate `bin/pd-recover`.
+
+## Usage
+
+This section describes how to recover a PD cluster which cannot start or provide services normally.
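+
+For example, a recovery invocation might look like the following (the endpoint, `cluster-id`, and `alloc-id` values here are hypothetical placeholders; substitute the values obtained in the recovery flow below):
+
+```bash
+./bin/pd-recover -endpoints http://10.0.1.13:2379 -cluster-id 6152812508475531931 -alloc-id 10000
+```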
+
+### Flags description
+
+```
+-alloc-id uint
+    Specify a number larger than the largest allocated ID of the original cluster
+-cacert string
+    Specify the path to the trusted CA certificate file in PEM format
+-cert string
+    Specify the path to the SSL certificate file in PEM format
+-key string
+    Specify the path to the SSL certificate key file in PEM format, which is the private key of the certificate specified by `--cert`
+-cluster-id uint
+    Specify the Cluster ID of the original cluster
+-endpoints string
+    Specify the PD address (default: "http://127.0.0.1:2379")
+```
+
+### Recovery flow
+
+1. Obtain the Cluster ID and the Alloc ID from the current cluster.
+
+    - Obtain the Cluster ID from the PD, TiKV and TiDB logs.
+    - Obtain the allocated Alloc ID from either the PD log or the `Metadata Information` in the PD monitoring panel.
+
+    The `alloc-id` you specify must be larger than the current largest Alloc ID. If you fail to obtain the Alloc ID, you can estimate a larger number according to the number of Regions and Stores in the cluster. Generally, you can specify a number that is several orders of magnitude larger.
+2. Stop the whole cluster, clear the PD data directory, and restart the PD cluster.
+3. Use PD Recover to recover the cluster, and make sure that you use the correct `cluster-id` and an appropriate `alloc-id`.
+4. When the recovery success message is displayed, restart the whole cluster.
diff --git a/v2.0/tools/syncer.md b/v2.0/tools/syncer.md
new file mode 100755
index 0000000000000..7e0823c6c91ad
--- /dev/null
+++ b/v2.0/tools/syncer.md
@@ -0,0 +1,524 @@
+---
+title: Syncer User Guide
+summary: Use Syncer to import data incrementally to TiDB.
+category: advanced
+---
+
+# Syncer User Guide
+
+## About Syncer
+
+Syncer is a tool used to import data incrementally. It is a part of the TiDB enterprise toolset. To obtain Syncer, see [Download the TiDB enterprise toolset](#download-the-tidb-enterprise-toolset-linux).
+
+## Syncer architecture
+
+![syncer architecture](../media/syncer_architecture.png)
+
+## Download the TiDB enterprise toolset (Linux)
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-enterprise-tools-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz
+cd tidb-enterprise-tools-latest-linux-amd64
+```
+
+## Where to deploy Syncer
+
+You can deploy Syncer on any machine that can connect to both MySQL and the TiDB cluster, but it is recommended to deploy Syncer within the TiDB cluster.
+
+## Use Syncer to import data incrementally
+
+Before importing data, read [Check before importing data using Syncer](#check-before-importing-data-using-syncer).
+
+### 1. Set the position to synchronize
+
+Edit the meta file of Syncer, assuming the meta file is `syncer.meta`:
+
+```bash
+# cat syncer.meta
+binlog-name = "mysql-bin.000003"
+binlog-pos = 930143241
+binlog-gtid = "2bfabd22-fff7-11e6-97f7-f02fa73bcb01:1-23,61ccbb5d-c82d-11e6-ac2e-487b6bd31bf7:1-4"
+```
+
+> **Note:**
+>
+> - The `syncer.meta` file only needs to be configured once, when Syncer is first used. The position is then updated automatically as subsequent binlogs are synchronized.
+> - If you use the binlog position to synchronize, you only need to configure `binlog-name` and `binlog-pos`; if you use `binlog-gtid` to synchronize, you need to configure `binlog-gtid` and set `--enable-gtid` when starting Syncer.
+
+### 2.
Start Syncer
+
+Description of Syncer command line options:
+
+```
+Usage of Syncer:
+  -L string
+      log level: debug, info, warn, error, fatal (default "info")
+  -V
+      to print Syncer version info (default false)
+  -auto-fix-gtid
+      to automatically fix the gtid info when the MySQL master and slave switch (default false)
+  -b int
+      the size of batch transactions (default 10)
+  -c int
+      the number of batch threads that Syncer processes (default 16)
+  -config string
+      to specify the corresponding configuration file when starting Syncer; for example, `--config config.toml`
+  -enable-gtid
+      to start Syncer in the GTID mode; default false; before enabling this option, you need to enable GTID in the upstream MySQL
+  -log-file string
+      to specify the log file directory, such as `--log-file ./syncer.log`
+  -log-rotate string
+      to specify the log file rotating cycle, hour/day (default "day")
+  -meta string
+      to specify the meta file of the Syncer upstream (defaults to "syncer.meta" in the same directory as the configuration file)
+  -server-id int
+      to specify the MySQL slave server-id (default 101)
+  -status-addr string
+      to specify the Syncer metrics address, such as `--status-addr 127.0.0.1:10088`
+```
+
+The `config.toml` configuration file of Syncer:
+
+```toml
+log-level = "info"
+
+server-id = 101
+
+# The file path for meta:
+meta = "./syncer.meta"
+
+worker-count = 16
+batch = 10
+
+# The testing address for pprof. It can also be used by Prometheus to pull Syncer metrics.
+# Change "127.0.0.1" to the IP address of the corresponding host.
+status-addr = "127.0.0.1:10086"
+
+# Note: skip-sqls is deprecated; use skip-ddls instead.
+# skip-ddls skips the DDL statements that are incompatible with TiDB, and supports regular expressions.
+# skip-ddls = ["^CREATE\\s+USER"]
+
+# Note: skip-events is deprecated; use skip-dmls instead.
+# skip-dmls skips the DML statements. The type value can be 'insert', 'update' and 'delete'.
+
+# The 'delete' statements that skip-dmls skips in the foo.bar table:
+# [[skip-dmls]]
+# db-name = "foo"
+# tbl-name = "bar"
+# type = "delete"
+#
+# The 'delete' statements that skip-dmls skips in all tables:
+# [[skip-dmls]]
+# type = "delete"
+#
+# The 'delete' statements that skip-dmls skips in all foo.* tables:
+# [[skip-dmls]]
+# db-name = "foo"
+# type = "delete"
+
+# Specify the database name to be synchronized. Regular expressions are supported; start the value with '~' to use a regular expression.
+# replicate-do-db = ["~^b.*","s1"]
+
+# Specify the db.table to be synchronized.
+# db-name and tbl-name do not support the `db-name ="dbname, dbname2"` format.
+# [[replicate-do-table]]
+# db-name ="dbname"
+# tbl-name = "table-name"
+
+# [[replicate-do-table]]
+# db-name ="dbname1"
+# tbl-name = "table-name1"
+
+# Specify the db.table to be synchronized. Regular expressions are supported; start the value with '~' to use a regular expression.
+# [[replicate-do-table]]
+# db-name ="test"
+# tbl-name = "~^a.*"
+
+# Specify the database you want to ignore in synchronization. Regular expressions are supported; start the value with '~' to use a regular expression.
+# replicate-ignore-db = ["~^b.*","s1"]
+
+# Specify the database table you want to ignore in synchronization.
+# db-name and tbl-name do not support the `db-name ="dbname, dbname2"` format.
+# [[replicate-ignore-table]]
+# db-name = "your_db"
+# tbl-name = "your_table"
+
+# Specify the database table you want to ignore in synchronization. Regular expressions are supported; start the value with '~' to use a regular expression.
+# [[replicate-ignore-table]]
+# db-name ="test"
+# tbl-name = "~^a.*"
+
+# The sharding synchronizing rules support wildcard characters.
+# 1. The asterisk character ("*", also called "star") matches zero or more characters.
+#    For example, "doc*" matches "doc" and "document" but not "dodo".
+#    The asterisk character must be at the end of the wildcard word,
+#    and there is only one asterisk in one wildcard word.
+# 2.
The question mark ("?") matches any single character. +# [[route-rules]] +# pattern-schema = "route_*" +# pattern-table = "abc_*" +# target-schema = "route" +# target-table = "abc" + +# [[route-rules]] +# pattern-schema = "route_*" +# pattern-table = "xyz_*" +# target-schema = "route" +# target-table = "xyz" + +[from] +host = "127.0.0.1" +user = "root" +password = "" +port = 3306 + +[to] +host = "127.0.0.1" +user = "root" +password = "" +port = 4000 +``` + +Start Syncer: + +```bash +./bin/syncer -config config.toml + +2016/10/27 15:22:01 binlogsyncer.go:226: [info] begin to sync binlog from position (mysql-bin.000003, 1280) +2016/10/27 15:22:01 binlogsyncer.go:130: [info] register slave for master server 127.0.0.1:3306 +2016/10/27 15:22:01 binlogsyncer.go:552: [info] rotate to (mysql-bin.000003, 1280) +2016/10/27 15:22:01 syncer.go:549: [info] rotate binlog to (mysql-bin.000003, 1280) +``` + +### 3. Insert data into MySQL + +```sql +INSERT INTO t1 VALUES (4, 4), (5, 5); +``` + +### 4. Log in to TiDB and view the data + +```sql +mysql -h127.0.0.1 -P4000 -uroot -p +mysql> select * from t1; ++----+------+ +| id | age | ++----+------+ +| 1 | 1 | +| 2 | 2 | +| 3 | 3 | +| 4 | 4 | +| 5 | 5 | ++----+------+ +``` + +Syncer outputs the current synchronized data statistics every 30 seconds: + +```bash +2017/06/08 01:18:51 syncer.go:934: [info] [syncer]total events = 15, total tps = 130, recent tps = 4, +master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74, +syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-17 +2017/06/08 01:19:21 syncer.go:934: [info] [syncer]total events = 15, total tps = 191, recent tps = 2, +master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-74, +syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-35 +``` + +The update in MySQL is automatically synchronized in TiDB. 
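These periodic statistics also make it possible to estimate replication lag from the log alone. The snippet below is a rough sketch (not part of Syncer; the log line is a trimmed sample modeled on the output format above) that extracts the master and syncer binlog positions from one statistics line and prints the gap when both refer to the same binlog file:

```shell
# Trimmed sample of a Syncer statistics line (normally read from the Syncer log).
line='2017/06/08 01:18:51 syncer.go:934: [info] [syncer]total events = 15, total tps = 130, recent tps = 4, master-binlog = (ON.000001, 11992), syncer-binlog = (ON.000001, 2504)'

# Extract the binlog file and the two positions with sed back-references.
master_file=$(printf '%s' "$line" | sed -n 's/.*master-binlog = (\([^,]*\), \([0-9]*\)).*/\1/p')
master_pos=$(printf '%s' "$line" | sed -n 's/.*master-binlog = (\([^,]*\), \([0-9]*\)).*/\2/p')
syncer_pos=$(printf '%s' "$line" | sed -n 's/.*syncer-binlog = (\([^,]*\), \([0-9]*\)).*/\2/p')

# When both point at the same file, the difference is the unreplicated byte range.
echo "file=$master_file gap=$((master_pos - syncer_pos))"   # prints: file=ON.000001 gap=9488
```

When the master and syncer entries name different binlog files, the position difference alone is not meaningful; the `syncer_binlog_file` metric described later covers that case.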
+
+## Description of Syncer configuration
+
+### Specify the database to be synchronized
+
+This section describes the priority of parameters when you use Syncer to synchronize the database.
+
+- To use the route-rules, see [Support for synchronizing data from sharded tables](#support-for-synchronizing-data-from-sharded-tables).
+- Priority: replicate-do-db --> replicate-do-table --> replicate-ignore-db --> replicate-ignore-table
+
+```toml
+# Specify the ops database to be synchronized.
+# Specify to synchronize the databases starting with ti.
+replicate-do-db = ["ops","~^ti.*"]
+
+# The "china" database includes multiple tables such as guangzhou, shanghai and beijing. You only need to synchronize the shanghai and beijing tables.
+# Specify to synchronize the shanghai table in the "china" database.
+[[replicate-do-table]]
+db-name ="china"
+tbl-name = "shanghai"
+
+# Specify to synchronize the beijing table in the "china" database.
+[[replicate-do-table]]
+db-name ="china"
+tbl-name = "beijing"
+
+# The "ops" database includes multiple tables such as ops_user, ops_admin, weekly. You only need to synchronize the ops_user table.
+# Because replicate-do-db has a higher priority than replicate-do-table, setting only the ops_user table here does not take effect. In fact, the whole "ops" database is synchronized.
+[[replicate-do-table]]
+db-name ="ops"
+tbl-name = "ops_user"
+
+# The "history" database includes multiple tables such as 2017_01 2017_02 ... 2017_12/2016_01 2016_02 ... 2016_12. You only need to synchronize the tables of 2017.
+[[replicate-do-table]]
+db-name ="history"
+tbl-name = "~^2017_.*"
+
+# Ignore the "ops" and "fault" databases in synchronization.
+# Ignore the databases starting with "www" in synchronization.
+# Because replicate-do-db has a higher priority than replicate-ignore-db, ignoring the "ops" database here does not take effect.
+replicate-ignore-db = ["ops","fault","~^www"]
+
+# The "fault" database includes multiple tables such as faults, user_feedback, ticket.
+# Ignore the user_feedback table in synchronization.
+# Because replicate-ignore-db has a higher priority than replicate-ignore-table, setting only the user_feedback table to be ignored does not take effect. In fact, the whole "fault" database is ignored.
+[[replicate-ignore-table]]
+db-name = "fault"
+tbl-name = "user_feedback"
+
+# The "order" database includes multiple tables such as 2017_01 2017_02 ... 2017_12/2016_01 2016_02 ... 2016_12. You need to ignore the tables of 2016.
+[[replicate-ignore-table]]
+db-name ="order"
+tbl-name = "~^2016_.*"
+```
+
+### Support for synchronizing data from sharded tables
+
+You can use Syncer to import data from sharded tables into one table within one database according to the `route-rules`. But before synchronizing, you need to check:
+
+- Whether the sharding rules can be represented using the `route-rules` syntax.
+- Whether the sharded tables contain unique increasing primary keys, or whether conflicts exist in the unique indexes or the primary keys after the combination.
+
+Currently, the support for DDL is still in progress.
+
+![syncer sharding](../media/syncer_sharding.png)
+
+#### Usage of synchronizing data from sharded tables
+
+1. Start Syncer in all MySQL instances and configure the route-rules.
+2. In scenarios using replicate-do-db & replicate-ignore-db and route-rules at the same time, you need to specify the target-schema & target-table content in route-rules.
+
+```toml
+# The scenarios are as follows:
+# MySQL instance A includes multiple databases such as order_2016 and history_2016.
+# MySQL instance B includes multiple databases such as order_2017 and history_2017.
+# Specify to synchronize order_2016 in instance A; the data tables are 2016_01 2016_02 ... 2016_12
+# Specify to synchronize order_2017 in instance B; the data tables are 2017_01 2017_02 ...
2017_12
+# Use order_id as the primary key in the table, and the primary keys among data do not conflict.
+# Ignore the history_2016 and history_2017 databases in synchronization.
+# The target database is "order" and the target data tables are order_2017 and order_2016.
+
+# When Syncer finds that the route-rules is enabled after Syncer gets the upstream data, it first combines databases and tables, and then determines do-db & do-table.
+# You need to configure the database to be synchronized, which is required when you determine the target-schema & target-table.
+[[replicate-do-table]]
+db-name ="order"
+tbl-name = "order_2016"
+
+[[replicate-do-table]]
+db-name ="order"
+tbl-name = "order_2017"
+
+[[route-rules]]
+pattern-schema = "order_2016"
+pattern-table = "2016_??"
+target-schema = "order"
+target-table = "order_2016"
+
+[[route-rules]]
+pattern-schema = "order_2017"
+pattern-table = "2017_??"
+target-schema = "order"
+target-table = "order_2017"
+```
+
+### Check before importing data using Syncer
+
+1. Check the `server-id` of the source database.
+
+    - Check the `server-id` using the following command:
+
+    ```
+    mysql> show global variables like 'server_id';
+    +---------------+-------+
+    | Variable_name | Value |
+    +---------------+-------+
+    | server_id     | 1     |
+    +---------------+-------+
+    1 row in set (0.01 sec)
+    ```
+
+    - If the result is null or 0, Syncer cannot synchronize data.
+    - The Syncer `server-id` must be different from the MySQL `server-id`, and must be unique in the MySQL cluster.
+
+2. Check the related binlog parameters.
+
+    - Check whether the binlog is enabled in MySQL using the following command:
+
+    ```
+    mysql> show global variables like 'log_bin';
+    +---------------+-------+
+    | Variable_name | Value |
+    +---------------+-------+
+    | log_bin       | ON    |
+    +---------------+-------+
+    1 row in set (0.00 sec)
+    ```
+
+    - If the result is `log_bin = OFF`, you need to enable the binlog.
See the [document about enabling the binlog](https://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterbaseconfig.html).
+
+3. Check whether the binlog format in MySQL is ROW.
+
+    - Check the binlog format using the following command:
+
+    ```
+    mysql> show global variables like 'binlog_format';
+    +---------------+-------+
+    | Variable_name | Value |
+    +---------------+-------+
+    | binlog_format | ROW   |
+    +---------------+-------+
+    1 row in set (0.00 sec)
+    ```
+
+    - If the binlog format is not ROW, set it to ROW using the following command:
+
+    ```
+    mysql> set global binlog_format=ROW;
+    mysql> flush logs;
+    Query OK, 0 rows affected (0.01 sec)
+    ```
+
+    - If there are existing MySQL connections, it is recommended to restart MySQL or kill all connections, because existing connections still use the old binlog format.
+
+4. Check whether the MySQL `binlog_row_image` is FULL.
+
+    - Check `binlog_row_image` using the following command:
+
+    ```
+    mysql> show global variables like 'binlog_row_image';
+    +------------------+-------+
+    | Variable_name    | Value |
+    +------------------+-------+
+    | binlog_row_image | FULL  |
+    +------------------+-------+
+    1 row in set (0.01 sec)
+    ```
+
+    - If the result of `binlog_row_image` is not FULL, set it to FULL using the following command:
+
+    ```
+    mysql> set global binlog_row_image = FULL;
+    Query OK, 0 rows affected (0.01 sec)
+    ```
+
+5. Check the user privileges required by mydumper.
+
+    - To export data using mydumper, the user must have the `select` and `reload` privileges.
+    - When the operation object is RDS, you can add the `--no-locks` option to avoid requiring the `reload` privilege.
+
+6. Check the user privileges required for synchronizing the upstream and downstream data.
+
+    - The upstream MySQL synchronization account must be granted at least the following privileges:
+
+       `select, replication slave, replication client`
+
+    - For the downstream TiDB, you can temporarily use the root account with the same privileges.
+
+7. Check the GTID and POS related information.
+
+    Check the binlog information using the following statement:
+
+    ```
+    show binlog events in 'mysql-bin.000023' from 136676560 limit 10;
+    ```
+
+## Syncer monitoring solution
+
+The `syncer` monitoring solution contains the following components:
+
+- Prometheus, an open source time series database, used to store the monitoring and performance metrics
+- Grafana, an open source project for analyzing and visualizing metrics, used to display the performance metrics
+- AlertManager, combined with Grafana to implement the alerting mechanism
+
+See the following diagram:
+
+![syncer_monitor_scheme](../media/syncer_monitor_scheme.png)
+
+### Configure Syncer monitor and alert
+
+Syncer exposes a metrics interface and relies on Prometheus to actively pull the data. Take the following steps to configure the Syncer monitor and alert:
+
+1. To add the Syncer job information to Prometheus, add the following content to the configuration file of Prometheus. The monitor takes effect after you restart Prometheus.
+
+    ```yaml
+    - job_name: 'syncer_ops' # name of the job, to distinguish the reported data
+      static_configs:
+        - targets: ['10.1.1.4:10086'] # Syncer monitoring address and port, from which Prometheus pulls the monitoring data of Syncer
+    ```
+
+2. To configure the Prometheus [alert](https://prometheus.io/docs/alerting/alertmanager/), add the following content to the `alert.rule` configuration file. The alert takes effect after you restart Prometheus.
+
+    ```
+    # syncer
+    ALERT syncer_status
+    IF syncer_binlog_file{node='master'} - ON(instance, job) syncer_binlog_file{node='syncer'} > 1
+    FOR 1m
+    LABELS {channels="alerts", env="test-cluster"}
+    ANNOTATIONS {
+    summary = "syncer status error",
+    description="alert: syncer_binlog_file{node='master'} - ON(instance, job) syncer_binlog_file{node='syncer'} > 1 instance: {{ $labels.instance }} values: {{ $value }}",
+    }
+    ```
+
+#### Configure Grafana
+
+1. Log in to the Grafana Web interface.
+
+    - The default address is: http://localhost:3000
+    - The default account name: admin
+    - The password for the default account: admin
+
+2. Import the configuration file of the Grafana dashboard.
+
+    Click the Grafana logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+
+### Description of Grafana Syncer metrics
+
+#### title: binlog events
+
+- metrics: `irate(syncer_binlog_events_total[1m])`
+- info: statistics of the master binlog events that have been synchronized by Syncer, including the five major types of `query`, `rotate`, `update_rows`, `write_rows` and `delete_rows`
+
+#### title: syncer_binlog_file
+
+- metrics: `syncer_binlog_file`
+- info: the number of master binlog files synchronized by Syncer
+
+#### title: binlog pos
+
+- metrics: `syncer_binlog_pos`
+- info: the position that Syncer has currently synchronized to in the master binlog
+
+#### title: syncer_gtid
+
+- metrics: `syncer_gtid`
+- info: the GTID that Syncer has currently synchronized to in the master binlog
+
+#### title: syncer_binlog_file
+
+- metrics: `syncer_binlog_file{node="master"} - ON(instance, job) syncer_binlog_file{node="syncer"}`
+- info: the difference in binlog file count between the upstream and the downstream during synchronization; the normal value is 0, which indicates real-time synchronization; a larger value indicates a larger binlog file discrepancy
+
+#### title: binlog skipped events
+
+- metrics: `irate(syncer_binlog_skipped_events_total[1m])`
+- info: the total number of SQL statements that Syncer skips when the upstream synchronizes binlog files with the downstream; you can configure the format of the SQL statements skipped by Syncer using the `skip-sqls` parameter in the `syncer.toml` file.
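+
+As a sketch of the `skip-sqls` setting mentioned above (the patterns below are illustrative examples, not defaults), the corresponding fragment of `syncer.toml` could look like:
+
+```toml
+# Statements matching these regular expressions are skipped during
+# synchronization and are counted by syncer_binlog_skipped_events_total.
+# The patterns are examples only; adjust them to your workload.
+skip-sqls = ["^FLUSH", "^GRANT"]
+```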
+
+#### title: syncer_txn_costs_gauge_in_second
+
+- metrics: `syncer_txn_costs_gauge_in_second`
+- info: the time consumed by Syncer when it processes one batch (unit: second)
\ No newline at end of file
diff --git a/v2.0/tools/tidb-binlog-kafka.md b/v2.0/tools/tidb-binlog-kafka.md
new file mode 100755
index 0000000000000..eafcd2aa12f63
--- /dev/null
+++ b/v2.0/tools/tidb-binlog-kafka.md
@@ -0,0 +1,435 @@
+---
+title: TiDB-Binlog user guide
+summary: Learn how to deploy the Kafka version of TiDB-Binlog.
+category: tool
+---
+
+# TiDB-Binlog User Guide
+
+This document describes how to deploy the Kafka version of TiDB-Binlog. If you need to deploy the local version of TiDB-Binlog, see the [TiDB-Binlog user guide for the local version](tidb-binlog.md).
+
+## About TiDB-Binlog
+
+TiDB-Binlog is a tool for enterprise users to collect binlog files for TiDB and provide real-time backup and synchronization.
+
+TiDB-Binlog supports the following scenarios:
+
+- **Data synchronization**: to synchronize TiDB cluster data to other databases
+- **Real-time backup and recovery**: to back up TiDB cluster data, and recover in case of cluster outages
+
+## TiDB-Binlog architecture
+
+The TiDB-Binlog architecture is as follows:
+
+![TiDB-Binlog architecture](../media/tidb_binlog_kafka_architecture.png)
+
+The TiDB-Binlog cluster mainly consists of three components:
+
+### Pump
+
+Pump is a daemon that runs in the background on each TiDB host. Its main function is to record the binlog files generated by TiDB in real time and write them to files on disk sequentially.
+
+### Drainer
+
+Drainer collects the binlog files from each Pump node, converts them into SQL statements compatible with the specified database in the commit order of the transactions in TiDB, and synchronizes them to the target database or writes them to files sequentially.
+
+### Kafka & ZooKeeper
+
+The Kafka cluster stores the binlog data written by Pump and provides the binlog data to Drainer for reading.
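+
+The Kafka layer above is a standard Kafka deployment. As an illustrative sketch (the values are assumptions, not requirements from this guide), a broker `server.properties` fragment for this role might look like:
+
+```ini
+# Illustrative broker settings; see the recommended Kafka parameter
+# configuration later in this guide for the documented settings.
+broker.id=1                      # must be unique per broker in the cluster
+auto.create.topics.enable=true   # create binlog topics on demand
+log.retention.hours=168          # retain binlog data long enough for Drainer to catch up
+```
+
+`log.retention.hours` is a standard Kafka broker setting; choose a retention comfortably longer than the worst-case Drainer downtime so that no binlog data is dropped before it is consumed.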
+
+> **Note:** In the local version of TiDB-Binlog, the binlog is stored in files, while in the latest version, the binlog is stored using Kafka.
+
+## Install TiDB-Binlog
+
+### Download Binary for the CentOS 7.3+ platform
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-binlog-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-binlog-latest-linux-amd64.tar.gz
+cd tidb-binlog-latest-linux-amd64
+```
+
+## Deploy TiDB-Binlog
+
+### Note
+
+- You need to deploy a Pump for each TiDB server in the TiDB cluster. Currently, the TiDB server only supports writing the binlog to a UNIX socket.
+
+- When you deploy a Pump manually, to start the services, follow the order of Pump -> TiDB; to stop the services, follow the order of TiDB -> Pump.
+
+    Set the TiDB startup parameter `binlog-socket` to the unix socket file path specified by the corresponding Pump parameter `socket`. The final deployment architecture is as follows:
+
+    ![TiDB Pump deployment architecture](../media/tidb_pump_deployment.jpeg)
+
+- Drainer does not support renaming DDL statements on tables of the ignored schemas (schemas in the filter list).
+
+- To start Drainer in an existing TiDB cluster, usually you need to do a full backup, get the savepoint, import the full backup, and then start Drainer to synchronize from the savepoint.
+
+    To guarantee the integrity of data, perform the following operations 10 minutes after Pump is started:
+
+    - Use [binlogctl](https://github.com/pingcap/tidb-tools/tree/master/tidb_binlog/binlogctl) of the [tidb-tools](https://github.com/pingcap/tidb-tools) project to generate the `position` for the initial start of Drainer.
+    - Do a full backup. For example, back up TiDB using mydumper.
+    - Import the full backup to the target system.
+    - The savepoint metadata of the Kafka version of Drainer is stored in the `checkpoint` table of the downstream database `tidb_binlog` by default. If no valid data exists in the `checkpoint` table, configure `initial-commit-ts` to make Drainer work from the specified position when it is started:
+
+        ```
+        bin/drainer --config=conf/drainer.toml --initial-commit-ts=${position}
+        ```
+
+- If Drainer outputs `pb`, you need to set the following parameters in the configuration file:
+
+    ```
+    [syncer]
+    db-type = "pb"
+    disable-dispatch = true
+
+    [syncer.to]
+    dir = "/path/pb-dir"
+    ```
+
+- If Drainer outputs `kafka`, you need to set the following parameters in the configuration file:
+
+    ```
+    [syncer]
+    db-type = "kafka"
+
+    # When db-type is kafka, you can uncomment this to configure the downstream Kafka; otherwise, it defaults to the same Kafka addresses that Drainer pulls the binlog from.
+    # [syncer.to]
+    # kafka-addrs = "127.0.0.1:9092"
+    # kafka-version = "0.8.2.0"
+    ```
+
+    The data output to Kafka is in the protobuf-defined binlog format, sorted by ts. See [driver](https://github.com/pingcap/tidb-tools/tree/master/tidb_binlog/driver) for how to access the data and synchronize it to the downstream.
+
+- Deploy the Kafka and ZooKeeper clusters before deploying TiDB-Binlog. Make sure that the Kafka version is 0.9 or later.
+
+#### Recommended Kafka cluster configuration
+
+| Name | Number | Memory size | CPU | Hard disk |
+| :---: | :---: | :---: | :---: | :---: |
+| Kafka | 3+ | 16G | 8+ | 2+ 1TB |
+| ZooKeeper | 3+ | 8G | 4+ | 2+ 300G |
+
+#### Recommended Kafka parameter configuration
+
+- `auto.create.topics.enable = true`: if no topic exists, Kafka automatically creates a topic on the broker.
+- `broker.id`: a required parameter to identify the Kafka cluster. Keep the parameter value unique across brokers. For example, `broker.id = 1`.
+- `fs.file-max = 1000000`: Kafka uses a lot of files and network sockets, so it is recommended to increase this kernel parameter to 1000000.
Change the value using `vi /etc/sysctl.conf`. + +### Deploy Pump using TiDB-Ansible + +- If you have not deployed the Kafka cluster, use the [Kafka-Ansible](https://github.com/pingcap/thirdparty-ops/tree/master/kafka-ansible) to deploy. +- When you deploy the TiDB cluster using [TiDB-Ansible](https://github.com/pingcap/tidb-ansible), edit the `tidb-ansible/inventory.ini` file, set `enable_binlog = True`, and configure the `zookeeper_addrs` variable as the ZooKeeper address of the Kafka cluster. In this way, Pump is deployed while you deploy the TiDB cluster. + +Configuration example: + +``` +# binlog trigger +enable_binlog = True +# ZooKeeper address of the Kafka cluster. Example: +# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181" +# You can also append an optional chroot string to the URLs to specify the root directory for all Kafka znodes. Example: +# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181/kafka/123" +zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181" +``` + +### Deploy Pump using Binary + +A usage example: + +Assume that we have three PDs, three ZooKeepers, and one TiDB. The information of each node is as follows: + +``` +TiDB="192.168.0.10" +PD1="192.168.0.16" +PD2="192.168.0.15" +PD3="192.168.0.14" +ZK1="192.168.0.13" +ZK2="192.168.0.12" +ZK3="192.168.0.11" +``` + +Deploy Drainer/Pump on the machine with the IP address "192.168.0.10". + +The IP address of the corresponding PD cluster is "192.168.0.16,192.168.0.15,192.168.0.14". + +The ZooKeeper IP address of the corresponding Kafka cluster is "192.168.0.13,192.168.0.12,192.168.0.11". + +This example describes how to use Pump/Drainer. + +1. 
Description of Pump command line options
+
+    ```
+    Usage of Pump:
+    -L string
+        log level: debug, info, warn, error, fatal (default "info")
+    -V
+        to print Pump version info
+    -addr string
+        the RPC address that Pump provides service (-addr="192.168.0.10:8250")
+    -advertise-addr string
+        the RPC address that Pump provides external service (-advertise-addr="192.168.0.10:8250")
+    -config string
+        the configuration file path of Pump; if you specify the configuration file, Pump reads it first; if the same configuration also exists in the command line arguments, Pump uses the command line configuration to override that in the configuration file
+    -data-dir string
+        the path of storing Pump data
+    -enable-tolerant
+        after tolerant is enabled, Pump does not return an error if it fails to write the binlog (default true)
+    -zookeeper-addrs string (-zookeeper-addrs="192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181")
+        the ZooKeeper address; this option gets the Kafka address from ZooKeeper; keep it the same as the configuration in Kafka
+    -gc int
+        the maximum number of days that the binlog is retained (default 7); 0 means retaining the binlog permanently
+    -heartbeat-interval int
+        the interval between heartbeats that Pump sends to PD (unit: second)
+    -log-file string
+        the path of the log file
+    -log-rotate string
+        the log file rotating frequency (hour/day)
+    -metrics-addr string
+        the Prometheus pushgateway address; leaving it empty disables the Prometheus push
+    -metrics-interval int
+        the frequency of reporting monitoring information (default 15, unit: second)
+    -pd-urls string
+        the node address of the PD cluster (-pd-urls="http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379")
+    -socket string
+        the monitoring address of the unix socket service (default "unix:///tmp/pump.sock")
+    ```
+
+2. Pump configuration file
+
+    ```toml
+    # Pump configuration.
+
+    # the RPC address that Pump provides service (default "192.168.0.10:8250")
+    addr = "192.168.0.10:8250"
+
+    # the RPC address that Pump provides external service (default "192.168.0.10:8250")
+    advertise-addr = ""
+
+    # an integer value to control the expiry date of the binlog data; indicates how long (in days) the binlog data is stored
+    # (setting it to 0 means the binlog data is never removed)
+    gc = 7
+
+    # the path of storing Pump data
+    data-dir = "data.pump"
+
+    # the ZooKeeper address; you can set this option to get the Kafka address from ZooKeeper; if the namespace is configured in Kafka, you need to keep the same configuration here
+    zookeeper-addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
+    # example of the ZooKeeper address that configures the namespace (uncomment it to use it instead of the address above)
+    # zookeeper-addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181/kafka/123"
+
+    # the interval between heartbeats that Pump sends to PD (unit: second)
+    heartbeat-interval = 3
+
+    # the node address of the PD cluster
+    pd-urls = "http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379"
+
+    # the monitoring address of the unix socket service (default "unix:///tmp/pump.sock")
+    socket = "unix:///tmp/pump.sock"
+    ```
+
+3. Startup example
+
+    ```bash
+    ./bin/pump -config pump.toml
+    ```
+
+### Deploy Drainer using Binary
+
+1.
Description of Drainer command line arguments
+
+    ```
+    Usage of Drainer:
+    -L string
+        log level: debug, info, warn, error, fatal (default "info")
+    -V
+        to print Drainer version info
+    -addr string
+        the address that Drainer provides service (default "192.168.0.10:8249")
+    -c int
+        the number of concurrent workers that synchronize to the downstream; a bigger value means better throughput performance (default 1)
+    -config string
+        the configuration file path of Drainer; if you specify the configuration file, Drainer reads it first; if the same configuration also exists in the command line arguments, Drainer uses the command line configuration to override that in the configuration file
+    -data-dir string
+        the path of storing Drainer data (default "data.drainer")
+    -zookeeper-addrs string (-zookeeper-addrs="192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181")
+        the ZooKeeper address; you can set this option to get the Kafka address from ZooKeeper; keep it the same as the configuration in Kafka
+    -dest-db-type string
+        the downstream service type of Drainer (default "mysql")
+    -detect-interval int
+        the interval of detecting Pump's status from PD (default 10, unit: second)
+    -disable-dispatch
+        whether to disable dispatching the SQL statements in a single binlog; if set to true, each binlog is restored into a single transaction and synchronized in binlog order (if the downstream service type is "mysql", set it to false)
+    -ignore-schemas string
+        the DB filtering list (default "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql,test"); the rename DDL operation is not supported on tables of the ignored schemas
+    -initial-commit-ts (default 0)
+        if Drainer has no breakpoint information, use this option to configure the position to start synchronization from
+    -log-file string
+        the path of the log file
+    -log-rotate string
+        the log file rotating frequency (hour/day)
+    -metrics-addr string
+        the Prometheus pushgateway address;
leaving it empty disables Prometheus push
+    -metrics-interval int
+        the frequency of reporting monitoring information (default 15, unit: second)
+    -pd-urls string
+        the node address of the PD cluster (-pd-urls="http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379")
+    -txn-batch int
+        the number of SQL statements in a single transaction that is output to the downstream database (default 1)
+    ```
+
+2. Drainer configuration file
+
+    ```toml
+    # Drainer configuration
+
+    # the address that Drainer provides service (default "192.168.0.10:8249")
+    addr = "192.168.0.10:8249"
+
+    # the interval of detecting Pump's status from PD (default 10, unit: second)
+    detect-interval = 10
+
+    # the path of storing Drainer data (default "data.drainer")
+    data-dir = "data.drainer"
+
+    # the ZooKeeper address; you can use this option to get the Kafka address from ZooKeeper; if the namespace is configured in Kafka, you need to keep the same configuration here
+    zookeeper-addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
+    # example of the ZooKeeper address that configures the namespace (uncomment it to use it instead of the address above)
+    # zookeeper-addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181/kafka/123"
+
+    # the node address of the PD cluster
+    pd-urls = "http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379"
+
+    # the path of the log file
+    log-file = "drainer.log"
+
+    # Syncer configuration.
+
+    [syncer]
+
+    # the DB filtering list (default "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql,test")
+    # the rename DDL operation is not supported on tables of the ignored schemas
+    ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"
+
+    # the number of SQL statements in a single transaction that is output to the downstream database (default 1)
+    txn-batch = 1
+
+    # the number of concurrent workers that synchronize to the downstream; a bigger value means better throughput performance (default 1)
+    worker-count = 1
+
+    # whether to disable dispatching the SQL statements in a single binlog;
+    # if set to true, each binlog is restored into a single transaction and synchronized in binlog order (if the downstream service type is "mysql", set it to false)
+    disable-dispatch = false
+
+    # the downstream service type of Drainer (default "mysql")
+    # valid values: "mysql", "pb"
+    db-type = "mysql"
+
+    # replicate-do-db has priority over replicate-do-table if they have the same db name.
+    # Regular expressions are supported; an expression starts with '~'.
+
+    # replicate-do-db = ["~^b.*","s1"]
+
+    # [[syncer.replicate-do-table]]
+    # db-name = "test"
+    # tbl-name = "log"
+
+    # [[syncer.replicate-do-table]]
+    # db-name = "test"
+    # tbl-name = "~^a.*"
+
+    # server parameters of the downstream database when the db-type is set to "mysql"
+    [syncer.to]
+    host = "192.168.0.10"
+    user = "root"
+    password = ""
+    port = 3306
+
+    # the directory of the binlog file when the db-type is set to "pb"
+    # [syncer.to]
+    # dir = "data.drainer"
+    ```
+
+3. Startup example
+
+    ```bash
+    ./bin/drainer -config drainer.toml
+    ```
+
+## Download PbReader (Linux)
+
+PbReader parses the pb file generated by Drainer and translates it into SQL statements.
+
+CentOS 7+
+
+```bash
+# Download the PbReader package.
+wget http://download.pingcap.org/pb_reader-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/pb_reader-latest-linux-amd64.sha256
+
+# Check the file integrity.
If the result is OK, the file is correct.
+sha256sum -c pb_reader-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf pb_reader-latest-linux-amd64.tar.gz
+cd pb_reader-latest-linux-amd64
+```
+
+A PbReader usage example:
+
+```bash
+./bin/pbReader -binlog-file=${path}/binlog-0000000000000000
+```
+
+## Monitor TiDB-Binlog
+
+This section introduces how to monitor TiDB-Binlog's status and performance, and display the metrics using Prometheus and Grafana.
+
+### Configure Pump/Drainer
+
+For the Pump service deployed using Ansible, the metrics startup parameters are already set.
+
+When you start Drainer, set the two parameters of `--metrics-addr` and `--metrics-interval`. Set `--metrics-addr` as the address of Push Gateway. Set `--metrics-interval` as the frequency of the push (default 15 seconds).
+
+### Configure Grafana
+
+#### Create a Prometheus data source
+
+1. Log in to the Grafana Web interface.
+
+    - The default address is: [http://localhost:3000](http://localhost:3000)
+
+    - The default account name: admin
+
+    - The password for the default account: admin
+
+2. Click the Grafana logo to open the sidebar menu.
+
+3. Click "Data Sources" in the sidebar.
+
+4. Click "Add data source".
+
+5. Specify the data source information:
+
+    - Specify the name for the data source.
+    - For Type, select Prometheus.
+    - For Url, specify the Prometheus address.
+    - Specify other fields as needed.
+
+6. Click "Add" to save the new data source.
+
+#### Create a Grafana dashboard
+
+1. Click the Grafana logo to open the sidebar menu.
+
+2. On the sidebar menu, click "Dashboards" -> "Import" to open the "Import Dashboard" window.
+
+3. Click "Upload .json File" to upload a JSON file (Download [TiDB Grafana Config](https://grafana.com/tidb)).
+
+4. Click "Save & Open". A Prometheus dashboard is created.
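+
+On the Prometheus side, a sketch of the scrape configuration for this setup (the Pushgateway address below is an assumption; use the address you passed to `--metrics-addr`):
+
+```yaml
+# Hypothetical scrape job for the Pushgateway that Pump and Drainer push to.
+- job_name: 'binlog_pushgateway'
+  honor_labels: true   # keep the job/instance labels set by the pushers
+  static_configs:
+    - targets: ['192.168.0.10:9091']   # placeholder Pushgateway address
+```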
diff --git a/v2.0/tools/tidb-binlog.md b/v2.0/tools/tidb-binlog.md
new file mode 100755
index 0000000000000..4d9d5985fdd96
--- /dev/null
+++ b/v2.0/tools/tidb-binlog.md
@@ -0,0 +1,346 @@
+---
+title: TiDB-Binlog user guide
+summary: Learn how to install, deploy and monitor TiDB-Binlog.
+category: tool
+---
+
+# TiDB-Binlog User Guide
+
+## About TiDB-Binlog
+
+TiDB-Binlog is a tool for enterprise users to collect binlog files for TiDB and provide real-time backup and synchronization.
+
+TiDB-Binlog supports the following scenarios:
+
+- **Data synchronization**: to synchronize TiDB cluster data to other databases
+- **Real-time backup and recovery**: to back up TiDB cluster data, and recover in case of cluster outages
+
+## TiDB-Binlog architecture
+
+The TiDB-Binlog architecture is as follows:
+
+![TiDB-Binlog architecture](../media/architecture.jpeg)
+
+The TiDB-Binlog cluster mainly consists of two components:
+
+### Pump
+
+Pump is a daemon that runs in the background on each TiDB host. Its main function is to record the binlog files generated by TiDB in real time and write them to files on disk sequentially.
+
+### Drainer
+
+Drainer collects the binlog files from each Pump node, converts them into SQL statements compatible with the specified database in the commit order of the transactions in TiDB, and synchronizes them to the target database or writes them to files sequentially.
+
+## Install TiDB-Binlog
+
+### Download Binary for the CentOS 7.3+ platform
+
+```bash
+# Download the tool package.
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.tar.gz
+wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.sha256
+
+# Check the file integrity. If the result is OK, the file is correct.
+sha256sum -c tidb-binlog-latest-linux-amd64.sha256
+
+# Extract the package.
+tar -xzf tidb-binlog-latest-linux-amd64.tar.gz
+cd tidb-binlog-latest-linux-amd64
+```
+
+### Deploy TiDB-Binlog
+
+- It is recommended to deploy Pump using Ansible.
+
+- Build a new TiDB cluster with a startup order of pd-server -> tikv-server -> pump -> tidb-server -> drainer.
+    - Edit the `tidb-ansible/inventory.ini` file:
+
+        ```ini
+        enable_binlog = True
+        ```
+
+    - Run `ansible-playbook deploy.yml`
+    - Run `ansible-playbook start.yml`
+
+- Deploy Binlog for an existing TiDB cluster.
+    - Edit the `tidb-ansible/inventory.ini` file:
+
+        ```ini
+        enable_binlog = True
+        ```
+
+    - Run `ansible-playbook rolling_update.yml`
+
+### Note
+
+- You need to deploy a Pump for each TiDB server in a TiDB cluster. Currently, the TiDB server only supports writing the binlog to a UNIX socket.
+
+    Set the TiDB startup parameter `binlog-socket` to the unix socket file path specified by the corresponding Pump parameter `socket`. The final deployment architecture is as follows:
+
+    ![TiDB pump deployment architecture](../media/tidb_pump_deployment.jpeg)
+
+- Currently, you need to deploy Drainer manually.
+
+- Drainer does not support renaming DDL statements on tables of the ignored schemas (schemas in the filter list).
+
+- To start Drainer in an existing TiDB cluster, usually you need to do a full backup, get the savepoint, import the full backup, and then start Drainer to synchronize from the savepoint.
+
+- To guarantee the integrity of data, perform the following operations 10 minutes after Pump is started:
+
+    - Run Drainer in the `gen-savepoint` mode and generate the Drainer savepoint file:
+
+        ```
+        bin/drainer -gen-savepoint --data-dir=${drainer_savepoint_dir} --pd-urls=${pd_urls}
+        ```
+
+    - Do a full backup. For example, back up TiDB using mydumper.
+    - Import the full backup to the target system.
+    - Set the file path of the savepoint and start Drainer:
+
+        ```
+        bin/drainer --config=conf/drainer.toml --data-dir=${drainer_savepoint_dir}
+        ```
+
+- If Drainer outputs `pb`, you need to set the following parameters in the configuration file:
+
+    ```
+    [syncer]
+    db-type = "pb"
+    disable-dispatch = true
+
+    [syncer.to]
+    dir = "/path/pb-dir"
+    ```
+
+### Examples and parameters explanation
+
+#### Pump
+
+Example
+
+```bash
+./bin/pump -config pump.toml
+```
+
+Parameters Explanation
+
+```
+Usage of Pump:
+-L string
+    log level: debug, info, warn, error, fatal (default "info")
+-V
+    print Pump version info
+-addr string
+    addr(i.e. 'host:port') to listen on for client traffic (default "127.0.0.1:8250")
+-advertise-addr string
+    addr(i.e. 'host:port') to advertise to the public
+-config string
+    path to the Pump configuration file
+-data-dir string
+    path to store binlog data
+-gc int
+    recycle binlog files older than gc days, zero means never recycle (default 7)
+-heartbeat-interval int
+    number of seconds between heartbeat ticks (default 2)
+-log-file string
+    log file path
+-log-rotate string
+    log file rotate type, hour/day
+-metrics-addr string
+    Prometheus pushgateway address; leaving it empty will disable Prometheus push
+-metrics-interval int
+    Prometheus client push interval in second, set "0" to disable Prometheus push (default 15)
+-pd-urls string
+    a comma separated list of the PD endpoints (default "http://127.0.0.1:2379")
+-socket string
+    unix socket addr to listen on for client traffic
+```
+
+Configuration file
+
+```
+# Pump Configuration.
+
+# addr(i.e. 'host:port') to listen on for client traffic
+addr = "127.0.0.1:8250"
+
+# addr(i.e. 'host:port') to advertise to the public
+advertise-addr = ""
+
+# an integer value to control the expiry date of the binlog data; indicates for how long (in days) the binlog data would be stored
+
+# (setting it to 0 means the binlog data is never removed)
+gc = 7
+
+# path to the data directory of Pump's data
+data-dir = "data.pump"
+
+# number of seconds between heartbeat ticks (default 2)
+heartbeat-interval = 2
+
+# a comma separated list of PD endpoints
+pd-urls = "http://127.0.0.1:2379"
+
+# unix socket addr to listen on for client traffic
+socket = "unix:///tmp/pump.sock"
+```
+
+#### Drainer
+
+Example
+
+```bash
+./bin/drainer -config drainer.toml
+```
+
+Parameters Explanation
+
+```
+Usage of Drainer:
+-L string
+    log level: debug, info, warn, error, fatal (default "info")
+-V
+    print version info
+-addr string
+    addr (i.e. 'host:port') to listen on for Drainer connections (default "127.0.0.1:8249")
+-c int
+    parallel worker count (default 1)
+-config string
+    path to the configuration file
+-data-dir string
+    Drainer data directory path (default "data.drainer")
+-dest-db-type string
+    target db type: mysql or pb; see the syncer section in conf/drainer.toml (default "mysql")
+-detect-interval int
+    the interval time (in seconds) of detecting Pumps' status (default 10)
+-disable-dispatch
+    disable dispatching the SQL statements that are in the same binlog; if set to true, worker-count and txn-batch are ignored
+-gen-savepoint
+    generate the savepoint from the cluster
+-ignore-schemas string
+    disable synchronizing those schemas (default "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql")
+-log-file string
+    log file path
+-log-rotate string
+    log file rotate type, hour/day
+-metrics-addr string
+    Prometheus pushgateway address; leaving it empty will disable Prometheus push
+-metrics-interval int
+    Prometheus client push interval in second, set "0" to disable Prometheus push (default 15)
+-pd-urls string
+    a comma separated list of PD endpoints (default "http://127.0.0.1:2379")
+-txn-batch int
+    number of binlog events in a transaction batch (default 1)
+```
+
+Configuration file
+
+```
+# Drainer Configuration
+
+# addr (i.e.
'host:port') to listen on for Drainer connections
+addr = "127.0.0.1:8249"
+
+# the interval time (in seconds) of detecting Pumps' status
+detect-interval = 10
+
+# Drainer metadata directory path
+data-dir = "data.drainer"
+
+# a comma separated list of PD endpoints
+pd-urls = "http://127.0.0.1:2379"
+
+# the log file path
+log-file = "drainer.log"
+
+# syncer Configuration
+[syncer]
+
+# disable synchronizing these schemas
+ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"
+
+# number of binlog events in a transaction batch
+txn-batch = 1
+
+# worker count to execute binlogs
+worker-count = 1
+
+disable-dispatch = false
+
+# downstream storage, equal to --dest-db-type
+# valid values are "mysql", "pb"
+db-type = "mysql"
+
+# replicate-do-db has priority over replicate-do-table if they have the same db name.
+# Regular expressions are supported; starting with '~' declares the use of a regular expression.
+# replicate-do-db = ["~^b.*","s1"]
+
+# [[syncer.replicate-do-table]]
+# db-name = "test"
+# tbl-name = "log"
+
+# [[syncer.replicate-do-table]]
+# db-name = "test"
+# tbl-name = "~^a.*"
+
+# the downstream mysql protocol database
+[syncer.to]
+host = "127.0.0.1"
+user = "root"
+password = ""
+port = 3306
+
+# uncomment this if you want to use pb as the db-type
+# [syncer.to]
+# dir = "data.drainer"
+```
+
+## Monitor TiDB-Binlog
+
+This section introduces how to monitor TiDB-Binlog's status and performance, and display the metrics using Prometheus and Grafana.
+
+### Configure Pump/Drainer
+
+For the Pump service deployed using Ansible, the metrics startup parameters are already set.
+
+When you start Drainer, set the two parameters of `--metrics-addr` and `--metrics-interval`. Set `--metrics-addr` as the address of Push Gateway. Set `--metrics-interval` as the frequency of the push (default 15 seconds).
+
+### Configure Grafana
+
+#### Create a Prometheus data source
+
+1. Log in to the Grafana Web interface.
+ + - The default address is: [http://localhost:3000](http://localhost:3000) + + - The default account name: admin + + - The password for the default account: admin + +2. Click the Grafana logo to open the sidebar menu. + +3. Click "Data Sources" in the sidebar. + +4. Click "Add data source". + +5. Specify the data source information: + + - Specify the name for the data source. + + - For Type, select Prometheus. + + - For Url, specify the Prometheus address. + + - Specify other fields as needed. + +6. Click "Add" to save the new data source. + +#### Create a Grafana dashboard + +1. Click the Grafana logo to open the sidebar menu. + +2. On the sidebar menu, click "Dashboards" -> "Import" to open the "Import Dashboard" window. + +3. Click "Upload .json File" to upload a JSON file (Download [TiDB Grafana Config](https://grafana.com/tidb)). + +4. Click "Save & Open". + +5. A Prometheus dashboard is created. \ No newline at end of file diff --git a/v2.0/tools/tidb-controller.md b/v2.0/tools/tidb-controller.md new file mode 100755 index 0000000000000..996d5d5b16ade --- /dev/null +++ b/v2.0/tools/tidb-controller.md @@ -0,0 +1,111 @@ +--- +title: TiDB Controller User Guide +summary: Use TiDB Controller to obtain TiDB status information for debugging. +category: tools +--- + +# TiDB Controller User Guide + +TiDB Controller is a command line tool of TiDB, usually used to obtain the status information of TiDB for debugging. + +## Compile from source code + +- Compilation environment requirement: [Go](https://golang.org/) Version 1.7 or later +- Compilation procedures: Go to the root directory of the [TiDB Controller project](https://github.com/pingcap/tidb-ctl), use the `make` command to compile, and generate `tidb-ctl`. +- Compilation documentation: you can find the help files in the `doc` directory; if the help files are lost or you want to update them, use the `make doc` command to generate the help files. 

## Usage introduction

The usage of `tidb-ctl` consists of commands (including subcommands), options, and flags.

- command: characters without `-` or `--`
- option: characters with `-` or `--`
- flag: characters exactly following a command or option, passing a value to that command or option

Usage example: `tidb-ctl schema in mysql -n db`

- `schema`: the command
- `in`: the subcommand of schema
- `mysql`: the flag of `in`
- `-n`: the option
- `db`: the flag of `-n`

### Get help

Use `tidb-ctl -h/--help` to get the help information. `tidb-ctl` consists of multiple layers of commands. You can use `-h/--help` to get the help information of `tidb-ctl` and all of its subcommands.

### Connect

```
tidb-ctl -H/--host {TiDB service address} -P/--port {TiDB service port}
```

If you do not specify an address or a port, the default values are used. The default address is `127.0.0.1` (the service address must be an IP address); the default port is `10080`. Connection options are top-level options and apply to all of the following commands.

Currently, TiDB Controller can obtain four categories of information using the following four commands:

- `tidb-ctl mvcc`: MVCC information
- `tidb-ctl region`: Region information
- `tidb-ctl schema`: Schema information
- `tidb-ctl table`: Table information

### Examples

The following example shows how to obtain the schema information:

Use `tidb-ctl schema -h` to get the help information of the subcommands. `schema` has two subcommands: `in` and `tid`.

- `in` is used to obtain the table schema of all tables in a database through the database name.
- `tid` is used to obtain the table schema through the `table_id`, which is unique in the whole database.

#### The `in` command

You can also use `tidb-ctl schema in -h/--help` to get the help information of the `in` subcommand.

##### Basic usage

```
tidb-ctl schema in {database name}
```

For example, `tidb-ctl schema in mysql` returns the following result:

```text
[
    {
        "id": 13,
        "name": {
            "O": "columns_priv",
            "L": "columns_priv"
        },
        ...
        "update_timestamp": 399494726837600268,
        "ShardRowIDBits": 0,
        "Partition": null
    }
]
```

The output is long and displayed in JSON format; the result above is truncated.

- If you want to specify the table name, use `tidb-ctl schema in {database} -n {table name}` to filter.

    For example, `tidb-ctl schema in mysql -n db` returns the table schema of the `db` table in the `mysql` database:

    ```text
    {
        "id": 9,
        "name": {
            "O": "db",
            "L": "db"
        },
        ...
        "Partition": null
    }
    ```

    This result is also truncated.

- If you want to specify the server address, use the `-H` and `-P` options.

    For example, `tidb-ctl -H 127.0.0.1 -P 10080 schema in mysql -n db`.

diff --git a/v2.0/tools/tikv-control.md b/v2.0/tools/tikv-control.md
new file mode 100755
index 0000000000000..c183beb76a857
--- /dev/null
+++ b/v2.0/tools/tikv-control.md
@@ -0,0 +1,257 @@

---
title: TiKV Control User Guide
summary: Use TiKV Control to manage a TiKV cluster.
category: tools
---

# TiKV Control User Guide

TiKV Control (`tikv-ctl`) is a command line tool of TiKV, used to manage the cluster.

When you compile TiKV, the `tikv-ctl` command is also compiled at the same time. If the cluster is deployed using Ansible, the `tikv-ctl` binary file exists in the corresponding `tidb-ansible/resources/bin` directory. If the cluster is deployed using the binary package, the `tikv-ctl` file is in the `bin` directory together with other files such as `tidb-server`, `pd-server`, and `tikv-server`.
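The two deployment layouts described above can be probed with a small lookup helper. This is only a sketch: the `find_tikv_ctl` function and the deployment root path are hypothetical names for illustration.

```shell
#!/bin/sh
# Sketch: probe the two common locations of the tikv-ctl binary under a
# deployment root (hypothetical helper; layouts match the text above).
find_tikv_ctl() {
    root=$1
    for candidate in "$root/tidb-ansible/resources/bin/tikv-ctl" "$root/bin/tikv-ctl"; do
        if [ -x "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
    return 1
}

find_tikv_ctl /path/to/deployment || echo "tikv-ctl not found under /path/to/deployment"
```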

## General options

`tikv-ctl` provides two operation modes:

- Remote mode: use the `--host` option to accept the service address of TiKV as the argument

    For this mode, if SSL is enabled in TiKV, `tikv-ctl` also needs to specify the related certificate files. For example:

    ```
    $ tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:21060
    ```

    However, sometimes `tikv-ctl` communicates with PD instead of TiKV. In this case, you need to use the `--pd` option instead of `--host`. Here is an example:

    ```
    $ tikv-ctl --pd 127.0.0.1:2379 compact-cluster
    store:"127.0.0.1:20160" compact db:KV cf:default range:([], []) success!
    ```

- Local mode: use the `--db` option to specify the local TiKV data directory path

Unless otherwise noted, all commands support both the remote mode and the local mode.

Additionally, `tikv-ctl` has two simple commands, `--to-hex` and `--to-escaped`, which convert a key between its hexadecimal and escaped forms.

Generally, use the `escaped` form of the key. For example:

```bash
$ tikv-ctl --to-escaped 0xaaff
\252\377
$ tikv-ctl --to-hex "\252\377"
AAFF
```

> **Note:** When you specify the `escaped` form of the key in a command line, you need to enclose it in double quotes. Otherwise, bash eats the backslashes and a wrong result is returned.

## Subcommands, some options and flags

This section describes the subcommands that `tikv-ctl` supports in detail. Some subcommands support many options. For all details, run `tikv-ctl --help`.

### View information of the Raft state machine

Use the `raft` subcommand to view the status of the Raft state machine at a specific moment. The status information includes two parts: three structs (**RegionLocalState**, **RaftLocalState**, and **RegionApplyState**) and the corresponding Entries of a certain piece of log.

Use the `region` and `log` subcommands to obtain the above information respectively.
Both subcommands support the remote mode and the local mode. Their usage and output are as follows:

```bash
$ tikv-ctl --host 127.0.0.1:21060 raft region -r 2
region id: 2
region state key: \001\003\000\000\000\000\000\000\000\002\001
region state: Some(region {id: 2 region_epoch {conf_ver: 3 version: 1} peers {id: 3 store_id: 1} peers {id: 5 store_id: 4} peers {id: 7 store_id: 6}})
raft state key: \001\002\000\000\000\000\000\000\000\002\002
raft state: Some(hard_state {term: 307 vote: 5 commit: 314617} last_index: 314617)
apply state key: \001\002\000\000\000\000\000\000\000\002\003
apply state: Some(applied_index: 314617 truncated_state {index: 313474 term: 151})
```

### View the Region size

Use the `size` command to view the Region size:

```bash
$ tikv-ctl --db /path/to/tikv/db size -r 2
region id: 2
cf default region size: 799.703 MB
cf write region size: 41.250 MB
cf lock region size: 27616
```

### Scan to view MVCC of a specific range

The `--from` and `--to` options of the `scan` command accept the escaped form of raw keys; use the `--show-cf` flag to specify the column families that you need to view.

```bash
$ tikv-ctl --db /path/to/tikv/db scan --from 'zm' --limit 2 --show-cf lock,default,write
key: zmBootstr\377a\377pKey\000\000\377\000\000\373\000\000\000\000\000\377\000\000s\000\000\000\000\000\372
 write cf value: start_ts: 399650102814441473 commit_ts: 399650102814441475 short_value: "20"
key: zmDB:29\000\000\377\000\374\000\000\000\000\000\000\377\000H\000\000\000\000\000\000\371
 write cf value: start_ts: 399650105239273474 commit_ts: 399650105239273475 short_value: "\000\000\000\000\000\000\000\002"
 write cf value: start_ts: 399650105199951882 commit_ts: 399650105213059076 short_value: "\000\000\000\000\000\000\000\001"
```

### View MVCC of a given key

Similar to the `scan` command, the `mvcc` command can be used to view the MVCC of a given key.

```bash
$ tikv-ctl --db /path/to/tikv/db mvcc -k "zmDB:29\000\000\377\000\374\000\000\000\000\000\000\377\000H\000\000\000\000\000\000\371" --show-cf=lock,write,default
key: zmDB:29\000\000\377\000\374\000\000\000\000\000\000\377\000H\000\000\000\000\000\000\371
 write cf value: start_ts: 399650105239273474 commit_ts: 399650105239273475 short_value: "\000\000\000\000\000\000\000\002"
 write cf value: start_ts: 399650105199951882 commit_ts: 399650105213059076 short_value: "\000\000\000\000\000\000\000\001"
```

In this command, the key is also in the escaped form of the raw key.

### Print a specific key value

To print the value of a key, use the `print` command.

### Print some properties about a Region

In order to record Region state details, TiKV writes some statistics into the SST files of Regions. To view these properties, run `tikv-ctl` with the `region-properties` subcommand:

```bash
$ tikv-ctl --host localhost:20160 region-properties -r 2
num_files: 0
num_entries: 0
num_deletes: 0
mvcc.min_ts: 18446744073709551615
mvcc.max_ts: 0
mvcc.num_rows: 0
mvcc.num_puts: 0
mvcc.num_versions: 0
mvcc.max_row_versions: 0
middle_key_by_approximate_size:
```

These properties can be used to check whether a Region is healthy. If it is not, you can use them to fix the Region, for example, by splitting the Region manually based on `middle_key_by_approximate_size`.

### Compact data of each TiKV manually

Use the `compact` command to manually compact the data of each TiKV. If you specify the `--from` and `--to` options, their flags are also in the escaped form of the raw key. You can use the `-d` option to specify which RocksDB to compact; the optional values are `kv` and `raft`. The `--threads` option allows you to specify the compaction concurrency; its default value is 8. Generally, a higher concurrency means faster compaction, but it might also affect the service.
You need to choose an appropriate concurrency based on your scenario.

```bash
$ tikv-ctl --db /path/to/tikv/db compact -d kv
success!
```

### Compact data of the whole TiKV cluster manually

Use the `compact-cluster` command to manually compact the data of the whole TiKV cluster. The flags of this command have the same meanings and usage as those of the `compact` command.

### Set a Region to tombstone

The `tombstone` command is usually used in circumstances where sync-log is not enabled and some data written in the Raft state machine is lost due to power failure.

In a TiKV instance, you can use this command to set the status of some Regions to Tombstone. Then when you restart the instance, those Regions are skipped. Those Regions need to have enough healthy replicas in other TiKV instances to be able to continue reading and writing through the Raft mechanism.

```bash
pd-ctl>> operator add remove-peer
$ tikv-ctl --db /path/to/tikv/db tombstone -p 127.0.0.1:2379 -r 2
success!
```

> **Note:**
>
> - This command only supports the local mode.
> - The argument of the `--pd/-p` option specifies the PD endpoints, without the `http` prefix. The PD endpoints are specified so that PD can be queried about whether the Region can safely switch to Tombstone. Therefore, before setting a Region to Tombstone, you need to remove the corresponding peer of this Region in `pd-ctl` first.

### Send a `consistency-check` request to TiKV

Use the `consistency-check` command to execute a consistency check among the replicas in the Raft group of a specific Region. If the check fails, TiKV itself panics. If the TiKV instance specified by `--host` is not the Region leader, an error is reported.

```bash
$ tikv-ctl --host 127.0.0.1:21060 consistency-check -r 2
success!
$ tikv-ctl --host 127.0.0.1:21061 consistency-check -r 2
DebugClient::check_region_consistency: RpcFailure(RpcStatus { status: Unknown, details: Some("StringError(\"Leader is on store 1\")") })
```

> **Note:**
>
> - This command only supports the remote mode.
> - Even if this command returns `success!`, you need to check whether TiKV panics. This is because this command is only a proposal that requests a consistency check from the leader, and you cannot know from the client whether the whole check process is successful or not.

### Dump snapshot meta

This subcommand is used to parse a snapshot meta file at a given path and print the result.

### Print the Regions where the Raft state machine corrupts

To avoid checking the Regions when TiKV is started, you can use the `tombstone` command to set the Regions where the Raft state machine reports an error to Tombstone. Before running this command, use the `bad-regions` command to find out the Regions with errors, so as to combine multiple tools for automated processing.

```bash
$ tikv-ctl --db /path/to/tikv/db bad-regions
all regions are healthy
```

If the command is successfully executed, it prints the above information. If the command fails, it prints the list of bad Regions. Currently, the errors that can be detected include the mismatches between `last index`, `commit index`, and `apply index`, and the loss of Raft log. Other conditions, such as damaged snapshot files, still need further support.
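Since `bad-regions` prints either the health message or a list of problem Regions, it can feed the `tombstone` command in a script. The sketch below is only an illustration: the failure output lines are fabricated, so adapt the parsing to the format your `tikv-ctl` version actually prints.

```shell
#!/bin/sh
# Sketch: turn a bad-regions failure listing into tombstone commands.
# The sample output is fabricated for illustration; the healthy case is the
# literal message shown in the text above.
sample_output="region 1001 [mismatched indexes]
region 1002 [raft log is lost]"

build_tombstone_cmds() {
    if [ "$1" = "all regions are healthy" ]; then
        return 0
    fi
    # Extract the Region ID from each line and print the command to run.
    echo "$1" | awk '/^region/ {
        print "tikv-ctl --db /path/to/tikv/db tombstone -p 127.0.0.1:2379 -r " $2
    }'
}

build_tombstone_cmds "$sample_output"
```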

### View Region properties

- To view locally the properties of Region 2 on the TiKV instance deployed in `/path/to/tikv`:

    ```bash
    $ tikv-ctl --db /path/to/tikv/data/db region-properties -r 2
    ```

- To view online the properties of Region 2 on the TiKV instance running on `127.0.0.1:20160`:

    ```bash
    $ tikv-ctl --host 127.0.0.1:20160 region-properties -r 2
    ```

### Modify the RocksDB configuration of TiKV dynamically

You can use the `modify-tikv-config` command to dynamically modify the configuration arguments. Currently, it only supports dynamically modifying RocksDB related arguments.

- `-m` is used to specify the target RocksDB. You can set it to `kvdb` or `raftdb`.
- `-n` is used to specify the configuration name.
  You can refer to the arguments of `[rocksdb]` and `[raftdb]` (corresponding to `kvdb` and `raftdb`) in the [TiKV configuration template](https://github.com/pingcap/tikv/blob/master/etc/config-template.toml#L213-L500).
  You can use `{cf}.{argument name}` to specify the configuration of a specific CF. For `kvdb`, the CF can be `default`, `write`, or `lock`; for `raftdb`, it can only be `default`.
- `-v` is used to specify the configuration value.

```bash
$ tikv-ctl modify-tikv-config -m kvdb -n max_background_jobs -v 8
success!
$ tikv-ctl modify-tikv-config -m kvdb -n write.block-cache-size -v 256MB
success!
$ tikv-ctl modify-tikv-config -m raftdb -n default.disable_auto_compactions -v true
success!
```

### Force Region to recover the service from failure of multiple replicas

Use the `unsafe-recover remove-fail-stores` command to remove the failed machines from the peer list of Regions. After you restart TiKV, these Regions can continue to provide services using the other healthy replicas. This command is usually used in circumstances where multiple TiKV stores are damaged or deleted.

The `--stores` option accepts multiple `store_id`s separated by commas. Use the `--regions` flag to specify the involved Regions; otherwise, the peers located on these stores are removed from all Regions by default.

```bash
$ tikv-ctl --db /path/to/tikv/db unsafe-recover remove-fail-stores --stores 3 --regions 1001,1002
success!
```

> **Note:**
>
> - This command only supports the local mode. It prints `success!` when successfully run.
> - You must run this command for all stores where the specified Regions' peers are located. If `--regions` is not set, all Regions are involved, and you need to run this command for all stores.

### Recover from MVCC data corruption

Use the `recover-mvcc` command in circumstances where TiKV cannot run normally due to MVCC data corruption. It cross-checks the 3 CFs ("default", "write", "lock") to recover from various kinds of inconsistency.

Use the `--regions` option to specify the involved Regions by `region_id`. Use the `--pd` option to specify the PD endpoints.

```bash
$ tikv-ctl --db /path/to/tikv/db recover-mvcc --regions 1001,1002 --pd 127.0.0.1:2379
success!
```

> **Note:**
>
> - This command only supports the local mode. It prints `success!` when successfully run.
> - The argument of the `--pd/-p` option specifies the PD endpoints, without the `http` prefix. The PD endpoints are specified to query whether each specified `region_id` is valid or not.
> - You need to run this command for all stores where the specified Regions' peers are located.

diff --git a/v2.0/trouble-shooting.md b/v2.0/trouble-shooting.md
new file mode 100755
index 0000000000000..f9986f4bbc2ca
--- /dev/null
+++ b/v2.0/trouble-shooting.md
@@ -0,0 +1,108 @@

---
title: TiDB Cluster Troubleshooting Guide
summary: Learn how to diagnose and resolve issues when you use TiDB.
category: advanced
---

# TiDB Cluster Troubleshooting Guide

You can use this guide to help you diagnose and solve basic problems while using TiDB.
If your problem is not resolved, please collect the following information and [create an issue](https://github.com/pingcap/tidb/issues/new):

- The exact error message and the operations performed when the error occurred
- The state of all the components
- The `error` / `fatal` / `panic` information in the log of the component that reports the error
- The configuration and deployment topology
- The TiDB component related information in `dmesg`

For other information, see [Frequently Asked Questions (FAQ)](FAQ.md).

## Cannot connect to the database

1. Make sure all the services are started, including `tidb-server`, `pd-server`, and `tikv-server`.

2. Use the `ps` command to check if all the processes are running.

    - If a certain process is not running, see the corresponding sections below to diagnose and solve the issue.
    - If all the processes are running, check the `tidb-server` log to see if the following messages are displayed:

        - InformationSchema is out of date: this message is displayed if `tikv-server` cannot be connected. Check the state and log of `pd-server` and `tikv-server`.
        - panic: this message is displayed if there is an issue with the program. Please provide the detailed panic log and [create an issue](https://github.com/pingcap/tidb/issues/new).

3. If the data is cleared and the services are re-deployed, make sure that:

    - All the data in `tikv-server` and `pd-server` is cleared. The specific data is stored in `tikv-server` and the metadata is stored in `pd-server`. If only one of the two servers is cleared, the data will be inconsistent.
    - After the data in `pd-server` and `tikv-server` is cleared and the `pd-server` and `tikv-server` are restarted, the `tidb-server` must be restarted too. The cluster ID is randomly allocated when the `pd-server` is initialized. So when the cluster is re-deployed, the cluster ID changes and you need to restart the `tidb-server` to get the new cluster ID.
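The first two checks above can be scripted. The sketch below only assumes the standard binary names and the Linux `pgrep` utility:

```shell
#!/bin/sh
# Sketch: report whether each TiDB component has a running process.
# Assumes the standard binary names and that pgrep is available.
check_components() {
    for proc in tidb-server pd-server tikv-server; do
        if pgrep -x "$proc" > /dev/null 2>&1; then
            echo "$proc: running"
        else
            echo "$proc: NOT running"
        fi
    done
}

check_components
```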

## Cannot start `tidb-server`

See the following for the situations when `tidb-server` cannot be started:

- Error in the startup parameters: see the [TiDB configuration and options](op-guide/configuration.md#tidb).
- The port is occupied: use the `lsof -i:port` command to show all the networking related to a given port and make sure the port used to start `tidb-server` is not occupied.
- Cannot connect to `pd-server`:

    - Check if the network between TiDB and PD is running smoothly, including whether the network can be pinged and whether there is any issue with the firewall configuration.
    - If there is no issue with the network, check the state and log of the `pd-server` process.

## Cannot start `tikv-server`

See the following for the situations when `tikv-server` cannot be started:

- Error in the startup parameters: see the [TiKV configuration and options](op-guide/configuration.md#tikv).
- The port is occupied: use the `lsof -i:port` command to show all the networking related to a given port and make sure the port used to start `tikv-server` is not occupied.
- Cannot connect to `pd-server`:

    - Check if the network between TiKV and PD is running smoothly, including whether the network can be pinged and whether there is any issue with the firewall configuration.
    - If there is no issue with the network, check the state and log of the `pd-server` process.

- The file is occupied: do not start two TiKV instances on the same database file directory.

## Cannot start `pd-server`

See the following for the situations when `pd-server` cannot be started:

- Error in the startup parameters: see the [PD configuration and options](op-guide/configuration.md#placement-driver-pd).
- The port is occupied: use the `lsof -i:port` command to show all the networking related to a given port and make sure the port used to start `pd-server` is not occupied.

## The TiDB/TiKV/PD process aborts unexpectedly

- Is the process started in the foreground?
  The process might exit because the client aborts.

- Is the process run with `nohup` plus `&` directly in the command line? This might cause the process to abort because it receives the HUP signal. It is recommended to write the startup command in a script and run the script.

## TiDB panic

Please provide the panic log and [create an issue](https://github.com/pingcap/tidb/issues/new).

## The connection is rejected

Make sure the network parameters of the operating system are correct, including but not limited to:

- The port in the connection string is consistent with the `tidb-server` starting port.
- The firewall is configured correctly.

## Open too many files

Before starting the process, make sure the result of `ulimit -n` is large enough. It is recommended to set the value to `unlimited` or larger than `1000000`.

## Database access times out and the system load is too high

First, check the [SLOW-QUERY](./sql/slow-query.md) log and see if the problem is caused by an inappropriate SQL statement. If you cannot solve the problem, provide the following information:

+ The deployment topology:
    - How many `tidb-server`/`pd-server`/`tikv-server` instances are deployed?
    - How are these instances distributed across the machines?
+ The hardware configuration of the machines where these instances are deployed:
    - The number of CPU cores
    - The size of the memory
    - The type of the disk (SSD or Hard Drive Disk)
    - Are they physical machines or virtual machines?
- Are there other services besides the TiDB cluster?
- Are the `pd-server`s and `tikv-server`s deployed separately?
- What is the current operation?
- Check the CPU thread name using the `top -H` command.
- Are there any anomalies in the network or I/O monitoring data recently?
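Much of the host information requested above can be gathered with standard Linux utilities. A rough collection sketch (the exact commands available on your system are an assumption):

```shell
#!/bin/sh
# Sketch: collect basic host facts to attach to an issue report.
# Uses only standard Linux utilities; adjust for your environment.
collect_host_info() {
    echo "== open file limit =="
    ulimit -n
    echo "== CPU cores =="
    nproc 2>/dev/null || getconf _NPROCESSORS_ONLN
    echo "== total memory =="
    grep MemTotal /proc/meminfo 2>/dev/null || echo "unavailable"
}

collect_host_info
```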