Commit

Remove all TiDB Ansible related contents from dev and v5.0 (pingcap#4513)

* Remove all TiDB Ansible related contents from dev

* remove test docs

* Update upgrade-tidb-using-tiup.md

* Update best-practices/grafana-monitor-best-practices.md

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>

* Update faq/deploy-and-maintain-faq.md

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>
TomShawn and yikeke authored Dec 30, 2020
1 parent fac3e93 commit 3ec3cea
Showing 39 changed files with 58 additions and 2,623 deletions.
8 changes: 0 additions & 8 deletions TOC.md
@@ -11,9 +11,6 @@
+ [TiDB 4.0 Experimental Features](/experimental-features-4.0.md)
+ [Basic Features](/basic-features.md)
+ Benchmarks
+ [v4.0 Sysbench Performance Test](/benchmark/benchmark-sysbench-v4-vs-v3.md)
+ [v4.0 TPC-H Performance Test](/benchmark/v4.0-performance-benchmarking-with-tpch.md)
+ [v4.0 TPC-C Performance Test](/benchmark/v4.0-performance-benchmarking-with-tpcc.md)
+ [Interaction Test on Online Workloads and `ADD INDEX`](/benchmark/online-workloads-and-add-index-operations.md)
+ [MySQL Compatibility](/mysql-compatibility.md)
+ [TiDB Limitations](/tidb-limitations.md)
@@ -39,8 +36,6 @@
+ [Use TiUP (Recommended)](/production-deployment-using-tiup.md)
+ [Use TiUP Offline (Recommended)](/production-offline-deployment-using-tiup.md)
+ [Deploy in Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/stable)
+ [Use TiDB Ansible](/online-deployment-using-ansible.md)
+ [Use TiDB Ansible Offline](/offline-deployment-using-ansible.md)
+ [Verify Cluster Status](/post-installation-check.md)
+ Migrate
+ [Overview](/migration-overview.md)
@@ -56,10 +51,8 @@
+ [Use TiUP (Recommended)](/upgrade-tidb-using-tiup.md)
+ [Use TiUP Offline (Recommended)](/upgrade-tidb-using-tiup-offline.md)
+ [Use TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/v1.1/upgrade-a-tidb-cluster)
+ [Use TiDB Ansible](/upgrade-tidb-using-ansible.md)
+ Scale
+ [Use TiUP (Recommended)](/scale-tidb-using-tiup.md)
+ [Use TiDB Ansible](/scale-tidb-using-ansible.md)
+ [Use TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/v1.1/scale-a-tidb-cluster)
+ Backup and Restore
+ Use BR Tool (Recommended)
@@ -73,7 +66,6 @@
+ [Daily Checklist](/daily-check.md)
+ [Maintain TiFlash](/tiflash/maintain-tiflash.md)
+ [Maintain TiDB Using TiUP](/maintain-tidb-using-tiup.md)
+ [Maintain TiDB Using Ansible](/maintain-tidb-using-ansible.md)
+ [Modify Configuration Online](/dynamic-config.md)
+ Monitor and Alert
+ [Monitoring Framework Overview](/tidb-monitoring-framework.md)
6 changes: 3 additions & 3 deletions best-practices/grafana-monitor-best-practices.md
@@ -6,7 +6,7 @@ aliases: ['/docs/dev/best-practices/grafana-monitor-best-practices/','/docs/dev/

# Best Practices for Monitoring TiDB Using Grafana

When you [deploy a TiDB cluster using TiDB Ansible](/online-deployment-using-ansible.md), a set of [Grafana + Prometheus monitoring platform](/tidb-monitoring-framework.md) is deployed simultaneously to collect and display metrics for various components and machines in the TiDB cluster. This document describes best practices for monitoring TiDB using Grafana. It aims to help you use metrics to analyze the status of the TiDB cluster and diagnose problems.
When you [deploy a TiDB cluster using TiUP](/production-deployment-using-tiup.md) with Grafana and Prometheus included in the topology configuration, a [Grafana + Prometheus monitoring platform](/tidb-monitoring-framework.md) is deployed at the same time to collect and display metrics for the various components and machines in the TiDB cluster. This document describes best practices for monitoring TiDB using Grafana, aiming to help you use metrics to analyze the cluster status and diagnose problems.
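For reference, a minimal sketch of how such monitoring nodes are declared in the TiUP topology file (host addresses are illustrative):

```yaml
# topology.yaml excerpt: co-deploy Prometheus and Grafana with the cluster
monitoring_servers:
  - host: 10.0.1.10   # Prometheus
grafana_servers:
  - host: 10.0.1.10   # Grafana
```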

## Monitoring architecture

@@ -17,7 +17,7 @@ When you [deploy a TiDB cluster using TiDB Ansible](/online-deployment-using-ans
For TiDB 2.1.3 or later versions, TiDB monitoring supports the pull method. It is a good adjustment with the following benefits:

- There is no need to restart the entire TiDB cluster if you need to migrate Prometheus. Before adjustment, migrating Prometheus requires restarting the entire cluster because the target address needs to be updated.
- You can deploy 2 separate sets of Grafana + Prometheus monitoring platforms (not highly available) to prevent a single point of monitoring. To do this, execute the deployment command of TiDB ansible twice with different IP addresses.
- You can deploy 2 separate sets of Grafana + Prometheus monitoring platforms (not highly available) to prevent a single point of monitoring.
- The Pushgateway which might become a single point of failure is removed.

## Source and display of monitoring data
@@ -203,4 +203,4 @@ curl -u user:pass 'http://__grafana_ip__:3000/api/datasources/proxy/1/api/v1/que

## Summary

The Grafana + Prometheus monitoring platform is a very powerful tool. Making good use of it can improve efficiency, saving you a lot of time on analyzing the status of the TiDB cluster. More importantly, it can help you diagnose problems. This tool is very useful in the operation and maintenance of TiDB clusters, especially when there is a large amount of data.
2 changes: 1 addition & 1 deletion dashboard/dashboard-diagnostics-access.md
@@ -14,7 +14,7 @@ The cluster diagnostics feature in TiDB Dashboard diagnoses the problems that mi

> **Note:**
>
> The cluster diagnostics feature depends on Prometheus deployed in the cluster. For details about how to deploy this monitoring component, see [TiUP](/tiup/tiup-overview.md) or [TiDB Ansible](/online-deployment-using-ansible.md) deployment document. If no monitoring component is deployed in the cluster, the generated diagnostic report will indicate a failure.
> The cluster diagnostics feature depends on Prometheus deployed in the cluster. For details about how to deploy this monitoring component, see the [TiUP](/tiup/tiup-overview.md) deployment document. If no monitoring component is deployed in the cluster, the generated diagnostic report will indicate a failure.

## Access the page

2 changes: 1 addition & 1 deletion dashboard/dashboard-faq.md
@@ -30,7 +30,7 @@ If you have deployed TiDB using the `tiup cluster` or `tiup playground` command,

The **QPS** and **Latency** sections on the **Overview** page require a cluster with Prometheus deployed. Otherwise, an error is shown. You can solve this problem by deploying a Prometheus instance in the cluster, as sketched below.
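With TiUP, adding a missing Prometheus instance is typically a scale-out operation. A minimal sketch, assuming a spare node at an illustrative address:

```yaml
# scale-out.yaml: add a Prometheus instance to an existing cluster
monitoring_servers:
  - host: 10.0.1.11
```

Apply it with `tiup cluster scale-out <cluster-name> scale-out.yaml`.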

If you still encounter this problem when the Prometheus instance has been deployed, the possible reason is that your deployment tool is out of date (TiUP, TiDB Operator, or TiDB Ansible), and your tool does not automatically report metrics addresses, which makes TiDB Dashboard unable to query metrics. You can upgrade you deployment tool to the latest version and try again.
If you still encounter this problem when the Prometheus instance has been deployed, the possible reason is that your deployment tool is out of date (TiUP or TiDB Operator) and does not automatically report metrics addresses, which makes TiDB Dashboard unable to query metrics. You can upgrade your deployment tool to the latest version and try again.
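For TiUP, for example, refreshing the tool usually looks like this:

```shell
tiup update --self      # upgrade the tiup binary itself
tiup update cluster     # upgrade the cluster component
```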

If your deployment tool is TiUP, take the following steps to solve this problem. For other deployment tools, refer to the corresponding documents of those tools.

2 changes: 1 addition & 1 deletion dashboard/dashboard-overview.md
@@ -65,7 +65,7 @@ The content displayed in this area is consistent with the more detailed [Slow Qu

> **Note:**
>
> This feature is available only in the cluster with slow query logs enabled. By default, slow query logs are enabled in the cluster deployed using TiUP or TiDB Ansible.
> This feature is available only in the cluster with slow query logs enabled. By default, slow query logs are enabled in the cluster deployed using TiUP.

## Instances

47 changes: 5 additions & 42 deletions faq/deploy-and-maintain-faq.md
@@ -89,7 +89,7 @@ Check the time difference between the machine time of the monitor and the time w
| Variable | Description |
| ---- | ------- |
| `cluster_name` | the name of a cluster, adjustable |
| `tidb_version` | the version of TiDB, configured by default in TiDB Ansible branches |
| `tidb_version` | the version of TiDB |
| `deployment_method` | the method of deployment, binary by default, Docker optional |
| `process_supervision` | the supervision way of processes, systemd by default, supervise optional |
| `timezone` | the timezone of the managed node, adjustable, `Asia/Shanghai` by default, used with the `set_timezone` variable |
@@ -104,25 +104,13 @@
| `enable_slow_query_log` | to record the slow query log of TiDB into a single file: ({{ deploy_dir }}/log/tidb_slow_query.log). False by default, to record it into the TiDB log |
| `deploy_without_tidb` | the Key-Value mode, deploy only PD, TiKV and the monitoring service, not TiDB; set the IP of the tidb_servers host group to null in the `inventory.ini` file |

### Deploy TiDB offline using TiDB Ansible (not recommended since TiDB v4.0)

> **Warning:**
>
> It is not recommended to deploy TiDB using TiDB Ansible since TiDB v4.0. [Use TiUP to deploy TiDB](/production-deployment-using-tiup.md) instead.

If the central control machine cannot access the Internet, you can [deploy TiDB offline using TiDB Ansible](https://docs.pingcap.com/tidb/stable/offline-deployment-using-ansible).

### How to deploy TiDB quickly using Docker Compose on a single machine?

You can use Docker Compose to build a TiDB cluster locally, including the cluster monitoring components. You can customize the version and number of instances for each component, as well as the configuration file. Use this deployment method only for testing and development environments. For details, see [TiDB Docker Compose Deployment](/deploy-test-cluster-using-docker-compose.md). The basic flow is sketched below.
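A sketch of that flow, assuming the pingcap/tidb-docker-compose repository that the guide is built around:

```shell
git clone https://github.com/pingcap/tidb-docker-compose.git
cd tidb-docker-compose
docker-compose pull          # fetch the latest images
docker-compose up -d         # start the cluster in the background
```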

### How to separately record the slow query log in TiDB? How to locate the slow query SQL statement?

1. The slow query definition for TiDB is in the `conf/tidb.yml` configuration file of `tidb-ansible`. The `slow-threshold: 300` parameter is used to configure the threshold value of the slow query (unit: millisecond).

The slow query log is recorded in `tidb.log` by default. If you want to generate a slow query log file separately, set `enable_slow_query_log` in the `inventory.ini` configuration file to `True`.

Then run `ansible-playbook rolling_update.yml --tags=tidb` to perform a rolling update on the `tidb-server` instance. After the update is finished, the `tidb-server` instance will record the slow query log in `tidb_slow_query.log`.
1. The slow query definition for TiDB is in the TiDB configuration file. The `slow-threshold: 300` parameter is used to configure the threshold value of the slow query (unit: millisecond); see the sketch after this list.

2. If a slow query occurs, you can locate the `tidb-server` instance where the slow query is and the slow query time point using Grafana and find the SQL statement information recorded in the log on the corresponding node.

@@ -156,30 +156,10 @@ The Direct mode wraps the Write request into the I/O command and sends this comm
./fio -ioengine=psync -bs=32k -fdatasync=1 -thread -rw=randrw -percentage_random=100,0 -size=10G -filename=fio_randread_write_test.txt -name='fio mixed randread and sequential write test' -iodepth=4 -runtime=60 -numjobs=4 -group_reporting --output-format=json --output=fio_randread_write_test.json
```

### Error `UNREACHABLE! "msg": "Failed to connect to the host via ssh: "` when deploying TiDB using TiDB Ansible

Two possible reasons and solutions:

- The SSH mutual trust is not configured as required. It’s recommended to follow [the steps described in the official document](/online-deployment-using-ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine) and check whether it is successfully configured using `ansible -i inventory.ini all -m shell -a 'whoami' -b`.
- If it involves the scenario where a single server is assigned multiple roles, for example, the mixed deployment of multiple components or multiple TiKV instances are deployed on a single server, this error might be caused by the SSH reuse mechanism. You can use the option of `ansible … -f 1` to avoid this error.

## Cluster management

### Daily management

#### What are the common operations of TiDB Ansible?

| Job | Playbook |
|:----------------------------------|:-----------------------------------------|
| Start the cluster | `ansible-playbook start.yml` |
| Stop the cluster | `ansible-playbook stop.yml` |
| Destroy the cluster | `ansible-playbook unsafe_cleanup.yml` (If the deployment directory is a mount point, an error will be reported, but implementation results will remain unaffected) |
| Clean data (for test) | `ansible-playbook unsafe_cleanup_data.yml` |
| Apply rolling updates | `ansible-playbook rolling_update.yml` |
| Apply rolling updates to TiKV | `ansible-playbook rolling_update.yml --tags=tikv` |
| Apply rolling updates to components except PD | `ansible-playbook rolling_update.yml --skip-tags=pd` |
| Apply rolling updates to the monitoring components | `ansible-playbook rolling_update_monitor.yml` |

#### How to log into TiDB?

You can log into TiDB like logging into MySQL. For example:
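A typical connection, assuming the default TiDB port 4000 and the root user (adjust the host and credentials to your deployment):

```shell
mysql -h 127.0.0.1 -P 4000 -u root -p
```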
@@ -206,7 +206,7 @@ By default, TiDB/PD/TiKV outputs standard error in the logs. If a log file is sp

#### How to safely stop TiDB?

If the cluster is deployed using TiDB Ansible, you can use the `ansible-playbook stop.yml` command to stop the TiDB cluster. If the cluster is not deployed using TiDB Ansible, `kill` all the services directly. The components of TiDB will do `graceful shutdown`.
Kill all the services directly with `kill`; the TiDB components will perform a graceful shutdown.
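A minimal sketch for a binary deployment (the process names are assumed defaults; the default `kill` signal, SIGTERM, is what triggers the graceful shutdown, so avoid `kill -9`):

```shell
kill $(pgrep tidb-server)
kill $(pgrep tikv-server)
kill $(pgrep pd-server)
```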

#### Can `kill` be executed in TiDB?

@@ -227,11 +195,11 @@ Take `Release Version: v1.0.3-1-ga80e796` as an example of version number descri
- `-1` indicates the current version has one commit.
- `ga80e796` indicates the version `git-hash`.

#### What's the difference between various TiDB master versions? How to avoid using the wrong TiDB Ansible version?
#### What's the difference between various TiDB master versions?

The TiDB community is highly active. After the 1.0 GA release, the engineers have kept optimizing the product and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to stay informed of the latest version, see [TiDB Weekly update](https://pingcap.com/weekly/).

It is not recommended to deploy the TiDB cluster using TiDB Ansible. [Deploy TiDB using TiUP](/production-deployment-using-tiup.md) instead. TiDB has a unified management of the version number after the 1.0 GA release. You can view the version number using the following two methods:
It is recommended to [deploy TiDB using TiUP](/production-deployment-using-tiup.md). TiDB has a unified management of the version number after the 1.0 GA release. You can view the version number using the following two methods:

- `select tidb_version()`
- `tidb-server -V`
@@ -316,11 +284,6 @@ The offline node usually indicates the TiKV node. You can determine whether the
1. Manually stop the relevant services on the offline node.
2. Delete the `node_exporter` data of the corresponding node from the Prometheus configuration file.
3. Delete the data of the corresponding node from Ansible `inventory.ini`.

#### Why couldn't I connect to the PD server using `127.0.0.1` when I was using the PD Control?

If your TiDB cluster is deployed using TiDB Ansible, the PD external service port is not bound to `127.0.0.1`, so PD Control does not recognize `127.0.0.1` and you can only connect to it using the local IP address.

### TiDB server management
12 changes: 0 additions & 12 deletions faq/migration-tidb-faq.md
@@ -76,18 +76,6 @@ See [Parsing TiDB online data synchronization tool Syncer](https://pingcap.com/b

See [Syncer User Guide](/syncer-overview.md).

#### How to configure to monitor Syncer status?

Download and import [Syncer Json](https://github.com/pingcap/docs/blob/master/etc/Syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content:

```
- job_name: 'syncer_ops' // task name
static_configs:
    - targets: ['10.10.1.1:10096'] // Syncer monitoring address and port, informing Prometheus to pull the data of Syncer
```
Restart Prometheus.

#### Is there a current solution to replicating data from TiDB to other databases like HBase and Elasticsearch?

No. Currently, the data replication depends on the application itself.
2 changes: 1 addition & 1 deletion get-started-with-tispark.md
@@ -6,7 +6,7 @@ aliases: ['/docs/dev/get-started-with-tispark/','/docs/dev/how-to/get-started/ti

# TiSpark Quick Start Guide

To make it easy to [try TiSpark](/tispark-overview.md), the TiDB cluster installed using TiDB Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
To make it easy to [try TiSpark](/tispark-overview.md), the TiDB cluster installed using TiUP integrates Spark and the TiSpark jar package by default.
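Under that default layout, a quick smoke test is to start the bundled Spark shell and issue SQL through TiSpark. A sketch, with illustrative database and table names:

```scala
// Run inside spark-shell started from the Spark directory deployed with the cluster
spark.sql("show databases").show
spark.sql("select count(*) from tpch.lineitem").show
```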

## Deployment information

2 changes: 1 addition & 1 deletion grafana-overview-dashboard.md
@@ -6,7 +6,7 @@ aliases: ['/docs/dev/grafana-overview-dashboard/','/docs/dev/reference/key-monit

# Key Metrics

If you use TiDB Ansible or TiUP to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For more information, see [TiDB Monitoring Framework Overview](/tidb-monitoring-framework.md).
If you use TiUP to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For more information, see [TiDB Monitoring Framework Overview](/tidb-monitoring-framework.md).

The Grafana dashboard is divided into a series of sub dashboards which include Overview, PD, TiDB, TiKV, Node\_exporter, Disk Performance, and so on. A lot of metrics are there to help you diagnose.

2 changes: 1 addition & 1 deletion grafana-pd-dashboard.md
@@ -6,7 +6,7 @@ aliases: ['/docs/dev/grafana-pd-dashboard/','/docs/dev/reference/key-monitoring-

# Key Monitoring Metrics of PD

If you use TiUP or TiDB Ansible to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For more information, see [Overview of the Monitoring Framework](/tidb-monitoring-framework.md).
If you use TiUP to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For more information, see [Overview of the Monitoring Framework](/tidb-monitoring-framework.md).

The Grafana dashboard is divided into a series of sub dashboards which include Overview, PD, TiDB, TiKV, Node\_exporter, Disk Performance, and so on. A lot of metrics are there to help you diagnose.

2 changes: 1 addition & 1 deletion grafana-tidb-dashboard.md
@@ -6,7 +6,7 @@ aliases: ['/docs/dev/grafana-tidb-dashboard/','/docs/dev/reference/key-monitorin

# TiDB Monitoring Metrics

If you use TiDB Ansible or TiUP to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For the monitoring architecture, see [TiDB Monitoring Framework Overview](/tidb-monitoring-framework.md).
If you use TiUP to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For the monitoring architecture, see [TiDB Monitoring Framework Overview](/tidb-monitoring-framework.md).

The Grafana dashboard is divided into a series of sub dashboards which include Overview, PD, TiDB, TiKV, Node\_exporter, Disk Performance, and so on. The TiDB dashboard consists of the TiDB panel and the TiDB Summary panel. The two panels differ in the following aspects:

2 changes: 1 addition & 1 deletion grafana-tikv-dashboard.md
@@ -6,7 +6,7 @@ aliases: ['/docs/dev/grafana-tikv-dashboard/','/docs/dev/reference/key-monitorin

# Key Monitoring Metrics of TiKV

If you use TiUP or TiDB Ansible to deploy the TiDB cluster, the monitoring system (Prometheus/Grafana) is deployed at the same time. For more information, see [Overview of the Monitoring Framework](/tidb-monitoring-framework.md).
If you use TiUP to deploy the TiDB cluster, the monitoring system (Prometheus/Grafana) is deployed at the same time. For more information, see [Overview of the Monitoring Framework](/tidb-monitoring-framework.md).

The Grafana dashboard is divided into a series of sub dashboards which include Overview, PD, TiDB, TiKV, Node\_exporter, and so on. A lot of metrics are there to help you diagnose.

