*: update the Note format (pingcap#1079)
* *: update the Note format

* Update note format

* Update the Note format
CaitinChen authored and lilin90 committed Apr 24, 2019
1 parent 421bd39 commit 0cf5ddd
Showing 154 changed files with 898 additions and 316 deletions.
4 changes: 3 additions & 1 deletion FAQ.md
@@ -1072,7 +1072,9 @@ The interval of `GC Life Time` is too short. The data that should have been read
update mysql.tidb set variable_value='30m' where variable_name='tikv_gc_life_time';
```
-> **Note:** "30m" means only cleaning up the data generated 30 minutes ago, which might consume some extra storage space.
+> **Note:**
+>
+> "30m" means only cleaning up the data generated 30 minutes ago, which might consume some extra storage space.
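Stepping outside the diff for a moment, here is a minimal sketch of the workflow this FAQ entry describes: check the current `tikv_gc_life_time`, widen it for a long read, then put it back. The connection parameters and the `10m` restore value are assumptions for illustration, not part of this commit.

```sh
# Check the current GC life time (host/port/user are assumed values).
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "SELECT variable_value FROM mysql.tidb WHERE variable_name='tikv_gc_life_time';"

# Widen it for a long-running read, as in the FAQ entry above.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "UPDATE mysql.tidb SET variable_value='30m' WHERE variable_name='tikv_gc_life_time';"

# Restore your previous value (commonly 10m) once the read finishes,
# so old versions do not keep consuming extra storage space.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "UPDATE mysql.tidb SET variable_value='10m' WHERE variable_name='tikv_gc_life_time';"
```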
### MySQL native error messages
4 changes: 3 additions & 1 deletion benchmark/dm-v1-alpha.md
@@ -12,7 +12,9 @@ This DM benchmark report describes the test purpose, environment, scenario, and

The purpose of this test is to test the performance of DM incremental replication.

-> **Note**: The results of the testing might vary based on different environmental dependencies.
+> **Note:**
+>
+> The results of the testing might vary based on different environmental dependencies.
## Test environment

8 changes: 6 additions & 2 deletions benchmark/sysbench-v4.md
@@ -92,7 +92,9 @@ For more detailed information on TiKV performance tuning, see [Tune TiKV Perform

## Test process

-> **Note:** This test was performed without load balancing tools such as HAProxy. We ran the Sysbench test on each individual TiDB node and added up the results. The load balancing tools and the parameters of different versions might also impact the performance.
+> **Note:**
+>
+> This test was performed without load balancing tools such as HAProxy. We ran the Sysbench test on each individual TiDB node and added up the results. The load balancing tools and the parameters of different versions might also impact the performance.
### Sysbench configuration

@@ -137,7 +139,9 @@ Adjust the order in which Sysbench scripts create indexes. Sysbench imports data
1. Download the TiDB-modified [oltp_common.lua](https://raw.githubusercontent.com/pingcap/tidb-bench/master/sysbench-patch/oltp_common.lua) file and overwrite the `/usr/share/sysbench/oltp_common.lua` file with it.
2. Move the [235th](https://github.com/akopytov/sysbench/blob/1.0.14/src/lua/oltp_common.lua#L235) to [240th](https://github.com/akopytov/sysbench/blob/1.0.14/src/lua/oltp_common.lua#L240) lines of `/usr/share/sysbench/oltp_common.lua` to be right behind the 198th line.

-> **Note:** This operation is optional and is only to save the time consumed by data import.
+> **Note:**
+>
+> This operation is optional and is only to save the time consumed by data import.
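As a hedged illustration of the line move in step 2 above, assuming GNU sed and an arbitrary temporary-file path; this script is not part of the original commit:

```sh
# Copy lines 235-240 (the index-creation block) aside, delete them, then
# re-insert them right after line 198. Deleting lines 235-240 does not
# shift line 198, since it sits earlier in the file.
f=/usr/share/sysbench/oltp_common.lua
sed -n '235,240p' "$f" > /tmp/index_block.lua
sed -i '235,240d' "$f"
sed -i '198r /tmp/index_block.lua' "$f"
```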
At the command line, enter the following command to start importing data. The config file is the one configured in the previous step:

4 changes: 3 additions & 1 deletion benchmark/sysbench.md
@@ -10,7 +10,9 @@ draft: true

The purpose of this test is to test the performance and horizontal scalability of TiDB in OLTP scenarios.

-> **Note**: The results of the testing might vary based on different environmental dependencies.
+> **Note:**
+>
+> The results of the testing might vary based on different environmental dependencies.
## Test version, date and place

4 changes: 3 additions & 1 deletion benchmark/tpch-v2.md
@@ -9,7 +9,9 @@ category: benchmark

This test aims to compare the performances of TiDB 2.0 and TiDB 2.1 in the OLAP scenario.

-> **Note**: Different test environments might lead to different test results.
+> **Note:**
+>
+> Different test environments might lead to different test results.
## Test environment

4 changes: 3 additions & 1 deletion benchmark/tpch.md
@@ -9,7 +9,9 @@ category: benchmark

This test aims to compare the performances of TiDB 1.0 and TiDB 2.0 in the OLAP scenario.

-> **Note**: Different test environments might lead to different test results.
+> **Note:**
+>
+> Different test environments might lead to different test results.
## Test environment

4 changes: 3 additions & 1 deletion dev-guide/deployment.md
@@ -2,7 +2,9 @@

## Overview

-Note: **The easiest way to deploy TiDB is to use TiDB Ansible, see [Ansible Deployment](../op-guide/ansible-deployment.md).**
+> **Note:**
+>
+> The easiest way to deploy TiDB is to use TiDB Ansible, see [Ansible Deployment](../op-guide/ansible-deployment.md).
Before you start, check the [supported platforms](../dev-guide/requirements.md#supported-platforms) and [prerequisites](../dev-guide/requirements.md#prerequisites) first.

@@ -10,7 +10,9 @@ DBdeployer is designed to allow multiple versions of TiDB to be deployed concurrently.

Similar to [Homebrew](/dev/how-to/get-started/local-cluster/install-from-homebrew.md), the DBdeployer installation method installs the tidb-server **without** the tikv-server or pd-server. This is useful for development environments, since you can test your application's compatibility with TiDB without needing to deploy a full TiDB platform.

-> **Note**: Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.
+> **Note:**
+>
+> Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.
<main class="tabs">
<input id="tabMacOS" type="radio" name="tabs" value="MacOSContent" checked>
@@ -10,7 +10,9 @@ TiDB on Homebrew supports a minimal installation mode of the tidb-server **witho

This installation method is supported on macOS, Linux and Windows (via [WSL](https://docs.microsoft.com/en-us/windows/wsl/install-win10)).

-> **Note**: Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.
+> **Note:**
+>
+> Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.
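For orientation, a hedged sketch of what both of these minimal installations yield: a lone tidb-server backed by goleveldb. The `--store`/`--path` flags and port 4000 reflect TiDB defaults of this era and are assumptions here, not something this commit states.

```sh
# Start a standalone tidb-server on the mocktikv (goleveldb) store, with no
# tikv-server or pd-server, then connect with any MySQL client on port 4000.
tidb-server --store=mocktikv --path=/tmp/tidb &
sleep 3   # give the server a moment to start listening
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT tidb_version();"
```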
## Installation steps

24 changes: 18 additions & 6 deletions dev/how-to/get-started/local-cluster/install-from-kubernetes.md
@@ -17,16 +17,22 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme

- Resources requirement: CPU 2+, Memory 4G+

-> **Note:** For macOS, you need to allocate 2+ CPU and 4G+ Memory to Docker. For details, see [Docker for Mac configuration](https://docs.docker.com/docker-for-mac/#advanced).
+> **Note:**
+>
+> For macOS, you need to allocate 2+ CPU and 4G+ Memory to Docker. For details, see [Docker for Mac configuration](https://docs.docker.com/docker-for-mac/#advanced).
- [Docker](https://docs.docker.com/install/): 17.03 or later

-> **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine.
+> **Note:**
+>
+> [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine.
- [Helm Client](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client): 2.9.0 or later
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): 1.10 or later

-> **Note:** The outputs of different versions of `kubectl` might be slightly different.
+> **Note:**
+>
+> The outputs of different versions of `kubectl` might be slightly different.
## Step 1: Deploy a Kubernetes cluster using DinD

@@ -36,7 +42,9 @@ $ cd tidb-operator
$ manifests/local-dind/dind-cluster-v1.12.sh up
```

-> **Note:** If the cluster fails to pull Docker images during the startup due to the firewall, you can set the environment variable `KUBE_REPO_PREFIX` to `uhub.ucloud.cn/pingcap` before running the script `dind-cluster-v1.12.sh` as follows (the Docker images used are pulled from [UCloud Docker Registry](https://docs.ucloud.cn/compute/uhub/index)):
+> **Note:**
+>
+> If the cluster fails to pull Docker images during the startup due to the firewall, you can set the environment variable `KUBE_REPO_PREFIX` to `uhub.ucloud.cn/pingcap` before running the script `dind-cluster-v1.12.sh` as follows (the Docker images used are pulled from [UCloud Docker Registry](https://docs.ucloud.cn/compute/uhub/index)):
```
$ KUBE_REPO_PREFIX=uhub.ucloud.cn/pingcap manifests/local-dind/dind-cluster-v1.12.sh up
@@ -157,7 +165,9 @@ You can scale out or scale in the TiDB cluster simply by modifying the number of
helm upgrade tidb-cluster charts/tidb-cluster --namespace=tidb
```

-> **Note:** If you need to scale in TiKV, the consumed time depends on the volume of your existing data, because the data needs to be migrated safely.
+> **Note:**
+>
+> If you need to scale in TiKV, the consumed time depends on the volume of your existing data, because the data needs to be migrated safely.
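A hedged sketch of the scale flow this hunk describes; the exact keys inside `values.yaml` are an assumption about the chart, not taken from this diff:

```sh
# Edit the replica counts in the chart's values file (key names assumed),
# apply the change with helm, and watch the new pods get scheduled.
${EDITOR:-vi} charts/tidb-cluster/values.yaml
helm upgrade tidb-cluster charts/tidb-cluster --namespace=tidb
kubectl get pods --namespace=tidb -w
```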

## Upgrade the TiDB cluster

@@ -179,7 +189,9 @@ When you are done with your test, use the following command to destroy the TiDB
$ helm delete tidb-cluster --purge
```

-> **Note:** This only deletes the running pods and other resources; the data is persisted. If you do not need the data anymore, run the following commands to clean up the data. (Be careful: this permanently deletes the data.)
+> **Note:**
+>
+> This only deletes the running pods and other resources; the data is persisted. If you do not need the data anymore, run the following commands to clean up the data. (Be careful: this permanently deletes the data.)

```sh
$ kubectl get pv -l app.kubernetes.io/namespace=tidb -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
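# Editor's hedged follow-up, not part of the original commit: one way to verify
# that the reclaim policy flipped to Delete. The label selector is the one used
# in the command above; custom-columns is a standard kubectl output format.
kubectl get pv -l app.kubernetes.io/namespace=tidb \
  -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy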
16 changes: 12 additions & 4 deletions dev/how-to/get-started/read-historical-data.md
@@ -26,7 +26,9 @@ The `tidb_snapshot` system variable is introduced to support reading history dat
- The variable accepts TSO (Timestamp Oracle) and datetime. TSO is a globally unique time service, which is obtained from PD. The acceptable datetime format is "2016-10-08 16:45:26.999". Generally, the datetime can be set using second precision, for example "2016-10-08 16:45:26".
- When the variable is set, TiDB creates a Snapshot using its value as the timestamp, just for the data structure, and there is no overhead. After that, all the `Select` operations will read data from this Snapshot.

-> **Note:** Because the timestamp in TiDB transactions is allocated by Placement Driver (PD), the version of the stored data is also marked based on the timestamp allocated by PD. When a Snapshot is created, the version number is based on the value of the `tidb_snapshot` variable. If there is a large difference between the local time of the TiDB server and the PD server, use the time of the PD server.
+> **Note:**
+>
+> Because the timestamp in TiDB transactions is allocated by Placement Driver (PD), the version of the stored data is also marked based on the timestamp allocated by PD. When a Snapshot is created, the version number is based on the value of the `tidb_snapshot` variable. If there is a large difference between the local time of the TiDB server and the PD server, use the time of the PD server.
After reading data from history versions, you can read data from the latest version by ending the current Session or using the `Set` statement to set the value of the `tidb_snapshot` variable to "" (empty string).

@@ -102,14 +104,18 @@ Pay special attention to the following two variables:

6. Set the `tidb_snapshot` variable whose scope is Session. The variable is set so that the latest version before the value can be read.

-> **Note:** In this example, the value is set to be the time before the update operation.
+> **Note:**
+>
+> In this example, the value is set to be the time before the update operation.

```sql
mysql> set @@tidb_snapshot="2016-10-08 16:45:26";
Query OK, 0 rows affected (0.00 sec)
```

-> **Note:** You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.
+> **Note:**
+>
+> You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.

**Result:** The read from the following statement is the data before the update operation, which is the history data.

@@ -144,4 +150,6 @@ Pay special attention to the following two variables:
3 rows in set (0.00 sec)
```

-> **Note:** You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.
+> **Note:**
+>
+> You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.
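Pulling this file's whole `tidb_snapshot` flow into one hedged sketch; the connection details and the table name `t` are assumptions for illustration:

```sh
# One session: pin reads to a historical timestamp, query, then return to
# the latest version by resetting tidb_snapshot to the empty string.
mysql -h 127.0.0.1 -P 4000 -u root test <<'EOF'
SET @@tidb_snapshot="2016-10-08 16:45:26";  -- @@ marks the system variable
SELECT * FROM t;                            -- reads the pre-update history data
SET @@tidb_snapshot="";                     -- back to reading the latest data
EOF
```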
12 changes: 9 additions & 3 deletions op-guide/ansible-deployment-rolling-update.md
@@ -8,7 +8,9 @@ category: operations

When you perform a rolling update for a TiDB cluster, the service is shut down serially and is started after you update the service binary and the configuration file. If the load balancing is configured in the front-end, the rolling update of TiDB does not impact the running applications. Minimum requirements: `pd*3, tidb*2, tikv*3`.

-> **Note:** If the binlog is enabled, and Pump and Drainer services are deployed in the TiDB cluster, stop the Drainer service before the rolling update. The Pump service is automatically updated in the rolling update of TiDB.
+> **Note:**
+>
+> If the binlog is enabled, and Pump and Drainer services are deployed in the TiDB cluster, stop the Drainer service before the rolling update. The Pump service is automatically updated in the rolling update of TiDB.
## Upgrade the component version

@@ -29,7 +31,9 @@ When you perform a rolling update for a TiDB cluster, the service is shut down s
tidb_version = v2.0.7
```
-> **Note:** If you use `tidb-ansible` of the master branch, you can keep `tidb_version = latest`. The installation package of the latest TiDB version is updated each day.
+> **Note:**
+>
+> If you use `tidb-ansible` of the master branch, you can keep `tidb_version = latest`. The installation package of the latest TiDB version is updated each day.
2. Delete the existing `downloads` directory `/home/tidb/tidb-ansible/downloads/`.
@@ -52,7 +56,9 @@ You can also download the binary manually. Use `wget` to download the binary and
wget http://download.pingcap.org/tidb-v2.0.7-linux-amd64.tar.gz
```
-> **Note:** Remember to replace the version number in the download link with the one you need.
+> **Note:**
+>
+> Remember to replace the version number in the download link with the one you need.
If you use `tidb-ansible` of the master branch, download the binary using the following command:
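Condensing the upgrade steps above into a hedged sketch; the playbook names are assumptions based on the usual tidb-ansible layout rather than something this hunk shows:

```sh
# Point the inventory at the new version, clear stale packages, then roll.
cd /home/tidb/tidb-ansible
sed -i 's/^tidb_version.*/tidb_version = v2.0.7/' inventory.ini
rm -rf downloads/
ansible-playbook local_prepare.yml    # re-download binaries for the new version
ansible-playbook rolling_update.yml   # restart components serially
```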
16 changes: 12 additions & 4 deletions op-guide/ansible-deployment-scale.md
@@ -90,7 +90,9 @@ For example, if you want to add two TiDB nodes (node101, node102) with the IP ad
ansible-playbook bootstrap.yml -l 172.16.10.101,172.16.10.102
```

-> **Note:** If an alias is configured in the `inventory.ini` file, for example, `node101 ansible_host=172.16.10.101`, use `-l` to specify the alias when executing `ansible-playbook`. For example, `ansible-playbook bootstrap.yml -l node101,node102`. This also applies to the following steps.
+> **Note:**
+>
+> If an alias is configured in the `inventory.ini` file, for example, `node101 ansible_host=172.16.10.101`, use `-l` to specify the alias when executing `ansible-playbook`. For example, `ansible-playbook bootstrap.yml -l node101,node102`. This also applies to the following steps.

3. Deploy the newly added node:

@@ -191,7 +193,9 @@ For example, if you want to add a PD node (node103) with the IP address `172.16.

1. Remove the `--initial-cluster="xxxx" \` configuration.

-> **Note:** You cannot add the `#` character at the beginning of the line. Otherwise, the following configuration cannot take effect.
+> **Note:**
+>
+> You cannot add the `#` character at the beginning of the line. Otherwise, the following configuration cannot take effect.

2. Add `--join="http://172.16.10.1:2379" \`. The IP address (`172.16.10.1`) can be any of the existing PD IP addresses in the cluster.
3. Manually start the PD service in the newly added PD node:
@@ -206,7 +210,9 @@
./pd-ctl -u "http://172.16.10.1:2379"
```

-> **Note:** `pd-ctl` is a command used to check the number of PD nodes.
+> **Note:**
+>
+> `pd-ctl` is a command used to check the number of PD nodes.

5. Apply a rolling update to the entire cluster:

@@ -314,7 +320,9 @@ For example, if you want to remove a TiKV node (node9) with the IP address `172.
./pd-ctl -u "http://172.16.10.1:2379" -d store 10
```

-> **Note:** It takes some time to remove the node. If the status of the node you remove becomes Tombstone, then this node is successfully removed.
+> **Note:**
+>
+> It takes some time to remove the node. If the status of the node you remove becomes Tombstone, then this node is successfully removed.

3. After the node is successfully removed, stop the services on node9:

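Recapping the TiKV removal check from this hunk as a hedged sketch; `store delete` and the `state_name` field are standard pd-ctl behavior, assumed here rather than shown in the diff:

```sh
# Ask PD to drain store 10, then poll until it reports Tombstone.
./pd-ctl -u "http://172.16.10.1:2379" -d store delete 10
while ./pd-ctl -u "http://172.16.10.1:2379" -d store 10 | grep -q '"state_name": "Offline"'; do
  sleep 60   # migrating the store's data can take a while
done
./pd-ctl -u "http://172.16.10.1:2379" -d store 10   # expect "Tombstone" here
```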