tools, FAQ, op-guide: update compact, drainer, questions and upgrade info (#537)

* tools: add drainer output

via: pingcap/docs-cn#795

* tools: update compact command

via: pingcap/docs-cn#771

* FAQ: update questions

Transpose one question according to the Chinese version.
via: pingcap/docs-cn#792

* op-guide: add upgrade info

via: pingcap/docs-cn#799

* tools: address comment

via: #530

* tools, op-guide: address comment

via: #537
yikeke authored and lilin90 committed Jul 12, 2018
1 parent 61a1a5d commit 17b6dfa
Showing 4 changed files with 38 additions and 10 deletions.
10 changes: 5 additions & 5 deletions FAQ.md
@@ -13,7 +13,7 @@ This document lists the Most Frequently Asked Questions about TiDB.

#### What is TiDB?

- TiDB is a distributed SQL database that features in horizontal scalability, high availability and consistent distributed transactions. It also enables you to use MySQLs SQL syntax and protocol to manage and retrieve data.
+ TiDB is a distributed SQL database that features horizontal scalability, high availability and consistent distributed transactions. It also enables you to use MySQL's SQL syntax and protocol to manage and retrieve data.

#### What is TiDB's architecture?

@@ -827,6 +827,10 @@ There are [similar limits](https://cloud.google.com/spanner/docs/limits) on Google Cloud Spanner.

3. As for `delete` and `update`, you can use `limit` plus a loop to operate in batches.
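The batched-delete pattern above can be sketched as a small shell helper. This is a hedged sketch, not part of the source docs: the table name `t`, the filter column `created_at`, the batch size, and the connection parameters are all hypothetical placeholders.

```shell
# Hypothetical batched-delete sketch: table, filter, batch size, and connection
# details are illustrative, not from the source.
BATCH=10000
delete_batch_sql() {
  # Emit one bounded DELETE statement; each run removes at most $BATCH rows,
  # keeping every transaction small.
  echo "DELETE FROM t WHERE created_at < '2018-01-01' LIMIT ${BATCH};"
}
# Live usage (requires a reachable TiDB and the mysql client), shown commented out:
# while [ "$(mysql -h 127.0.0.1 -P 4000 -u root -N -e "$(delete_batch_sql) SELECT ROW_COUNT();")" -gt 0 ]; do :; done
```

The loop stops once a pass affects zero rows, which is the "limit plus circulation" idea described above.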

+ #### Does TiDB release space immediately after deleting data?
+
+ None of the `DELETE`, `TRUNCATE` and `DROP` operations release space immediately. For the `TRUNCATE` and `DROP` operations, after the TiDB GC (Garbage Collection) time (10 minutes by default), the data is deleted and the space is released. For the `DELETE` operation, the data is deleted but the space is not released by GC; the space is reused when subsequent data is written into RocksDB and `COMPACT` is executed.
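One way to see the GC window mentioned in this answer is to query the `mysql.tidb` table, where TiDB of this era stores the `tikv_gc_life_time` setting. A hedged sketch — the host, port, and user in the commented command are placeholders:

```shell
# Sketch: emit the SQL that reads the GC lifetime gating when TRUNCATE/DROP
# space is reclaimed. Connection details below are placeholders.
gc_lifetime_sql() {
  echo "SELECT variable_value FROM mysql.tidb WHERE variable_name = 'tikv_gc_life_time';"
}
# mysql -h 127.0.0.1 -P 4000 -u root -N -e "$(gc_lifetime_sql)"
```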

#### Can I execute DDL operations on the target table when loading data?

No. None of the DDL operations can be executed on the target table when you load data, otherwise the data fails to be loaded.
@@ -835,10 +839,6 @@ No. None of the DDL operations can be executed on the target table when you load data.

Yes, but `load data` does not support the `replace into` syntax.

- #### Does TiDB release space immediately after deleting data?
-
- None of the `DELETE`, `TRUNCATE` and `DROP` operations release data immediately. For the `TRUNCATE` and `DROP` operations, after the TiDB GC (Garbage Collection) time (10 minutes by default), the data is deleted and the space is released. For the `DELETE` operation, the data is deleted but the space is not released according to TiDB GC. When subsequent data is written into RocksDB and executes `COMPACT`, the space is reused.

#### Why does the query speed become slow after deleting data?

Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the Region Merge feature is in development, which is expected to solve this problem. For details, see the [deleting data section in TiDB Best Practices](https://pingcap.com/blog/2017-07-24-tidbbestpractice/#write).
16 changes: 13 additions & 3 deletions op-guide/ansible-deployment-rolling-update.md
@@ -11,7 +11,9 @@ When you perform a rolling update for a TiDB cluster, the service is shut down s
## Upgrade the component version

- To upgrade between large versions, you need to upgrade [`tidb-ansible`](https://github.com/pingcap/tidb-ansible). If you want to upgrade the version of TiDB from 1.0 to 2.0, see [TiDB 2.0 Upgrade Guide](tidb-v2-upgrade-guide.md).
+ - To upgrade between major versions, you need to upgrade [`tidb-ansible`](https://github.com/pingcap/tidb-ansible). If you want to upgrade TiDB from 1.0 to 2.0, see [TiDB 2.0 Upgrade Guide](tidb-v2-upgrade-guide.md).
+
+ - For a minor upgrade, it is also recommended to update `tidb-ansible` for the latest configuration file templates, features, and bug fixes.

### Download the binary automatically

@@ -67,8 +69,16 @@ wget http://download.pingcap.org/tidb-v2.0.3-linux-amd64-unportable.tar.gz
If the rolling update fails in the process, use `pd-ctl` to execute `scheduler show` and check whether `evict-leader-scheduler` exists. If it does exist, delete it manually. Replace `{PD_IP}` and `{STORE_ID}` with your PD IP and the `store_id` of the TiKV instance:

```
- $ /home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://{PD_IP}:2379" -d scheduler show
- $ curl -X DELETE "http://{PD_IP}:2379/pd/api/v1/schedulers/evict-leader-scheduler-{STORE_ID}"
+ $ /home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://{PD_IP}:2379"
+ » scheduler show
+ [
+   "label-scheduler",
+   "evict-leader-scheduler-{STORE_ID}",
+   "balance-region-scheduler",
+   "balance-leader-scheduler",
+   "balance-hot-region-scheduler"
+ ]
+ » scheduler remove evict-leader-scheduler-{STORE_ID}
```
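The pre-update text in this hunk removed the scheduler through the PD HTTP API with `curl`. That variant can be sketched as a helper that builds the DELETE URL — a hedged sketch; the PD address and `store_id` in the commented usage are placeholders:

```shell
# Sketch: build the PD HTTP API URL for removing the evict-leader scheduler,
# matching the curl form shown in the earlier revision of this section.
evict_scheduler_url() {
  pd_ip="$1"
  store_id="$2"
  echo "http://${pd_ip}:2379/pd/api/v1/schedulers/evict-leader-scheduler-${store_id}"
}
# curl -X DELETE "$(evict_scheduler_url 192.168.0.10 4)"
```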

- Apply a rolling update to the TiDB node (only upgrade the TiDB service)
14 changes: 14 additions & 0 deletions tools/tidb-binlog-kafka.md
@@ -93,6 +93,20 @@ cd tidb-binlog-latest-linux-amd64
dir = "/path/pb-dir"
```

- The drainer outputs to `kafka`, and you need to set the following parameters in the configuration file:

```
[syncer]
db-type = "kafka"
# When db-type is kafka, you can uncomment this to configure the downstream Kafka; otherwise, drainer uses the same Kafka addresses that it pulls binlog from.
# [syncer.to]
# kafka-addrs = "127.0.0.1:9092"
# kafka-version = "0.8.2.0"
```

The data output to Kafka is in the protobuf-defined binlog format, sorted by ts. See [driver](https://github.com/pingcap/tidb-tools/tree/master/tidb_binlog/driver) for how to access the data and sync it to the downstream.

- Deploy the Kafka and ZooKeeper cluster before deploying TiDB-Binlog. Make sure that the Kafka version is 0.9 or later.

#### Recommended Kafka cluster configuration
8 changes: 6 additions & 2 deletions tools/tikv-control.md
@@ -98,15 +98,19 @@ In this command, the key is also the escaped form of raw key.

To print the value of a key, use the `print` command.

- ### Compact data manually
+ ### Compact data of each TiKV manually

- Use the `compact` command to manually compact TiKV data. If you specify the `--from` and `--to` options, then their flags are also in the form of escaped raw key. You can use the `--db` option to specify the RocksDB that you need to compact. The optional values are `kv` and `raft`.
+ Use the `compact` command to manually compact the data of each TiKV. If you specify the `--from` and `--to` options, their flags are also in the form of escaped raw keys. You can use the `--db` option to specify the RocksDB that you need to compact; the optional values are `kv` and `raft`. The `--threads` option allows you to specify the compaction concurrency; its default value is 8. Generally, a higher concurrency compacts faster but might also affect the service, so choose an appropriate concurrency for your scenario.

```bash
$ tikv-ctl --db /path/to/tikv/db compact -d kv
success!
```
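Combining the options described above (`--db`, `-d`, and `--threads`) — a hedged sketch that only assembles the command line; the DB path, column family choice, and thread count passed in the commented usage are placeholders:

```shell
# Sketch: assemble a tikv-ctl manual-compaction invocation from the options
# described above; all argument values are placeholders.
compact_cmd() {
  db_path="$1"
  db_kind="$2"   # kv or raft
  threads="$3"   # compaction concurrency (default 8 per the text above)
  echo "tikv-ctl --db ${db_path} compact -d ${db_kind} --threads ${threads}"
}
# Run the emitted command against a stopped TiKV instance, for example:
# $(compact_cmd /path/to/tikv/db kv 4)
```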

### Compact data of the whole TiKV cluster manually

Use the `compact-cluster` command to manually compact data of the whole TiKV cluster. The flags of this command have the same meanings and usage as those of the `compact` command.

### Set a Region to tombstone

The `tombstone` command is usually used in circumstances where the sync-log is not enabled and some data written in the Raft state machine is lost due to power-down.
