Commit 8b6f06a

tools: update whitespace (pingcap#378)
1 parent 9cddb9f commit 8b6f06a

File tree

1 file changed

+16 -15 lines changed

tools/tidb-binlog-kafka.md

Lines changed: 16 additions & 15 deletions
@@ -15,7 +15,7 @@ TiDB-Binlog supports the following scenarios:
 
 - **Data synchronization**: to synchronize TiDB cluster data to other databases
 - **Real-time backup and recovery**: to back up TiDB cluster data, and recover in case of cluster outages
-
+
 ## TiDB-Binlog architecture
 
 The TiDB-Binlog architecture is as follows:
@@ -47,7 +47,7 @@ The Kafka cluster stores the binlog data written by Pump and provides the binlog
 wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.tar.gz
 wget http://download.pingcap.org/tidb-binlog-latest-linux-amd64.sha256
 
-# Check the file integrity. If the result is OK, the file is correct.
+# Check the file integrity. If the result is OK, the file is correct.
 sha256sum -c tidb-binlog-latest-linux-amd64.sha256
 
 # Extract the package.
@@ -64,20 +64,20 @@ cd tidb-binlog-latest-linux-amd64
 - When you deploy a Pump manually, to start the service, follow the order of Pump -> TiDB; to stop the service, follow the order of TiDB -> Pump.
 
 We set the startup parameter `binlog-socket` as the specified unix socket file path of the corresponding parameter `socket` in Pump. The final deployment architecture is as follows:
-
+
 ![TiDB Pump deployment architecture](../media/tidb_pump_deployment.jpeg)
 
 - Drainer does not support renaming DDL on the table of the ignored schemas (schemas in the filter list).
 
 - To start Drainer in the existing TiDB cluster, usually you need to do a full backup, get the savepoint, import the full backup, and start Drainer and synchronize from the savepoint.
-
+
 To guarantee the integrity of data, perform the following operations 10 minutes after Pump is started:
 
 - Use the `generate_binlog_position` tool of the [tidb-tools](https://github.com/pingcap/tidb-tools) project to generate the Drainer savepoint file. Use `generate_binlog_position` to compile this tool. See the [README description](https://github.com/pingcap/tidb-tools/blob/master/generate_binlog_position/README.md) for usage. You can also download this tool from [generate_binlog_position](https://download.pingcap.org/generate_binlog_position-latest-linux-amd64.tar.gz) and use `sha256sum` to verify the [sha256](https://download.pingcap.org/generate_binlog_position-latest-linux-amd64.sha256) file.
 - Do a full backup. For example, back up TiDB using mydumper.
 - Import the full backup to the target system.
 - The savepoint file started by the Kafka version of Drainer is stored in the checkpoint table of the downstream database tidb_binlog by default. If no valid data exists in the checkpoint table, configure `initial-commit-ts` to make Drainer work from a specified position when it is started:
-
+
 ```
 bin/drainer --config=conf/drainer.toml --data-dir=${drainer_savepoint_dir}
 ```
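The hunk above starts Drainer with a savepoint directory, and the bullet list mentions `initial-commit-ts` for starting from an explicit position. A minimal sketch of how the two might be combined follows; it is not part of this commit, the timestamp is a placeholder, and the command-line flag form is an assumption (the option can also be set in `conf/drainer.toml`):

```bash
# Illustrative sketch only, not part of this commit.
# The commit-ts below is a placeholder; substitute the savepoint
# obtained from your own full backup.

# Assumed TOML form, in conf/drainer.toml:
#   initial-commit-ts = 407184907934433281

# Assumed flag form when starting the Kafka version of Drainer:
bin/drainer --config=conf/drainer.toml \
    --data-dir=${drainer_savepoint_dir} \
    --initial-commit-ts=407184907934433281
```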
@@ -95,17 +95,18 @@ cd tidb-binlog-latest-linux-amd64
 
 - Deploy Kafka and ZooKeeper cluster before deploying TiDB-Binlog. Make sure that Kafka is version 0.9 or later.
 
-#### Recommended Kafka cluster configuration
+#### Recommended Kafka cluster configuration
+
 |Name|Number|Memory size|CPU|Hard disk|
 |:---:|:---:|:---:|:---:|:---:|
 |Kafka|3+|16G|8+|2+ 1TB|
 |ZooKeeper|3+|8G|4+|2+ 300G|
-
+
 #### Recommended Kafka parameter configuration
-
+
 - `auto.create.topics.enable = true`: if no topic exists, Kafka automatically creates a topic on the broker.
 - `broker.id`: a required parameter to identify the Kafka cluster. Keep the parameter value unique. For example, `broker.id = 1`.
-- `fs.file-max = 1000000`: Kafka uses a lot of files and network sockets. It is recommended to change the parameter value to 1000000. Change the value using `vi /etc/sysctl.conf`.
+- `fs.file-max = 1000000`: Kafka uses a lot of files and network sockets. It is recommended to change the parameter value to 1000000. Change the value using `vi /etc/sysctl.conf`.
 
 ### Deploy Pump using TiDB-Ansible
 
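To make the three broker-side settings in the hunk above concrete, here is a hedged sketch; the `server.properties` path assumes a stock Kafka installation and is not taken from the commit:

```bash
# Illustrative sketch only, not part of this commit.

# Per-broker settings in $KAFKA_HOME/config/server.properties (assumed path):
#   auto.create.topics.enable=true   # let the binlog topic be created on first write
#   broker.id=1                      # must be unique across brokers: 1, 2, 3, ...

# Raise the kernel file-handle limit for Kafka's many files and sockets:
echo "fs.file-max = 1000000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p    # reload sysctl settings so the new limit takes effect
```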
@@ -339,7 +340,7 @@ This example describes how to use Pump/Drainer.
 ```bash
 ./bin/drainer -config drainer.toml
 ```
-
+
 ## Download PbReader (Linux)
 
 PbReader parses the pb file generated by Drainer and translates it into SQL statements.
@@ -350,21 +351,21 @@ CentOS 7+
 # Download PbReader package
 wget http://download.pingcap.org/pb_reader-latest-linux-amd64.tar.gz
 wget http://download.pingcap.org/pb_reader-latest-linux-amd64.sha256
-
+
 # Check the file integrity. If the result is OK, the file is correct.
 sha256sum -c pb_reader-latest-linux-amd64.sha256
 
 # Extract the package.
 tar -xzf pb_reader-latest-linux-amd64.tar.gz
 cd pb_reader-latest-linux-amd64
 ```
-
+
 The PbReader usage example
-
+
 ```bash
 ./bin/pbReader -binlog-file=binlog-0000000000000000
-```
-
+```
+
 ## Monitor TiDB-Binlog
 
 This section introduces how to monitor TiDB-Binlog's status and performance, and display the metrics using Prometheus and Grafana.
