- When you deploy Pump manually, start the services in the order Pump -> TiDB, and stop them in the order TiDB -> Pump.
Set the TiDB startup parameter `binlog-socket` to the unix socket file path specified by the corresponding Pump parameter `socket`. The final deployment architecture is as follows:
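As a minimal sketch of this wiring (the socket path and the exact option forms are illustrative; check the reference for your TiDB-Binlog version):

```
# pump.toml sketch: have Pump listen on a unix socket (path is illustrative)
socket = "unix:///tmp/pump.sock"

# TiDB side: point the binlog-socket startup parameter at the same path
./bin/tidb-server -binlog-socket /tmp/pump.sock
```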
- Drainer does not support the `rename` DDL operation on tables in the ignored schemas (schemas in the filter list).
- To start Drainer in an existing TiDB cluster, you usually need to make a full backup, get the savepoint, import the full backup into the target system, and then start Drainer to synchronize incremental data from the savepoint.
To guarantee data integrity, perform the following operations 10 minutes after Pump is started:
- Use the `generate_binlog_position` tool of the [tidb-tools](https://github.com/pingcap/tidb-tools) project to generate the Drainer savepoint file. See the [README description](https://github.com/pingcap/tidb-tools/blob/master/generate_binlog_position/README.md) for how to compile and use this tool. You can also download this tool from [generate_binlog_position](https://download.pingcap.org/generate_binlog_position-latest-linux-amd64.tar.gz) and use `sha256sum` to verify it against the [sha256](https://download.pingcap.org/generate_binlog_position-latest-linux-amd64.sha256) file.
- Make a full backup. For example, back up TiDB using `mydumper`.
- Import the full backup to the target system.
- By default, the Kafka version of Drainer stores its savepoint in the checkpoint table of the downstream `tidb_binlog` database. If no valid data exists in the checkpoint table, configure `initial-commit-ts` to make Drainer start working from the specified position:
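For example, a hypothetical `drainer.toml` fragment (the timestamp below is a placeholder; substitute the commit timestamp recorded in your savepoint file):

```toml
# Start replication from an explicit position instead of the checkpoint table.
initial-commit-ts = 407774893779255297  # placeholder value; use your savepoint's commit-ts
```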
- Deploy the Kafka and ZooKeeper clusters before deploying TiDB-Binlog. Make sure that the Kafka version is 0.9 or later.
#### Recommended Kafka cluster configuration
|Name|Number|Memory size|CPU|Hard disk|
|:---:|:---:|:---:|:---:|:---:|
|Kafka|3+|16G|8+|2+ 1TB|
|ZooKeeper|3+|8G|4+|2+ 300G|
#### Recommended Kafka parameter configuration
- `auto.create.topics.enable = true`: if no topic exists, Kafka automatically creates a topic on the broker.
- `broker.id`: a required parameter to identify the Kafka cluster. Keep the parameter value unique. For example, `broker.id = 1`.
- `fs.file-max = 1000000`: Kafka uses a large number of file descriptors and network sockets. It is recommended to increase this kernel parameter to 1000000 by editing `/etc/sysctl.conf` (note that this is an operating system setting, not a Kafka broker parameter).
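Putting the recommendations above together, a sketch of the relevant configuration fragments (file paths are typical but depend on your installation):

```
# Kafka broker configuration (server.properties)
auto.create.topics.enable=true
broker.id=1            # must be unique per broker in the cluster

# Linux kernel limit (/etc/sysctl.conf, applied with `sysctl -p`)
fs.file-max=1000000
```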
### Deploy Pump using TiDB-Ansible
```bash
./bin/drainer -config drainer.toml
```
## Download PbReader (Linux)
PbReader parses the pb files generated by Drainer and translates them into SQL statements.