DM-worker may have high CPU usage and flood logs after starting a GTID task #5063
Closed
opened on Mar 30, 2022
What did you do?
The bug has the following prerequisites:
- DM v5.4.0 or v6.0.0, and
- relay log is used, with `enable-gtid: true` in the upstream source config, and
- an `all` mode task is started when the last upstream MySQL binlog file is large, or an `incremental` task is started from a middle position of a binlog file, or the task is auto-resumed at a middle position of a binlog file (see the config sketch after this list).
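For clarity, here is a minimal sketch of an upstream source configuration matching these prerequisites; the source-id, host, and credentials are made-up placeholders, not values from this report:

```yaml
# Hypothetical DM source config illustrating the prerequisites above.
source-id: "mysql-01"    # placeholder source name
enable-gtid: true        # GTID-based replication, as in this bug
enable-relay: true       # relay log is used
from:
  host: "127.0.0.1"      # placeholder upstream MySQL address
  port: 3306
  user: "root"
  password: "******"
```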
What did you expect to see?
works fine
What did you see instead?
- the task makes almost no progress after `start-task`, and
- DM-worker has high CPU usage (360% on my local PC, which has 12 cores), and
- it generates a flood of logs like:
[2022/03/30 11:30:46.431 +08:00] [INFO] [syncer.go:2020] ["meet heartbeat event and then flush jobs"] [task=test] [unit="binlog replication"]
[2022/03/30 11:30:46.431 +08:00] [INFO] [syncer.go:3247] ["flush all jobs"] [task=test] [unit="binlog replication"] ["global checkpoint"="{{{mysql-bin.000001 113080239} 0xc000010ff8 0} <nil>}(flushed {{{mysql-bin.000001 113080239} 0xc000011208 0} <nil>})"] ["flush job seq"=37]
[2022/03/30 11:30:46.432 +08:00] [INFO] [syncer.go:1114] ["checkpoint has no change, skip sync flush checkpoint"]
For an `all` mode task, the problematic duration is related to the size of the last upstream MySQL binlog file.
For an `incremental` mode task, the problematic duration is related to the specified starting binlog location.
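For reference, a minimal sketch of an `incremental` mode task config that starts from a middle binlog position; the task name, addresses, and meta values are hypothetical placeholders, not taken from this report:

```yaml
# Hypothetical DM task config; starting from a middle binlog position
# like this is one way to hit the problematic duration described above.
name: "test"
task-mode: "incremental"
target-database:
  host: "127.0.0.1"      # placeholder downstream TiDB address
  port: 4000
  user: "root"
  password: ""
mysql-instances:
  - source-id: "mysql-01"
    meta:
      binlog-name: "mysql-bin.000001"
      binlog-pos: 113080239    # a position in the middle of the binlog file
      binlog-gtid: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:1-100"  # used when enable-gtid is true
```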
Versions of the cluster
DM version (run `dmctl -V`, `dm-worker -V`, or `dm-master -V`):
v5.4.0, v6.0.0
Upstream MySQL/MariaDB server version:
(paste upstream MySQL/MariaDB server version here)
Downstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
(paste TiDB cluster version here)
How did you deploy DM: tiup or manually?
(leave TiUP or manually here)
Other interesting information (system version, hardware config, etc):
Current status of DM cluster (execute `query-status <task-name>` in dmctl):
(paste current status of DM cluster here)