update tools faq #4591

Merged
merged 9 commits into from
Jan 18, 2021
Apply suggestions from code review
Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>
CharLotteiu and TomShawn authored Jan 14, 2021
commit a33480afee3dc1acafb8f5bd4e7024f3c1174251
br/backup-and-restore-faq.md: 2 changes (1 addition, 1 deletion)
@@ -72,7 +72,7 @@ You can use [`filter.rules`](https://github.com/pingcap/ticdc/blob/7c3c2336f9815

## Does BR back up the `SHARD_ROW_ID_BITS` and `PRE_SPLIT_REGIONS` information of a table? Does the restored table have multiple Regions?

- Yes. BR backs up the [`SHARD_ROW_ID_BITS` and `PRE_SPLIT_REGIONS`](/sql-statements/sql-statement-split-region.md#pre_split_regions) information of a table. The data of the restored table also split into multiple Regions.
+ Yes. BR backs up the [`SHARD_ROW_ID_BITS` and `PRE_SPLIT_REGIONS`](/sql-statements/sql-statement-split-region.md#pre_split_regions) information of a table. The data of the restored table is also split into multiple Regions.

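The behavior described above can be verified directly. Below is a minimal sketch, assuming a local TiDB instance on 127.0.0.1:4000; the database name `test`, table name `t`, and the shard and split values are illustrative placeholders, not values from this PR.

```shell
# Create a table whose implicit row IDs are sharded across 2^4 slots and
# whose data is pre-split into 2^3 Regions at creation time.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "CREATE TABLE test.t (a INT, b INT) SHARD_ROW_ID_BITS = 4 PRE_SPLIT_REGIONS = 3;"

# After a BR backup and restore cycle, the same check confirms that the
# restored table still spans multiple Regions.
mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW TABLE test.t REGIONS;"
```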
## Why is the `region is unavailable` error reported for a SQL query after I use BR to restore the backup data?

faq/migration-tidb-faq.md: 2 changes (1 addition, 1 deletion)
@@ -60,7 +60,7 @@ Two solutions:

### Why does Dumpling return `The local disk space is insufficient` error when exporting a large table?

- It is because the database primary keys are not evenly distributed. When Dumpling splits the data, some data chunks become excessive. Try to allocate more disk space or [contact us](https://tidbcommunity.slack.com/archives/CH7TTLL7P) to get the nightly version of Dumpling.
+ This error occurs because the database's primary keys are not evenly distributed. When Dumpling splits the data, some data chunks become excessively large. Try to allocate more disk space or [contact us](https://tidbcommunity.slack.com/archives/CH7TTLL7P) to get the nightly version of Dumpling.

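If upgrading to a nightly Dumpling is not an option, capping the size of each output file can also keep an uneven primary-key distribution from producing one oversized chunk. A hedged sketch using Dumpling's documented `-F` and `-r` options; the host, port, and output directory are illustrative:

```shell
# Export with bounded file sizes so no single chunk can fill the disk.
dumpling -h 127.0.0.1 -P 4000 -u root \
  -o /data/dump \
  -F 256MiB \
  -r 200000
# -o: output directory (make sure it has enough free disk space)
# -F: limit each output file to roughly 256 MiB
# -r: split each table into chunks of about 200,000 rows
```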
### Does TiDB have a function like the Flashback Query in Oracle? Does it support DDL?

ticdc/troubleshoot-ticdc.md: 4 changes (2 additions, 2 deletions)
@@ -69,7 +69,7 @@ cdc cli changefeed update -c [changefeed-id] --sort-engine="unified" --sort-dir=

## What is the complete behavior of TiCDC garbage collection (GC) safepoint?

- If a replication task starts after the TiCDC service starts, the TiCDC owner updates the PD service GC safepoint with the smallest value of `checkpoint-ts` among all replication tasks. The service GC safepoint ensures that TiCDC does not delete data generated at that time and after that time. If the replication task is interrupted, the `checkpoint-ts` of this task does not change and PD's corresponding service GC safepoint is not updated either. The Time-To-Live (TTL) that TiCDC sets for a service GC safepoint is 24 hours, meaning that the GC mechanism does not delete any data if the TiCDC service can be recovered within 24 hours after it is interrupted.
+ If a replication task starts after the TiCDC service starts, the TiCDC owner updates the PD service GC safepoint with the smallest value of `checkpoint-ts` among all replication tasks. The service GC safepoint ensures that TiCDC does not delete data generated at that time and after that time. If the replication task is interrupted, the `checkpoint-ts` of this task does not change and PD's corresponding service GC safepoint is not updated either. The Time-To-Live (TTL) that TiCDC sets for a service GC safepoint is 24 hours, which means that the GC mechanism does not delete any data if the TiCDC service can be recovered within 24 hours after it is interrupted.

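The safepoint described above can be observed from PD. A sketch assuming `pd-ctl` is available and PD listens on 127.0.0.1:2379; the exact output fields may vary by version:

```shell
# List all service GC safepoints registered in PD. The entry whose
# service_id begins with "ticdc" holds the minimum checkpoint-ts described
# above, and its expiry timestamp reflects the 24-hour TTL.
pd-ctl -u http://127.0.0.1:2379 service-gc-safepoint
```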
## How do I handle the `Error 1298: Unknown or incorrect time zone: 'UTC'` error when creating the replication task or replicating data to MySQL?

@@ -277,4 +277,4 @@ For more information, refer to [Open protocol Row Changed Event format](/ticdc/t

## How much PD storage does TiCDC use?

- TiCDC uses etcd in PD to store and regularly update the metadata. As the interval time between the MVCC of etcd and PD‘s default compaction is one hour, the amount of PD storage that TiCDC uses is proportional to the amount of metadata versions generated within this hour. However, in v4.0.5, v4.0.6, and v4.0.7, TiCDC has a problem of frequently writing, so if there are 1000 tables created or scheduled in an hour, it then takes up all the etcd storage and returns error `etcdserver: mvcc: database space exceeded`. You need to clean up the etcd storage after getting this error. See [etcd maintaince space-quota](https://etcd.io/docs/v3.4.0/op-guide/maintenance/#space-quota) for details. It is recommended to upgrade to v4.0.9 and later versions.
+ TiCDC uses etcd in PD to store and regularly update the metadata. Because the default compaction interval of etcd MVCC in PD is one hour, the amount of PD storage that TiCDC uses is proportional to the number of metadata versions generated within this hour. However, in v4.0.5, v4.0.6, and v4.0.7, TiCDC has an issue of frequent writes, so if there are 1,000 tables created or scheduled in an hour, they take up all the etcd storage and the `etcdserver: mvcc: database space exceeded` error is returned. You need to clean up the etcd storage after getting this error. See [etcd maintenance space-quota](https://etcd.io/docs/v3.4.0/op-guide/maintenance/#space-quota) for details. It is recommended to upgrade your cluster to v4.0.9 or later versions.
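The linked etcd maintenance guide boils down to a compact-then-defragment sequence. A sketch assuming `etcdctl` v3 pointed at the etcd embedded in PD; the 127.0.0.1:2379 endpoint is illustrative:

```shell
# 1. Read the current revision of the PD-embedded etcd.
rev=$(etcdctl --endpoints=127.0.0.1:2379 endpoint status --write-out="json" \
  | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]+')

# 2. Compact away all superseded revisions.
etcdctl --endpoints=127.0.0.1:2379 compact "$rev"

# 3. Defragment to return the freed space to the file system.
etcdctl --endpoints=127.0.0.1:2379 defrag

# 4. Clear the "database space exceeded" alarm so writes are accepted again.
etcdctl --endpoints=127.0.0.1:2379 alarm disarm
```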