From 9a33a40154a7fb182a9d71248eac9931fc7a9871 Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Wed, 17 Jun 2020 13:33:24 +0800
Subject: [PATCH] change absolute path to relative path of docs repo files
 (#2912)

* change absolute path to relative path of docs repo files

* address comments from coco

* Update benchmark-tidb-using-sysbench.md

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>
Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>
---
 .../v4.0-performance-benchmarking-with-tpch.md |  2 +-
 migrate-from-aurora-mysql-database.md          |  4 ++--
 releases/release-2.1-ga.md                     | 16 ++++++++--------
 releases/release-2.1-rc.5.md                   |  2 +-
 releases/release-2.1.18.md                     |  2 +-
 releases/release-2.1.2.md                      |  2 +-
 sql-statements/sql-statement-explain.md        |  2 +-
 tidb-binlog/upgrade-tidb-binlog.md             |  2 +-
 tidb-lightning/deploy-tidb-lightning.md        |  2 +-
 tidb-troubleshooting-map.md                    | 14 +++++++-------
 10 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/benchmark/v4.0-performance-benchmarking-with-tpch.md b/benchmark/v4.0-performance-benchmarking-with-tpch.md
index 064a43617f8fe..cf1860900acee 100644
--- a/benchmark/v4.0-performance-benchmarking-with-tpch.md
+++ b/benchmark/v4.0-performance-benchmarking-with-tpch.md
@@ -111,7 +111,7 @@ To avoid TiKV and TiFlash racing for disk and I/O resources, mount the two NVMe
 
 ### Test process
 
-1. Deploy TiDB v4.0 and v3.0 using [TiUP](https://pingcap.com/docs/stable/tiup/tiup-overview/#tiup-overview).
+1. Deploy TiDB v4.0 and v3.0 using [TiUP](/tiup/tiup-overview.md#tiup-overview).
 
 2. Use the bench tool of TiUP to import the TPC-H data with the scale factor 10.
 
diff --git a/migrate-from-aurora-mysql-database.md b/migrate-from-aurora-mysql-database.md
index 3ea73aff99762..67a33032f6998 100644
--- a/migrate-from-aurora-mysql-database.md
+++ b/migrate-from-aurora-mysql-database.md
@@ -43,11 +43,11 @@ To migrate data based on GTID, set both `gtid-mode` and `enforce_gtid_consistenc
 
 ## Step 2: Deploy the DM cluster
 
-It is recommended to use DM-Ansible to deploy a DM cluster. See [Deploy Data Migration Using DM-Ansible](https://pingcap.com/docs/dev/how-to/deploy/data-migration-with-ansible/).
+It is recommended to use DM-Ansible to deploy a DM cluster. See [Deploy Data Migration Using DM-Ansible](https://pingcap.com/docs/tidb-data-migration/stable/deploy-a-dm-cluster-using-ansible/).
 
 > **Note:**
 >
-> - Use password encrypted with dmctl in all the DM configuration files. If the database password is empty, it is unnecessary to encrypt it. For how to use dmctl to encrypt a cleartext password, see [Encrypt the upstream MySQL user password using dmctl](https://pingcap.com/docs/dev/how-to/deploy/data-migration-with-ansible/#encrypt-the-upstream-mysql-user-password-using-dmctl).
+> - Use passwords encrypted with dmctl in all the DM configuration files. If the database password is empty, it is unnecessary to encrypt it. For how to use dmctl to encrypt a cleartext password, see [Encrypt the upstream MySQL user password using dmctl](https://pingcap.com/docs/tidb-data-migration/stable/deploy-a-dm-cluster-using-ansible/#encrypt-the-upstream-mysql-user-password-using-dmctl).
 > - Both the upstream and downstream users must have the corresponding read and write privileges.
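For reference, the encryption step that the note above points to is a single dmctl invocation. The following is a minimal sketch assuming the `-encrypt` flag of the DM 1.0 dmctl binary; the flag name and binary path are assumptions, not taken from this patch, so verify them against `dmctl --help` for your DM version:

```bash
# Hedged sketch: encrypt a cleartext upstream password with dmctl.
# The -encrypt flag is assumed from DM 1.0 conventions; confirm it
# exists in your dmctl build before relying on it.
./dmctl -encrypt "cleartext-password"

# dmctl prints the encrypted string to stdout. Paste that string into
# the password fields of the DM configuration files in place of the
# cleartext password.
```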
 
 ## Step 3: Check the cluster information
 
diff --git a/releases/release-2.1-ga.md b/releases/release-2.1-ga.md
index a74d15bbd8e50..88c5d70cf784c 100644
--- a/releases/release-2.1-ga.md
+++ b/releases/release-2.1-ga.md
@@ -175,19 +175,19 @@ On November 30, 2018, TiDB 2.1 GA is released. See the following updates in this
     - Add the [`GetAllStores` interface](https://github.com/pingcap/kvproto/blob/8e3f33ac49297d7c93b61a955531191084a2f685/proto/pdpb.proto#L32), to support distributed GC in TiDB
 
 + pd-ctl supports:
 
-    - [using statistics for Region split](https://pingcap.com/docs/tools/pd-control/#operator-show--add--remove)
+    - [using statistics for Region split](/pd-control.md#operator-show--add--remove)
 
-    - [calling `jq` to format the JSON output](https://pingcap.com/docs/tools/pd-control/#jq-formatted-json-output-usage)
+    - [calling `jq` to format the JSON output](/pd-control.md#jq-formatted-json-output-usage)
 
-    - [checking the Region information of the specified store](https://pingcap.com/docs/tools/pd-control/#region-store-store-id)
+    - [checking the Region information of the specified store](/pd-control.md#region-store-store-id)
 
-    - [checking topN Region list sorted by versions](https://pingcap.com/docs/tools/pd-control/#region-topconfver-limit)
+    - [checking topN Region list sorted by versions](/pd-control.md#region-topconfver-limit)
 
-    - [checking topN Region list sorted by size](https://pingcap.com/docs/tools/pd-control/#region-topsize-limit)
+    - [checking topN Region list sorted by size](/pd-control.md#region-topsize-limit)
 
-    - [more precise TSO encoding](https://pingcap.com/docs/tools/pd-control/#tso)
+    - [more precise TSO encoding](/pd-control.md#tso)
 
-    - [pd-recover](https://pingcap.com/docs/tools/pd-recover) doesn't need to provide the `max-replica` parameter
+    - [pd-recover](/pd-recover.md) doesn't need to provide the `max-replica` parameter
 
 + Metrics
@@ -259,7 +259,7 @@ On November 30, 2018, TiDB 2.1 GA is released. See the following updates in this
 
 ## Tools
 
-- Fast full import of large amounts of data: [TiDB Lightning](https://pingcap.com/docs/tools/lightning/overview-architecture/)
+- Fast full import of large amounts of data: [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md)
 
 - Support new [TiDB Binlog](/tidb-binlog/tidb-binlog-overview.md)
diff --git a/releases/release-2.1-rc.5.md b/releases/release-2.1-rc.5.md
index 0df39a994f6f9..7b58dc97583b1 100644
--- a/releases/release-2.1-rc.5.md
+++ b/releases/release-2.1-rc.5.md
@@ -61,4 +61,4 @@ On November 12, 2018, TiDB 2.1 RC5 is released. Compared with TiDB 2.1 RC4, this
 
 ## Tools
 
-- Support the TiDB-Binlog cluster, which is not compatible with the older version of binlog [#8093](https://github.com/pingcap/tidb/pull/8093), [documentation](https://pingcap.com/docs/dev/reference/tidb-binlog/overview/)
+- Support the TiDB-Binlog cluster, which is not compatible with the older version of binlog [#8093](https://github.com/pingcap/tidb/pull/8093), [documentation](/tidb-binlog/tidb-binlog-overview.md)
diff --git a/releases/release-2.1.18.md b/releases/release-2.1.18.md
index 077e3790d755a..72291745b62aa 100644
--- a/releases/release-2.1.18.md
+++ b/releases/release-2.1.18.md
@@ -49,7 +49,7 @@ TiDB Ansible version: 2.1.18
     - Fix the issue that the `COM_STMT_FETCH` time record in slow query logs is inconsistent with that in MySQL [#12953](https://github.com/pingcap/tidb/pull/12953)
     - Add an error code in the error message for write conflicts to quickly locate the cause [#12878](https://github.com/pingcap/tidb/pull/12878)
 + DDL
-    - Disallow dropping the `AUTO INCREMENT` attribute of a column by default. Modify the value of the `tidb_allow_remove_auto_inc` variable if you do need to drop this attribute. See [TiDB Specific System Variables](https://pingcap.com/docs/dev/reference/configuration/tidb-server/tidb-specific-variables/#tidb_allow_remove_auto_inc--new-in-v218) for more details. [#12146](https://github.com/pingcap/tidb/pull/12146)
+    - Disallow dropping the `AUTO INCREMENT` attribute of a column by default. Modify the value of the `tidb_allow_remove_auto_inc` variable if you do need to drop this attribute. See [TiDB Specific System Variables](/tidb-specific-system-variables.md#tidb_allow_remove_auto_inc-new-in-v2118-and-v304) for more details. [#12146](https://github.com/pingcap/tidb/pull/12146)
     - Support multiple `unique`s when creating a unique index in the `Create Table` statement [#12469](https://github.com/pingcap/tidb/pull/12469)
     - Fix a compatibility issue that if the foreign key constraint in a `CREATE TABLE` statement has no schema, the schema of the created table should be used instead of returning a `No Database selected` error [#12678](https://github.com/pingcap/tidb/pull/12678)
     - Fix the issue that the `invalid list index` error is reported when executing `ADMIN CANCEL DDL JOBS` [#12681](https://github.com/pingcap/tidb/pull/12681)
diff --git a/releases/release-2.1.2.md b/releases/release-2.1.2.md
index 6583166022dd2..5c7bb3591fba8 100644
--- a/releases/release-2.1.2.md
+++ b/releases/release-2.1.2.md
@@ -37,4 +37,4 @@ On December 22, 2018, TiDB 2.1.2 is released. The corresponding TiDB Ansible 2.1
     - Fix the issue that `Too many open engines` occurs after the checkpoint is used to restart Lightning
 + TiDB Binlog
     - Eliminate some bottlenecks of Drainer writing data to Kafka
-    - Support the [Kafka version of TiDB Binlog](https://pingcap.com/docs/v2.1/reference/tidb-binlog/tidb-binlog-kafka/)
+    - Support the Kafka version of TiDB Binlog
diff --git a/sql-statements/sql-statement-explain.md b/sql-statements/sql-statement-explain.md
index 7e032679e8509..0bd32c1943486 100644
--- a/sql-statements/sql-statement-explain.md
+++ b/sql-statements/sql-statement-explain.md
@@ -92,7 +92,7 @@ mysql> EXPLAIN DELETE FROM t1 WHERE c1=3;
 3 rows in set (0.00 sec)
 ```
 
-If you do not specify the `FORMAT`, or specify `FORMAT = "row"`, `EXPLAIN` statement will output the results in a tabular format. See [Understand the Query Execution Plan](https://pingcap.com/docs/dev/reference/performance/understanding-the-query-execution-plan/) for more information.
+If you do not specify `FORMAT`, or specify `FORMAT = "row"`, the `EXPLAIN` statement outputs the results in a tabular format. See [Understand the Query Execution Plan](/query-execution-plan.md) for more information.
 
 In addition to the MySQL standard result format, TiDB also supports DotGraph and you need to specify `FORMAT = "dot"` as in the following example:
diff --git a/tidb-binlog/upgrade-tidb-binlog.md b/tidb-binlog/upgrade-tidb-binlog.md
index a43df279c30e9..ae0e33d7c1253 100644
--- a/tidb-binlog/upgrade-tidb-binlog.md
+++ b/tidb-binlog/upgrade-tidb-binlog.md
@@ -48,7 +48,7 @@ Second, upgrade the Drainer component:
 
 ## Upgrade TiDB Binlog from Kafka/Local version to the cluster version
 
-The new TiDB versions (v2.0.8-binlog, v2.1.0-rc.5 or later) are not compatible with the [Kafka version](https://pingcap.com/docs/v2.1/reference/tidb-binlog/tidb-binlog-kafka/) or [Local version](https://pingcap.com/docs-cn/v2.1/reference/tidb-binlog/tidb-binlog-local/) of TiDB Binlog. If TiDB is upgraded to one of the new versions, it is required to use the cluster version of TiDB Binlog. If the Kafka or local version of TiDB Binlog is used before upgrading, you need to upgrade your TiDB Binlog to the cluster version.
+The new TiDB versions (v2.0.8-binlog, v2.1.0-rc.5 or later) are not compatible with the Kafka or Local version of TiDB Binlog. If TiDB is upgraded to one of the new versions, you must use the cluster version of TiDB Binlog. If the Kafka or Local version of TiDB Binlog was used before the upgrade, you need to upgrade your TiDB Binlog to the cluster version.
 
 The corresponding relationship between TiDB Binlog versions and TiDB versions is shown in the following table:
diff --git a/tidb-lightning/deploy-tidb-lightning.md b/tidb-lightning/deploy-tidb-lightning.md
index 87daf61e008e5..e190c85869f07 100644
--- a/tidb-lightning/deploy-tidb-lightning.md
+++ b/tidb-lightning/deploy-tidb-lightning.md
@@ -181,7 +181,7 @@ You can deploy TiDB Lightning using TiDB Ansible together with the [deployment o
 
 Before importing data, you need to have a deployed TiDB cluster, with the cluster version 2.0.9 or above. It is highly recommended to use the latest version.
 
-You can find deployment instructions in [TiDB Quick Start Guide](https://pingcap.com/docs/QUICKSTART/).
+You can find deployment instructions in the [TiDB Quick Start Guide](/quick-start-with-tidb.md).
 
 #### Step 2: Download the TiDB Lightning installation package
diff --git a/tidb-troubleshooting-map.md b/tidb-troubleshooting-map.md
index 0c8772f4f418d..02e0494a319f0 100644
--- a/tidb-troubleshooting-map.md
+++ b/tidb-troubleshooting-map.md
@@ -164,7 +164,7 @@ Refer to [5 PD issues](#5-pd-issues).
 
     - For v3.0 and later versions, use the `SQL Bind` feature to bind the execution plan.
 
-    - Update the statistics. If you are roughly sure that the problem is caused by the statistics, [dump the statistics](https://pingcap.com/docs/stable/reference/performance/statistics/#export-statistics). If the cause is outdated statistics, such as the `modify count/row count` in `show stats_meta` is greater than a certain value (e.g. 0.3), or the table has an index of time column, you can try recovering by using `analyze table`. If `auto analyze` is configured, check whether the `tidb_auto_analyze_ratio` system variable is too large (e.g. > 0.3), and whether the current time is between `tidb_auto_analyze_start_time` and `tidb_auto_analyze_end_time`.
+    - Update the statistics. If you are fairly sure that the problem is caused by the statistics, [dump the statistics](/statistics.md#export-statistics). If the cause is outdated statistics, for example, the `modify count/row count` in `show stats_meta` is greater than a certain value (e.g. 0.3) or the table has an index on a time column, you can try to recover by running `analyze table`. If `auto analyze` is configured, check whether the `tidb_auto_analyze_ratio` system variable is too large (e.g. > 0.3) and whether the current time is between `tidb_auto_analyze_start_time` and `tidb_auto_analyze_end_time`.
 
     - For other situations, [report a bug](https://github.com/pingcap/tidb/issues/new?labels=type%2Fbug&template=bug-report.md).
@@ -435,7 +435,7 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV**
 
     - Cause: When Pump is started, it notifies all Drainer nodes that are in the `online` state. If it fails to notify Drainer, this error log is printed.
 
-    - Solution: Use the binlogctl tool to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump. See the case [fail-to-notify-all-living-drainer](https://pingcap.com/docs/stable/reference/tidb-binlog/troubleshoot/error-handling/#fail-to-notify-all-living-drainer-is-returned-when-pump-is-started).
+    - Solution: Use the binlogctl tool to check whether each Drainer node is normal. This ensures that all Drainer nodes in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump. See the case [fail-to-notify-all-living-drainer](/tidb-binlog/handle-tidb-binlog-errors.md#fail-to-notify-all-living-drainer-is-returned-when-pump-is-started).
 
 - 6.1.9 Drainer reports the `gen update sqls failed: table xxx: row data is corruption []` error.
@@ -523,30 +523,30 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV**
 
         - `AUTO_INCREMENT` columns need to be positive, and do not contain the value “0”.
         - UNIQUE and PRIMARY KEYs must not have duplicate entries.
 
-    - Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#checksum-failed-checksum-mismatched-remote-vs-local).
+    - Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#checksum-failed-checksum-mismatched-remote-vs-local).
 
 - 6.3.4 `Checkpoint for … has invalid status:(error code)`
 
     - Cause: Checkpoint is enabled, and Lightning/Importer has previously abnormally exited. To prevent accidental data corruption, Lightning will not start until the error is addressed. The error code is an integer less than 25, with possible values as `0, 3, 6, 9, 12, 14, 15, 17, 18, 20 and 21`. The integer indicates the step where the unexpected exit occurs in the import process. The larger the integer is, the later the exit occurs.
 
-    - Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#checkpoint-for--has-invalid-status-error-code).
+    - Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#checkpoint-for--has-invalid-status-error-code).
 
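A stale checkpoint like the one in 6.3.4 is usually cleared with `tidb-lightning-ctl`. The following is a minimal sketch assuming the `--config`, `--checkpoint-error-destroy`, and `--checkpoint-remove` flags; treat the exact flag names as assumptions and check `tidb-lightning-ctl --help` for your Lightning version:

```bash
# Hedged sketch: clear an aborted checkpoint so Lightning can restart.
# Flag names are assumed from tidb-lightning-ctl conventions; verify
# them against your Lightning version before running.

# Destroy the failed checkpoint for one table (and wipe its partially
# imported data) so that the next run re-imports the table from scratch:
tidb-lightning-ctl --config tidb-lightning.toml \
    --checkpoint-error-destroy='`schema`.`table`'

# Or drop all checkpoint records entirely (use with care, since this
# discards the import progress of every table):
tidb-lightning-ctl --config tidb-lightning.toml --checkpoint-remove=all
```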
 - 6.3.5 `ResourceTemporarilyUnavailable("Too many open engines …: 8")`
 
     - Cause: The number of concurrent engine files exceeds the limit specified by tikv-importer. This could be caused by misconfiguration. In addition, even when the configuration is correct, if tidb-lightning has exited abnormally before, an engine file might be left at a dangling open state, which could cause this error as well.
 
-    - Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#resourcetemporarilyunavailabletoo-many-open-engines--).
+    - Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#resourcetemporarilyunavailabletoo-many-open-engines--).
 
 - 6.3.6 `cannot guess encoding for input file, please convert to UTF-8 manually`
 
     - Cause: TiDB Lightning only supports the UTF-8 and GB-18030 encodings. This error means the file is not in any of these encodings. It is also possible that the file has mixed encoding, such as containing a string in UTF-8 and another string in GB-18030, due to historical ALTER TABLE executions.
 
-    - Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#cannot-guess-encoding-for-input-file-please-convert-to-utf-8-manually).
+    - Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#cannot-guess-encoding-for-input-file-please-convert-to-utf-8-manually).
 
 - 6.3.7 `[sql2kv] sql encode error = [types:1292]invalid time format: '{1970 1 1 0 45 0 0}'`
 
     - Cause: A timestamp type entry has a time value that does not exist. This is either because of DST changes or because the time value has exceeded the supported range (from Jan 1st 1970 to Jan 19th 2038).
 
-    - Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#sql2kv-sql-encode-error--types1292invalid-time-format-1970-1-1-).
+    - Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#sql2kv-sql-encode-error--types1292invalid-time-format-1970-1-1-).
 
 ## 7. Common log analysis