Commit 9a33a40

change absolute path to relative path of docs repo files (pingcap#2912)

* change absolute path to relative path of docs repo files

* address comments from coco

* Update benchmark-tidb-using-sysbench.md

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>
Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>
3 people authored Jun 17, 2020
1 parent 1d83019 commit 9a33a40
Showing 10 changed files with 24 additions and 24 deletions.
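
The mechanical shape of the change is visible in every hunk below: absolute links such as `https://pingcap.com/docs/stable/tiup/tiup-overview/#tiup-overview` become repo-relative paths such as `/tiup/tiup-overview.md#tiup-overview`. A hedged sed sketch of that rewrite follows (illustration only, GNU sed assumed; the PR also renames path segments, e.g. `reference/performance/understanding-the-query-execution-plan` becomes `query-execution-plan.md`, which no mechanical pass can infer, so every replacement still needs manual review):

```shell
# Illustration only -- not the script used in this commit.
# Rewrites anchored URLs first, then unanchored ones that close a markdown link.
find . -name '*.md' -print0 | xargs -0 sed -i -E \
  -e 's@https://pingcap\.com/docs/(stable|dev)/([a-z0-9/-]+)/#@/\2.md#@g' \
  -e 's@https://pingcap\.com/docs/(stable|dev)/([a-z0-9/-]+)/?\)@/\2.md)@g'
```
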
2 changes: 1 addition & 1 deletion benchmark/v4.0-performance-benchmarking-with-tpch.md
@@ -111,7 +111,7 @@ To avoid TiKV and TiFlash racing for disk and I/O resources, mount the two NVMe

### Test process

-1. Deploy TiDB v4.0 and v3.0 using [TiUP](https://pingcap.com/docs/stable/tiup/tiup-overview/#tiup-overview).
+1. Deploy TiDB v4.0 and v3.0 using [TiUP](/tiup/tiup-overview.md#tiup-overview).

2. Use the bench tool of TiUP to import the TPC-H data with the scale factor 10.
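
A hedged sketch of this import step, assuming the TiUP `bench` component (flag placement may differ across TiUP versions; check `tiup bench tpch --help`, and the connection parameters are placeholders):

```shell
# Load TPC-H data at scale factor 10 into the cluster under test.
tiup bench tpch --host 127.0.0.1 --port 4000 --sf 10 prepare
```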

4 changes: 2 additions & 2 deletions migrate-from-aurora-mysql-database.md
@@ -43,11 +43,11 @@ To migrate data based on GTID, set both `gtid-mode` and `enforce_gtid_consistenc
## Step 2: Deploy the DM cluster

-It is recommended to use DM-Ansible to deploy a DM cluster. See [Deploy Data Migration Using DM-Ansible](https://pingcap.com/docs/dev/how-to/deploy/data-migration-with-ansible/).
+It is recommended to use DM-Ansible to deploy a DM cluster. See [Deploy Data Migration Using DM-Ansible](https://pingcap.com/docs/tidb-data-migration/stable/deploy-a-dm-cluster-using-ansible/).

> **Note:**
>
-> - Use password encrypted with dmctl in all the DM configuration files. If the database password is empty, it is unnecessary to encrypt it. For how to use dmctl to encrypt a cleartext password, see [Encrypt the upstream MySQL user password using dmctl](https://pingcap.com/docs/dev/how-to/deploy/data-migration-with-ansible/#encrypt-the-upstream-mysql-user-password-using-dmctl).
+> - Use password encrypted with dmctl in all the DM configuration files. If the database password is empty, it is unnecessary to encrypt it. For how to use dmctl to encrypt a cleartext password, see [Encrypt the upstream MySQL user password using dmctl](https://pingcap.com/docs/tidb-data-migration/stable/deploy-a-dm-cluster-using-ansible/#encrypt-the-upstream-mysql-user-password-using-dmctl).
> - Both the upstream and downstream users must have the corresponding read and write privileges.
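
A hedged sketch of the dmctl encryption step referenced in the note above (flag spelling varies across DM versions; verify with `dmctl --help`, and the password is a placeholder):

```shell
# Encrypt a cleartext password before writing it into the DM configuration files.
# The command prints the encrypted string to stdout.
./dmctl -encrypt 'MyPassw0rd!'
```
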
## Step 3: Check the cluster information
16 changes: 8 additions & 8 deletions releases/release-2.1-ga.md
@@ -175,19 +175,19 @@ On November 30, 2018, TiDB 2.1 GA is released. See the following updates in this
- Add the [`GetAllStores` interface](https://github.com/pingcap/kvproto/blob/8e3f33ac49297d7c93b61a955531191084a2f685/proto/pdpb.proto#L32), to support distributed GC in TiDB

+ pd-ctl supports:
-- [using statistics for Region split](https://pingcap.com/docs/tools/pd-control/#operator-show--add--remove)
+- [using statistics for Region split](/pd-control.md#operator-show--add--remove)

-- [calling `jq` to format the JSON output](https://pingcap.com/docs/tools/pd-control/#jq-formatted-json-output-usage)
+- [calling `jq` to format the JSON output](/pd-control.md#jq-formatted-json-output-usage)

-- [checking the Region information of the specified store](https://pingcap.com/docs/tools/pd-control/#region-store-store-id)
+- [checking the Region information of the specified store](/pd-control.md#region-store-store-id)

-- [checking topN Region list sorted by versions](https://pingcap.com/docs/tools/pd-control/#region-topconfver-limit)
+- [checking topN Region list sorted by versions](/pd-control.md#region-topconfver-limit)

-- [checking topN Region list sorted by size](https://pingcap.com/docs/tools/pd-control/#region-topsize-limit)
+- [checking topN Region list sorted by size](/pd-control.md#region-topsize-limit)

-- [more precise TSO encoding](https://pingcap.com/docs/tools/pd-control/#tso)
+- [more precise TSO encoding](/pd-control.md#tso)

-- [pd-recover](https://pingcap.com/docs/tools/pd-recover) doesn't need to provide the `max-replica` parameter
+- [pd-recover](/pd-recover.md) doesn't need to provide the `max-replica` parameter
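
Hedged examples of the pd-ctl and pd-recover calls listed above (the PD address, store ID, cluster ID, and TSO value are placeholders):

```shell
pd-ctl -u http://127.0.0.1:2379 operator show           # scheduling operators, including statistics-based splits
pd-ctl -u http://127.0.0.1:2379 region topconfver 5     # top 5 Regions by conf version
pd-ctl -u http://127.0.0.1:2379 region topsize 5        # top 5 Regions by size
pd-ctl -u http://127.0.0.1:2379 region store 1          # Regions on store 1
pd-ctl -u http://127.0.0.1:2379 tso 395181938313123110  # decode a TSO
pd-ctl -u http://127.0.0.1:2379 region --jq='.regions[] | {id: .id}'  # jq-formatted JSON output
pd-recover -endpoints http://127.0.0.1:2379 -cluster-id 6747551640615446306 -alloc-id 10000  # no max-replica needed
```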

+ Metrics

@@ -259,7 +259,7 @@ On November 30, 2018, TiDB 2.1 GA is released. See the following updates in this

## Tools

-- Fast full import of large amounts of data: [TiDB Lightning](https://pingcap.com/docs/tools/lightning/overview-architecture/)
+- Fast full import of large amounts of data: [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md)

- Support new [TiDB Binlog](/tidb-binlog/tidb-binlog-overview.md)

2 changes: 1 addition & 1 deletion releases/release-2.1-rc.5.md
@@ -61,4 +61,4 @@ On November 12, 2018, TiDB 2.1 RC5 is released. Compared with TiDB 2.1 RC4, this

## Tools

-- Support the TiDB-Binlog cluster, which is not compatible with the older version of binlog [#8093](https://github.com/pingcap/tidb/pull/8093), [documentation](https://pingcap.com/docs/dev/reference/tidb-binlog/overview/)
+- Support the TiDB-Binlog cluster, which is not compatible with the older version of binlog [#8093](https://github.com/pingcap/tidb/pull/8093), [documentation](/tidb-binlog/tidb-binlog-overview.md)
2 changes: 1 addition & 1 deletion releases/release-2.1.18.md
@@ -49,7 +49,7 @@ TiDB Ansible version: 2.1.18
- Fix the issue that the `COM_STMT_FETCH` time record in slow query logs is inconsistent with that in MySQL [#12953](https://github.com/pingcap/tidb/pull/12953)
- Add an error code in the error message for write conflicts to quickly locate the cause [#12878](https://github.com/pingcap/tidb/pull/12878)
+ DDL
-- Disallow dropping the `AUTO INCREMENT` attribute of a column by default. Modify the value of the `tidb_allow_remove_auto_inc` variable if you do need to drop this attribute. See [TiDB Specific System Variables](https://pingcap.com/docs/dev/reference/configuration/tidb-server/tidb-specific-variables/#tidb_allow_remove_auto_inc--new-in-v218) for more details. [#12146](https://github.com/pingcap/tidb/pull/12146)
+- Disallow dropping the `AUTO INCREMENT` attribute of a column by default. Modify the value of the `tidb_allow_remove_auto_inc` variable if you do need to drop this attribute. See [TiDB Specific System Variables](/tidb-specific-system-variables.md#tidb_allow_remove_auto_inc-new-in-v2118-and-v304) for more details. [#12146](https://github.com/pingcap/tidb/pull/12146)
- Support multiple `unique`s when creating a unique index in the `Create Table` statement [#12469](https://github.com/pingcap/tidb/pull/12469)
- Fix a compatibility issue that if the foreign key constraint in `CREATE TABLE` statement has no schema, schema of the created table should be used instead of returning a `No Database selected` error [#12678](https://github.com/pingcap/tidb/pull/12678)
- Fix the issue that the `invalid list index` error is reported when executing `ADMIN CANCEL DDL JOBS` [#12681](https://github.com/pingcap/tidb/pull/12681)
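
A hedged sketch of the `tidb_allow_remove_auto_inc` workflow mentioned in the hunk above (connection parameters and table name are placeholders):

```shell
mysql -h 127.0.0.1 -P 4000 -u root test <<'EOF'
SET SESSION tidb_allow_remove_auto_inc = 1;       -- opt in; dropping the attribute is disallowed by default
ALTER TABLE t MODIFY COLUMN id BIGINT NOT NULL;   -- redefine the column without AUTO_INCREMENT
EOF
```
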
2 changes: 1 addition & 1 deletion releases/release-2.1.2.md
@@ -37,4 +37,4 @@ On December 22, 2018, TiDB 2.1.2 is released. The corresponding TiDB Ansible 2.1
- Fix the issue that `Too many open engines` occurs after the checkpoint is used to restart Lightning
+ TiDB Binlog
- Eliminate some bottlenecks of Drainer writing data to Kafka
-- Support the [Kafka version of TiDB Binlog](https://pingcap.com/docs/v2.1/reference/tidb-binlog/tidb-binlog-kafka/)
+- Support the Kafka version of TiDB Binlog
2 changes: 1 addition & 1 deletion sql-statements/sql-statement-explain.md
@@ -92,7 +92,7 @@ mysql> EXPLAIN DELETE FROM t1 WHERE c1=3;
3 rows in set (0.00 sec)
```

-If you do not specify the `FORMAT`, or specify `FORMAT = "row"`, `EXPLAIN` statement will output the results in a tabular format. See [Understand the Query Execution Plan](https://pingcap.com/docs/dev/reference/performance/understanding-the-query-execution-plan/) for more information.
+If you do not specify the `FORMAT`, or specify `FORMAT = "row"`, `EXPLAIN` statement will output the results in a tabular format. See [Understand the Query Execution Plan](/query-execution-plan.md) for more information.

In addition to the MySQL standard result format, TiDB also supports DotGraph and you need to specify `FORMAT = "dot"` as in the following example:

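The file's own `FORMAT = "dot"` example is elided from this hunk; a minimal hedged sketch of the two output formats discussed above (connection parameters are placeholders, `t1` as in the surrounding examples):

```shell
mysql -h 127.0.0.1 -P 4000 -u root test \
  -e 'EXPLAIN SELECT * FROM t1 WHERE c1 = 3;'                 # default tabular ("row") format
mysql -h 127.0.0.1 -P 4000 -u root test \
  -e 'EXPLAIN FORMAT = "dot" SELECT * FROM t1 WHERE c1 = 3;'  # DotGraph output
```
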
2 changes: 1 addition & 1 deletion tidb-binlog/upgrade-tidb-binlog.md
@@ -48,7 +48,7 @@ Second, upgrade the Drainer component:

## Upgrade TiDB Binlog from Kafka/Local version to the cluster version

-The new TiDB versions (v2.0.8-binlog, v2.1.0-rc.5 or later) are not compatible with the [Kafka version](https://pingcap.com/docs/v2.1/reference/tidb-binlog/tidb-binlog-kafka/) or [Local version](https://pingcap.com/docs-cn/v2.1/reference/tidb-binlog/tidb-binlog-local/) of TiDB Binlog. If TiDB is upgraded to one of the new versions, it is required to use the cluster version of TiDB Binlog. If the Kafka or local version of TiDB Binlog is used before upgrading, you need to upgrade your TiDB Binlog to the cluster version.
+The new TiDB versions (v2.0.8-binlog, v2.1.0-rc.5 or later) are not compatible with the Kafka version or Local version of TiDB Binlog. If TiDB is upgraded to one of the new versions, it is required to use the cluster version of TiDB Binlog. If the Kafka or local version of TiDB Binlog is used before upgrading, you need to upgrade your TiDB Binlog to the cluster version.

The corresponding relationship between TiDB Binlog versions and TiDB versions is shown in the following table:

2 changes: 1 addition & 1 deletion tidb-lightning/deploy-tidb-lightning.md
@@ -181,7 +181,7 @@ You can deploy TiDB Lightning using TiDB Ansible together with the [deployment o

Before importing data, you need to have a deployed TiDB cluster, with the cluster version 2.0.9 or above. It is highly recommended to use the latest version.

-You can find deployment instructions in [TiDB Quick Start Guide](https://pingcap.com/docs/QUICKSTART/).
+You can find deployment instructions in [TiDB Quick Start Guide](/quick-start-with-tidb.md).
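
A quick hedged check that the deployed cluster meets the 2.0.9+ requirement (connection parameters are placeholders):

```shell
mysql -h 127.0.0.1 -P 4000 -u root -e 'SELECT tidb_version();'
```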

#### Step 2: Download the TiDB Lightning installation package

14 changes: 7 additions & 7 deletions tidb-troubleshooting-map.md
@@ -164,7 +164,7 @@ Refer to [5 PD issues](#5-pd-issues).

- For v3.0 and later versions, use the `SQL Bind` feature to bind the execution plan.

-- Update the statistics. If you are roughly sure that the problem is caused by the statistics, [dump the statistics](https://pingcap.com/docs/stable/reference/performance/statistics/#export-statistics). If the cause is outdated statistics, such as the `modify count/row count` in `show stats_meta` is greater than a certain value (e.g. 0.3), or the table has an index of time column, you can try recovering by using `analyze table`. If `auto analyze` is configured, check whether the `tidb_auto_analyze_ratio` system variable is too large (e.g. > 0.3), and whether the current time is between `tidb_auto_analyze_start_time` and `tidb_auto_analyze_end_time`.
+- Update the statistics. If you are roughly sure that the problem is caused by the statistics, [dump the statistics](/statistics.md#export-statistics). If the cause is outdated statistics, such as the `modify count/row count` in `show stats_meta` is greater than a certain value (e.g. 0.3), or the table has an index of time column, you can try recovering by using `analyze table`. If `auto analyze` is configured, check whether the `tidb_auto_analyze_ratio` system variable is too large (e.g. > 0.3), and whether the current time is between `tidb_auto_analyze_start_time` and `tidb_auto_analyze_end_time`.

- For other situations, [report a bug](https://github.com/pingcap/tidb/issues/new?labels=type%2Fbug&template=bug-report.md).
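
A hedged sketch of the statistics checks described above (connection parameters and table name are placeholders):

```shell
mysql -h 127.0.0.1 -P 4000 -u root test <<'EOF'
SHOW STATS_META WHERE Table_name = 't';       -- compare modify_count against row_count
ANALYZE TABLE t;                              -- rebuild outdated statistics
SHOW VARIABLES LIKE 'tidb_auto_analyze%';     -- ratio and start/end time window
EOF
```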

@@ -435,7 +435,7 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV**

- Cause: When Pump is started, it notifies all Drainer nodes that are in the `online` state. If it fails to notify Drainer, this error log is printed.

-- Solution: Use the binlogctl tool to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump. See the case [fail-to-notify-all-living-drainer](https://pingcap.com/docs/stable/reference/tidb-binlog/troubleshoot/error-handling/#fail-to-notify-all-living-drainer-is-returned-when-pump-is-started).
+- Solution: Use the binlogctl tool to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump. See the case [fail-to-notify-all-living-drainer](/tidb-binlog/handle-tidb-binlog-errors.md#fail-to-notify-all-living-drainer-is-returned-when-pump-is-started).
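
A hedged binlogctl sketch for the check above (the PD address and node ID are placeholders; flag names follow the TiDB Binlog docs):

```shell
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd drainers         # list Drainer nodes and their states
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd update-drainer \
    -node-id drainer-127.0.0.1:8249 -state paused              # reconcile a stale state, then restart Pump
```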

- 6.1.9 Drainer reports the `gen update sqls failed: table xxx: row data is corruption []` error.

@@ -523,30 +523,30 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV**
- `AUTO_INCREMENT` columns need to be positive, and do not contain the value “0”.
- UNIQUE and PRIMARY KEYs must not have duplicate entries.

-- Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#checksum-failed-checksum-mismatched-remote-vs-local).
+- Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#checksum-failed-checksum-mismatched-remote-vs-local).

- 6.3.4 `Checkpoint for … has invalid status:(error code)`

- Cause: Checkpoint is enabled, and Lightning/Importer has previously abnormally exited. To prevent accidental data corruption, Lightning will not start until the error is addressed. The error code is an integer less than 25, with possible values as `0, 3, 6, 9, 12, 14, 15, 17, 18, 20 and 21`. The integer indicates the step where the unexpected exit occurs in the import process. The larger the integer is, the later the exit occurs.

-- Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#checkpoint-for--has-invalid-status-error-code).
+- Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#checkpoint-for--has-invalid-status-error-code).

- 6.3.5 `ResourceTemporarilyUnavailable("Too many open engines …: 8")`

- Cause: The number of concurrent engine files exceeds the limit specified by tikv-importer. This could be caused by misconfiguration. In addition, even when the configuration is correct, if tidb-lightning has exited abnormally before, an engine file might be left at a dangling open state, which could cause this error as well.
-- Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#resourcetemporarilyunavailabletoo-many-open-engines--).
+- Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#resourcetemporarilyunavailabletoo-many-open-engines--).

- 6.3.6 `cannot guess encoding for input file, please convert to UTF-8 manually`

- Cause: TiDB Lightning only supports the UTF-8 and GB-18030 encodings. This error means the file is not in any of these encodings. It is also possible that the file has mixed encoding, such as containing a string in UTF-8 and another string in GB-18030, due to historical ALTER TABLE executions.

-- Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#cannot-guess-encoding-for-input-file-please-convert-to-utf-8-manually).
+- Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#cannot-guess-encoding-for-input-file-please-convert-to-utf-8-manually).

- 6.3.7 `[sql2kv] sql encode error = [types:1292]invalid time format: '{1970 1 1 0 45 0 0}'`

- Cause: A timestamp type entry has a time value that does not exist. This is either because of DST changes or because the time value has exceeded the supported range (from Jan 1st 1970 to Jan 19th 2038).

-- Solution: See [Troubleshooting Solution](https://pingcap.com/docs/stable/how-to/troubleshoot/tidb-lightning/#sql2kv-sql-encode-error--types1292invalid-time-format-1970-1-1-).
+- Solution: See [Troubleshooting Solution](/troubleshoot-tidb-lightning.md#sql2kv-sql-encode-error--types1292invalid-time-format-1970-1-1-).

## 7. Common log analysis

