From eaf50fc7170210b79b17c305d327ff6105d231e2 Mon Sep 17 00:00:00 2001
From: Charlotte Liu <37295236+CharLotteiu@users.noreply.github.com>
Date: Thu, 25 Feb 2021 19:10:30 +0800
Subject: [PATCH] fix some typos detected by Vale (#4908)

Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>
---
 choose-index.md                               | 4 ++--
 command-line-flags-for-pd-configuration.md    | 2 +-
 dashboard/dashboard-diagnostics-report.md     | 2 +-
 download-ecosystem-tools.md                   | 2 +-
 literal-values.md                             | 4 ++--
 privilege-management.md                       | 2 +-
 releases/release-2.0.6.md                     | 2 +-
 releases/release-3.0.0-rc.3.md                | 2 +-
 releases/release-4.0.5.md                     | 2 +-
 sql-statements/sql-statement-recover-table.md | 2 +-
 ticdc/manage-ticdc.md                         | 2 +-
 ticdc/ticdc-open-protocol.md                  | 4 ++--
 tidb-troubleshooting-map.md                   | 2 +-
 13 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/choose-index.md b/choose-index.md
index 26280ff192b7d..b00824139bba4 100644
--- a/choose-index.md
+++ b/choose-index.md
@@ -41,7 +41,7 @@ Skyline-pruning is a heuristic filtering rule for indexes. To judge an index, th
 
 - How many access conditions are covered by the indexed columns. An “access condition” is a where condition that can be converted to a column range. And the more access conditions an indexed column set covers, the better it is in this dimension.
 
-For these three dimensions, if an index named idx_a is not worse than the index named idx_b in all three dimensions and one of the dimensions is better than Idx_b, then idx_a is preferred.
+For these three dimensions, if an index named idx_a is not worse than the index named idx_b in all three dimensions and one of the dimensions is better than idx_b, then idx_a is preferred.
 
 ### Selection based on cost estimation
 
@@ -63,7 +63,7 @@ According to these factors and the cost model, the optimizer selects an index wi
 2. Statistics are accurate, and reading from TiFlash is faster, but why does the optimizer choose to read from TiKV?
 
     At present, the cost model of distinguishing TiFlash from TiKV is still rough. You can decrease the value of `tidb_opt_seek_factor` parameter, then the optimizer prefers to choose TiFlash.
-    
+
 3. The statistics are accurate. Index A needs to retrieve rows from tables, but it actually executes faster than Index B that does not retrieve rows from tables. Why does the optimizer choose Index B?
 
     In this case, the cost estimation may be too large for retrieving rows from tables. You can decrease the value of `tidb_opt_network_factor` parameter to reduce the cost of retrieving rows from tables.
diff --git a/command-line-flags-for-pd-configuration.md b/command-line-flags-for-pd-configuration.md
index 61b6cc79d4fbe..ca0338ba09d98 100644
--- a/command-line-flags-for-pd-configuration.md
+++ b/command-line-flags-for-pd-configuration.md
@@ -103,5 +103,5 @@ PD is configurable using command-line flags and environment variables.
 
 ## `--metrics-addr`
 
-- The address of Prometheus Pushgateway, which does not push data to Promethus by default.
+- The address of Prometheus Pushgateway, which does not push data to Prometheus by default.
 - Default: ""
diff --git a/dashboard/dashboard-diagnostics-report.md b/dashboard/dashboard-diagnostics-report.md
index af9660babcc37..bd334978aa97c 100644
--- a/dashboard/dashboard-diagnostics-report.md
+++ b/dashboard/dashboard-diagnostics-report.md
@@ -31,7 +31,7 @@ In this report, some small buttons are described as follows:
 * **expand**: Click **expand** to see details about this monitoring metric. For example, the detailed information of `tidb_get_token` in the image above includes the monitoring information of each TiDB instance's latency.
 * **collapse**: Contrary to **expand**, the button is used to fold detailed monitoring information.
 
-All monitoring metrics basically correspond to those on the TiDB Grafna monitoring dashboard. After a module is found to be abnormal, you can view more monitoring information on the TiDB Grafna.
+All monitoring metrics basically correspond to those on the TiDB Grafana monitoring dashboard. After a module is found to be abnormal, you can view more monitoring information on the TiDB Grafana.
 
 In addition, the `TOTAL_TIME` and `TOTAL_COUNT` metrics in this report are monitoring data read from Prometheus, so calculation inaccuracy might exist in their statistics.
 
diff --git a/download-ecosystem-tools.md b/download-ecosystem-tools.md
index 8add650192bf2..eb686e992cfef 100644
--- a/download-ecosystem-tools.md
+++ b/download-ecosystem-tools.md
@@ -36,7 +36,7 @@ Download [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md) by using t
 
 Download [BR](/br/backup-and-restore-tool.md) by using the download link in the following table:
 
-| Package name | OS | Architecure | SHA256 checksum |
+| Package name | OS | Architecture | SHA256 checksum |
 |:---|:---|:---|:---|
 | `http://download.pingcap.org/tidb-toolkit-{version}-linux-amd64.tar.gz` | Linux | amd64 | `http://download.pingcap.org/tidb-toolkit-{version}-linux-amd64.sha256` |
 
diff --git a/literal-values.md b/literal-values.md
index 8901423750ee4..82f43e9bfd24e 100644
--- a/literal-values.md
+++ b/literal-values.md
@@ -32,9 +32,9 @@ If the `ANSI_QUOTES` SQL MODE is enabled, string literals can be quoted only wit
 The string is divided into the following two types:
 
 + Binary string: It consists of a sequence of bytes, whose charset and collation are both `binary`, and uses **byte** as the unit when compared with each other.
-+ Non-binary string: It consists of a sequence of characters and has various charsets and collations other than `binary`. When compared with each other, non-binary strings use **characters** as the unit. A charater might contian multiple bytes, depending on the charset.
++ Non-binary string: It consists of a sequence of characters and has various charsets and collations other than `binary`. When compared with each other, non-binary strings use **characters** as the unit. A character might contain multiple bytes, depending on the charset.
 
-A string literal may have an optional `character set introducer` and `COLLATE clause`, to designate it as a string that uses a specific character set and collation. 
+A string literal may have an optional `character set introducer` and `COLLATE clause`, to designate it as a string that uses a specific character set and collation.
 
 ```
 [_charset_name]'string' [COLLATE collation_name]
diff --git a/privilege-management.md b/privilege-management.md
index 815c42875d322..6c1fe113f0883 100644
--- a/privilege-management.md
+++ b/privilege-management.md
@@ -354,7 +354,7 @@ In this record, `Host` and `User` determine that the connection request sent by
 
 > **Note:**
 >
-> It is recommended to only update the privilege tables via the supplied syntax such as `GRANT`, `CREATE USER` and `DROP USER`. Making direct edits to the underlying privilege tables will not automatially update the privilege cache, leading to unpredictable behavior until `FLUSH PRIVILEGES` is executed. 
+> It is recommended to only update the privilege tables via the supplied syntax such as `GRANT`, `CREATE USER` and `DROP USER`. Making direct edits to the underlying privilege tables will not automatically update the privilege cache, leading to unpredictable behavior until `FLUSH PRIVILEGES` is executed.
 
 ### Connection verification
 
diff --git a/releases/release-2.0.6.md b/releases/release-2.0.6.md
index 174c65cf5aacb..1c389a9079f5f 100644
--- a/releases/release-2.0.6.md
+++ b/releases/release-2.0.6.md
@@ -20,7 +20,7 @@ On August 6, 2018, TiDB 2.0.6 is released. Compared with TiDB 2.0.5, this releas
     - Set the upper limit of placeholders count in the `PREPARE` statement to 65535, compatible with MySQL [#7250](https://github.com/pingcap/tidb/pull/7250)
 - Bug Fixes
     - Fix the issue that the `DROP USER` statement is incompatible with MySQL behavior in some cases [#7014](https://github.com/pingcap/tidb/pull/7014)
-    - Fix the issue that statements like `INSERT`/`LOAD DATA` meet OOM aftering opening `tidb_batch_insert` [#7092](https://github.com/pingcap/tidb/pull/7092)
+    - Fix the issue that statements like `INSERT`/`LOAD DATA` meet OOM after opening `tidb_batch_insert` [#7092](https://github.com/pingcap/tidb/pull/7092)
     - Fix the issue that the statistics fail to automatically update when the data of a table keeps updating [#7093](https://github.com/pingcap/tidb/pull/7093)
     - Fix the issue that the firewall breaks inactive gRPC connections [#7099](https://github.com/pingcap/tidb/pull/7099)
     - Fix the issue that prefix index returns a wrong result in some scenarios [#7126](https://github.com/pingcap/tidb/pull/7126)
diff --git a/releases/release-3.0.0-rc.3.md b/releases/release-3.0.0-rc.3.md
index de11cfa0f889b..fe2040a8cbf0d 100644
--- a/releases/release-3.0.0-rc.3.md
+++ b/releases/release-3.0.0-rc.3.md
@@ -105,7 +105,7 @@ On June 21, 2019, TiDB 3.0.0-rc.3 is released. The corresponding TiDB Ansible ve
 + tikv-ctl
 
     - Add the `bad-regions` command to support checking more abnormal conditions [#4862](https://github.com/tikv/tikv/pull/4862)
-    - Add a feature of forcely executing the `tombstone` command [#4862](https://github.com/tikv/tikv/pull/4862)
+    - Add a feature of forcibly executing the `tombstone` command [#4862](https://github.com/tikv/tikv/pull/4862)
 
 + Misc
     - Add the `dist_release` compiling command [#4841](https://github.com/tikv/tikv/pull/4841)
diff --git a/releases/release-4.0.5.md b/releases/release-4.0.5.md
index 1c44f4a3f93a5..1755c397b38b0 100644
--- a/releases/release-4.0.5.md
+++ b/releases/release-4.0.5.md
@@ -104,7 +104,7 @@ TiDB version: 4.0.5
 
     - Fix the `should ensure all columns have the same length` error that occurs because the `ErrTruncate/Overflow` error is incorrectly handled in the `builtinCastRealAsDecimalSig` function [#18967](https://github.com/pingcap/tidb/pull/18967)
     - Fix the issue that the `pre_split_regions` table option does not work in the partitioned table [#18837](https://github.com/pingcap/tidb/pull/18837)
-    - Fixe the issue that might cause a large transaction to be terminated prematurely [#18813](https://github.com/pingcap/tidb/pull/18813)
+    - Fix the issue that might cause a large transaction to be terminated prematurely [#18813](https://github.com/pingcap/tidb/pull/18813)
     - Fix the issue that using the `collation` functions get wrong query results [#18735](https://github.com/pingcap/tidb/pull/18735)
     - Fix the bug that the `getAutoIncrementID()` function does not consider the `tidb_snapshot` session variable, which might cause the dumper tool to fail with the `table not exist` error [#18692](https://github.com/pingcap/tidb/pull/18692)
     - Fix the `unknown column error` for SQL statement like `select a from t having t.a` [#18434](https://github.com/pingcap/tidb/pull/18434)
diff --git a/sql-statements/sql-statement-recover-table.md b/sql-statements/sql-statement-recover-table.md
index d2611b86918cf..c0f78874538fd 100644
--- a/sql-statements/sql-statement-recover-table.md
+++ b/sql-statements/sql-statement-recover-table.md
@@ -48,7 +48,7 @@ RECOVER TABLE BY JOB ddl_job_id
 >
 > + `RECOVER TABLE` is supported in the Binlog version 3.0.1, so you can use `RECOVER TABLE` in the following three situations:
 >
->     - Binglog version is 3.0.1 or later.
+>     - Binlog version is 3.0.1 or later.
 >     - TiDB 3.0 is used both in the upstream cluster and the downstream cluster.
 >     - The GC life time of the secondary cluster must be longer than that of the primary cluster. However, as latency occurs during data replication between upstream and downstream databases, data recovery might fail in the downstream.
 
diff --git a/ticdc/manage-ticdc.md b/ticdc/manage-ticdc.md
index 855eb0faecde4..2e67d13e87f80 100644
--- a/ticdc/manage-ticdc.md
+++ b/ticdc/manage-ticdc.md
@@ -150,7 +150,7 @@ The following are descriptions of parameters and parameter values that can be co
 | `127.0.0.1` | The IP address of the downstream Kafka services |
 | `9092` | The port for the downstream Kafka |
 | `cdc-test` | The name of the Kafka topic |
-| `kafka-version` | The version of the downstream Kafka (optional, `2.4.0` by default. Currently, the earlist supported Kafka version is `0.11.0.2` and the latest one is `2.7.0`. This value needs to be consistent with the actual version of the downstream Kafka.) |
+| `kafka-version` | The version of the downstream Kafka (optional, `2.4.0` by default. Currently, the earliest supported Kafka version is `0.11.0.2` and the latest one is `2.7.0`. This value needs to be consistent with the actual version of the downstream Kafka.) |
 | `kafka-client-id` | Specifies the Kafka client ID of the replication task (optional, `TiCDC_sarama_producer_replication ID` by default) |
 | `partition-num` | The number of the downstream Kafka partitions (Optional. The value must be **no greater than** the actual number of partitions. If you do not configure this parameter, the partition number is obtained automatically.) |
 | `max-message-bytes` | The maximum size of data that is sent to Kafka broker each time (optional, `64MB` by default) |
diff --git a/ticdc/ticdc-open-protocol.md b/ticdc/ticdc-open-protocol.md
index 77e644f2eb6b5..8913ebd706a9f 100644
--- a/ticdc/ticdc-open-protocol.md
+++ b/ticdc/ticdc-open-protocol.md
@@ -147,7 +147,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve
     | :---------- | :----- | :--------------------- |
     | Column Name | String | The column name. |
     | Column Type | Number | The column type. For details, see [Column Type Code](#column-type-code). |
-    | Where Handle | Bool | Determines whether this column can be the filter condition of the `Where` clause. When this column is unique on the table, `Where Handle` is `true`. |
+    | Where Handle | Boolean | Determines whether this column can be the filter condition of the `Where` clause. When this column is unique on the table, `Where Handle` is `true`. |
     | Flag (**experimental**) | Number | The bit flags of columns. For details, see [Bit flags of columns](#bit-flags-of-columns). |
     | Column Value | Any | The Column value. |
 
@@ -283,7 +283,7 @@ Currently, TiCDC does not provide the standard parsing library for TiCDC Open Pr
 
 | Type | Code | Output Example | Description |
 | :-------------------- | :--- | :------ | :-- |
-| TINYINT/BOOL | 1 | {"t":1,"v":1} | |
+| TINYINT/BOOLEAN | 1 | {"t":1,"v":1} | |
 | SMALLINT | 2 | {"t":2,"v":1} | |
 | INT | 3 | {"t":3,"v":123} | |
 | FLOAT | 4 | {"t":4,"v":153.123} | |
diff --git a/tidb-troubleshooting-map.md b/tidb-troubleshooting-map.md
index 956a4f6c3715c..bda4f55d9fe22 100644
--- a/tidb-troubleshooting-map.md
+++ b/tidb-troubleshooting-map.md
@@ -436,7 +436,7 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV**
 
     - Solution: Use the binlogctl tool to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump. See the case [fail-to-notify-all-living-drainer](/tidb-binlog/handle-tidb-binlog-errors.md#fail-to-notify-all-living-drainer-is-returned-when-pump-is-started).
 
-- 6.1.9 Draienr reports the `gen update sqls failed: table xxx: row data is corruption []` error.
+- 6.1.9 Drainer reports the `gen update sqls failed: table xxx: row data is corruption []` error.
 
     - Trigger: The upstream performs DML operations on this table while performing `DROP COLUMN` DDL. This issue has been fixed in v3.0.6. See [case-820](https://github.com/pingcap/tidb-map/blob/master/maps/diagnose-case-study/case820.md) in Chinese.
 