fix some typos detected by Vale (pingcap#4908)
Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>
CharLotteiu and TomShawn authored Feb 25, 2021
1 parent 6be7cfd commit eaf50fc
Showing 13 changed files with 16 additions and 16 deletions.
4 changes: 2 additions & 2 deletions choose-index.md
@@ -41,7 +41,7 @@ Skyline-pruning is a heuristic filtering rule for indexes. To judge an index, th

- How many access conditions are covered by the indexed columns. An "access condition" is a `WHERE` condition that can be converted to a column range; the more access conditions an indexed column set covers, the better the index is in this dimension (see the sketch after this hunk).

- For these three dimensions, if an index named idx_a is not worse than the index named idx_b in all three dimensions and one of the dimensions is better than Idx_b, then idx_a is preferred.
+ For these three dimensions, if an index named idx_a is not worse than the index named idx_b in all three dimensions and one of the dimensions is better than idx_b, then idx_a is preferred.
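
As a hedged illustration of an access condition (the table and index here are hypothetical, not part of this commit):

```sql
-- Hypothetical schema for illustration only
CREATE TABLE t (a INT, b INT, KEY idx_a (a));

-- Both predicates on `a` are access conditions: each converts to a range on idx_a.
SELECT * FROM t WHERE a > 1 AND a < 10;

-- This predicate is not an access condition: it cannot be converted
-- to a range on any indexed column.
SELECT * FROM t WHERE b + 1 > 2;
```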

### Selection based on cost estimation

@@ -63,7 +63,7 @@ According to these factors and the cost model, the optimizer selects an index wi
2. Statistics are accurate, and reading from TiFlash is faster, but why does the optimizer choose to read from TiKV?

At present, the cost model that distinguishes TiFlash from TiKV is still rough. You can decrease the value of the `tidb_opt_seek_factor` parameter so that the optimizer prefers TiFlash (see the sketch after this list).

3. The statistics are accurate. Index A needs to retrieve rows from tables, but it actually executes faster than Index B, which does not retrieve rows from tables. Why does the optimizer choose Index B?

In this case, the cost estimation may be too large for retrieving rows from tables. You can decrease the value of the `tidb_opt_network_factor` parameter to reduce the estimated cost of retrieving rows from tables.
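
A minimal sketch of tuning the two variables mentioned in items 2 and 3 (the values are illustrative assumptions, not recommendations):

```sql
-- Lower the seek cost so that the optimizer leans toward TiFlash (item 2).
SET SESSION tidb_opt_seek_factor = 10;

-- Lower the estimated network cost of retrieving rows from tables (item 3).
SET SESSION tidb_opt_network_factor = 0.5;
```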
2 changes: 1 addition & 1 deletion command-line-flags-for-pd-configuration.md
@@ -103,5 +103,5 @@ PD is configurable using command-line flags and environment variables.
## `--metrics-addr`
- - The address of Prometheus Pushgateway, which does not push data to Promethus by default.
+ - The address of Prometheus Pushgateway, which does not push data to Prometheus by default.
- Default: ""
2 changes: 1 addition & 1 deletion dashboard/dashboard-diagnostics-report.md
@@ -31,7 +31,7 @@ In this report, some small buttons are described as follows:
* **expand**: Click **expand** to see details about this monitoring metric. For example, the detailed information of `tidb_get_token` in the image above includes the monitoring information of each TiDB instance's latency.
* **collapse**: Contrary to **expand**, the button is used to fold detailed monitoring information.

- All monitoring metrics basically correspond to those on the TiDB Grafna monitoring dashboard. After a module is found to be abnormal, you can view more monitoring information on the TiDB Grafna.
+ All monitoring metrics basically correspond to those on the TiDB Grafana monitoring dashboard. After a module is found to be abnormal, you can view more monitoring information on the TiDB Grafana.

In addition, the `TOTAL_TIME` and `TOTAL_COUNT` metrics in this report are monitoring data read from Prometheus, so calculation inaccuracy might exist in their statistics.

2 changes: 1 addition & 1 deletion download-ecosystem-tools.md
@@ -36,7 +36,7 @@ Download [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md) by using t

Download [BR](/br/backup-and-restore-tool.md) by using the download link in the following table:

- | Package name | OS | Architecure | SHA256 checksum |
+ | Package name | OS | Architecture | SHA256 checksum |
|:---|:---|:---|:---|
| `http://download.pingcap.org/tidb-toolkit-{version}-linux-amd64.tar.gz` | Linux | amd64 | `http://download.pingcap.org/tidb-toolkit-{version}-linux-amd64.sha256` |

4 changes: 2 additions & 2 deletions literal-values.md
@@ -32,9 +32,9 @@ If the `ANSI_QUOTES` SQL MODE is enabled, string literals can be quoted only wit
The string is divided into the following two types:

+ Binary string: It consists of a sequence of bytes, whose charset and collation are both `binary`, and uses **byte** as the unit when compared with each other.
- + Non-binary string: It consists of a sequence of characters and has various charsets and collations other than `binary`. When compared with each other, non-binary strings use **characters** as the unit. A charater might contian multiple bytes, depending on the charset.
+ + Non-binary string: It consists of a sequence of characters and has various charsets and collations other than `binary`. When compared with each other, non-binary strings use **characters** as the unit. A character might contain multiple bytes, depending on the charset.

- A string literal may have an optional `character set introducer` and `COLLATE clause`, to designate it as a string that uses a specific character set and collation.
+ A string literal may have an optional `character set introducer` and `COLLATE clause`, to designate it as a string that uses a specific character set and collation.

```
[_charset_name]'string' [COLLATE collation_name]
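-- For example (an illustrative instance of the grammar above; not from the original docs):
_utf8mb4'hello' COLLATE utf8mb4_general_ci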
2 changes: 1 addition & 1 deletion privilege-management.md
@@ -354,7 +354,7 @@ In this record, `Host` and `User` determine that the connection request sent by

> **Note:**
>
- > It is recommended to only update the privilege tables via the supplied syntax such as `GRANT`, `CREATE USER` and `DROP USER`. Making direct edits to the underlying privilege tables will not automatially update the privilege cache, leading to unpredictable behavior until `FLUSH PRIVILEGES` is executed.
+ > It is recommended to only update the privilege tables via the supplied syntax such as `GRANT`, `CREATE USER` and `DROP USER`. Making direct edits to the underlying privilege tables will not automatically update the privilege cache, leading to unpredictable behavior until `FLUSH PRIVILEGES` is executed.
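
As a hedged sketch of the behavior this note describes (the user and database names are hypothetical):

```sql
-- The supplied syntax keeps the privilege cache in sync automatically:
CREATE USER 'ro_user'@'%' IDENTIFIED BY 'secret';
GRANT SELECT ON db1.* TO 'ro_user'@'%';

-- A direct edit of the underlying tables does not refresh the cache...
UPDATE mysql.user SET Select_priv = 'Y' WHERE User = 'ro_user' AND Host = '%';

-- ...until the cache is reloaded manually:
FLUSH PRIVILEGES;
```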
### Connection verification

2 changes: 1 addition & 1 deletion releases/release-2.0.6.md
@@ -20,7 +20,7 @@ On August 6, 2018, TiDB 2.0.6 is released. Compared with TiDB 2.0.5, this releas
- Set the upper limit of the placeholder count in the `PREPARE` statement to 65535, compatible with MySQL [#7250](https://github.com/pingcap/tidb/pull/7250)
- Bug Fixes
- Fix the issue that the `DROP USER` statement is incompatible with MySQL behavior in some cases [#7014](https://github.com/pingcap/tidb/pull/7014)
- - Fix the issue that statements like `INSERT`/`LOAD DATA` meet OOM aftering opening `tidb_batch_insert` [#7092](https://github.com/pingcap/tidb/pull/7092)
+ - Fix the issue that statements like `INSERT`/`LOAD DATA` meet OOM after opening `tidb_batch_insert` [#7092](https://github.com/pingcap/tidb/pull/7092)
- Fix the issue that the statistics fail to automatically update when the data of a table keeps updating [#7093](https://github.com/pingcap/tidb/pull/7093)
- Fix the issue that the firewall breaks inactive gRPC connections [#7099](https://github.com/pingcap/tidb/pull/7099)
- Fix the issue that prefix index returns a wrong result in some scenarios [#7126](https://github.com/pingcap/tidb/pull/7126)
2 changes: 1 addition & 1 deletion releases/release-3.0.0-rc.3.md
@@ -105,7 +105,7 @@ On June 21, 2019, TiDB 3.0.0-rc.3 is released. The corresponding TiDB Ansible ve

+ tikv-ctl
- Add the `bad-regions` command to support checking more abnormal conditions [#4862](https://github.com/tikv/tikv/pull/4862)
- - Add a feature of forcely executing the `tombstone` command [#4862](https://github.com/tikv/tikv/pull/4862)
+ - Add a feature of forcibly executing the `tombstone` command [#4862](https://github.com/tikv/tikv/pull/4862)

+ Misc
- Add the `dist_release` compiling command [#4841](https://github.com/tikv/tikv/pull/4841)
2 changes: 1 addition & 1 deletion releases/release-4.0.5.md
@@ -104,7 +104,7 @@ TiDB version: 4.0.5

- Fix the `should ensure all columns have the same length` error that occurs because the `ErrTruncate/Overflow` error is incorrectly handled in the `builtinCastRealAsDecimalSig` function [#18967](https://github.com/pingcap/tidb/pull/18967)
- Fix the issue that the `pre_split_regions` table option does not work in the partitioned table [#18837](https://github.com/pingcap/tidb/pull/18837)
- - Fixe the issue that might cause a large transaction to be terminated prematurely [#18813](https://github.com/pingcap/tidb/pull/18813)
+ - Fix the issue that might cause a large transaction to be terminated prematurely [#18813](https://github.com/pingcap/tidb/pull/18813)
- Fix the issue that using the `collation` functions returns wrong query results [#18735](https://github.com/pingcap/tidb/pull/18735)
- Fix the bug that the `getAutoIncrementID()` function does not consider the `tidb_snapshot` session variable, which might cause the dumper tool to fail with the `table not exist` error [#18692](https://github.com/pingcap/tidb/pull/18692)
- Fix the `unknown column error` for SQL statements like `select a from t having t.a` [#18434](https://github.com/pingcap/tidb/pull/18434)
2 changes: 1 addition & 1 deletion sql-statements/sql-statement-recover-table.md
@@ -48,7 +48,7 @@ RECOVER TABLE BY JOB ddl_job_id
>
> + `RECOVER TABLE` is supported in the Binlog version 3.0.1, so you can use `RECOVER TABLE` in the following three situations:
>
- > - Binglog version is 3.0.1 or later.
+ > - Binlog version is 3.0.1 or later.
> - TiDB 3.0 is used both in the upstream cluster and the downstream cluster.
> - The GC life time of the secondary cluster must be longer than that of the primary cluster. However, as latency occurs during data replication between upstream and downstream databases, data recovery might fail in the downstream.
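
A hedged usage sketch of the `RECOVER TABLE` syntax shown in this hunk (the job ID `53` and table name `t` are hypothetical):

```sql
-- Find the DDL job ID of the DROP TABLE statement:
ADMIN SHOW DDL JOBS;

-- Recover the dropped table by that job ID:
RECOVER TABLE BY JOB 53;

-- Or recover by name, if the table was dropped only once within the GC life time:
RECOVER TABLE t;
```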
2 changes: 1 addition & 1 deletion ticdc/manage-ticdc.md
@@ -150,7 +150,7 @@ The following are descriptions of parameters and parameter values that can be co
| `127.0.0.1` | The IP address of the downstream Kafka services |
| `9092` | The port for the downstream Kafka |
| `cdc-test` | The name of the Kafka topic |
- | `kafka-version` | The version of the downstream Kafka (optional, `2.4.0` by default. Currently, the earlist supported Kafka version is `0.11.0.2` and the latest one is `2.7.0`. This value needs to be consistent with the actual version of the downstream Kafka.) |
+ | `kafka-version` | The version of the downstream Kafka (optional, `2.4.0` by default. Currently, the earliest supported Kafka version is `0.11.0.2` and the latest one is `2.7.0`. This value needs to be consistent with the actual version of the downstream Kafka.) |
| `kafka-client-id` | Specifies the Kafka client ID of the replication task (optional, `TiCDC_sarama_producer_replication ID` by default) |
| `partition-num` | The number of the downstream Kafka partitions (Optional. The value must be **no greater than** the actual number of partitions. If you do not configure this parameter, the partition number is obtained automatically.) |
| `max-message-bytes` | The maximum size of data that is sent to Kafka broker each time (optional, `64MB` by default) |
4 changes: 2 additions & 2 deletions ticdc/ticdc-open-protocol.md
@@ -147,7 +147,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve
| :---------- | :----- | :--------------------- |
| Column Name | String | The column name. |
| Column Type | Number | The column type. For details, see [Column Type Code](#column-type-code). |
- | Where Handle | Bool | Determines whether this column can be the filter condition of the `Where` clause. When this column is unique on the table, `Where Handle` is `true`. |
+ | Where Handle | Boolean | Determines whether this column can be the filter condition of the `Where` clause. When this column is unique on the table, `Where Handle` is `true`. |
| Flag (**experimental**) | Number | The bit flags of columns. For details, see [Bit flags of columns](#bit-flags-of-columns). |
| Column Value | Any | The column value. |
@@ -283,7 +283,7 @@ Currently, TiCDC does not provide the standard parsing library for TiCDC Open Pr

| Type | Code | Output Example | Description |
| :-------------------- | :--- | :------ | :-- |
- | TINYINT/BOOL | 1 | {"t":1,"v":1} | |
+ | TINYINT/BOOLEAN | 1 | {"t":1,"v":1} | |
| SMALLINT | 2 | {"t":2,"v":1} | |
| INT | 3 | {"t":3,"v":123} | |
| FLOAT | 4 | {"t":4,"v":153.123} | |
2 changes: 1 addition & 1 deletion tidb-troubleshooting-map.md
@@ -436,7 +436,7 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV**

- Solution: Use the binlogctl tool to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump. See the case [fail-to-notify-all-living-drainer](/tidb-binlog/handle-tidb-binlog-errors.md#fail-to-notify-all-living-drainer-is-returned-when-pump-is-started).

- - 6.1.9 Draienr reports the `gen update sqls failed: table xxx: row data is corruption []` error.
+ - 6.1.9 Drainer reports the `gen update sqls failed: table xxx: row data is corruption []` error.

- Trigger: The upstream performs DML operations on this table while performing `DROP COLUMN` DDL. This issue has been fixed in v3.0.6. See [case-820](https://github.com/pingcap/tidb-map/blob/master/maps/diagnose-case-study/case820.md) in Chinese.
