# fix typos in docs #17381

Merged · 6 commits · Apr 29, 2024
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -44,7 +44,7 @@ Please check out these templates before you submit a pull request:
We use separate branches to maintain different versions of TiDB documentation.

- The [documentation under development](https://docs.pingcap.com/tidb/dev) is maintained in the `master` branch.
- - The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-<verion>` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch.
+ - The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-<version>` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch.
- The [archived documentation](https://docs-archive.pingcap.com/) is no longer maintained and does not receive any further updates.

### Use cherry-pick labels
2 changes: 1 addition & 1 deletion benchmark/benchmark-tidb-using-sysbench.md
@@ -20,7 +20,7 @@ server_configs:
log.level: "error"
```

- It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documetnation about what the SQL plan cache does and how to monitor it.
+ It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documentation about what the SQL plan cache does and how to monitor it.

> **Note:**
>
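For readers benchmarking along with this page: the variable named above is a real TiDB system variable, and a minimal sketch of checking and enabling it before a sysbench run looks like the following (the `GLOBAL` scope choice is illustrative):

```sql
-- Confirm whether the prepared plan cache is on (returns ON or OFF).
SHOW VARIABLES LIKE 'tidb_enable_prepared_plan_cache';

-- Enable it cluster-wide so that sysbench's prepared statements
-- (--db-ps-mode=auto) can hit the cache.
SET GLOBAL tidb_enable_prepared_plan_cache = ON;
```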
2 changes: 1 addition & 1 deletion best-practices-on-public-cloud.md
@@ -180,7 +180,7 @@ To reduce the number of Regions and alleviate the heartbeat overhead on the syst

## After tuning

- After the tunning, the following effects can be observed:
+ After the tuning, the following effects can be observed:

- The TSO requests per second are decreased to 64,800.
- The CPU utilization is significantly reduced from approximately 4,600% to 1,400%.
2 changes: 1 addition & 1 deletion check-before-deployment.md
@@ -269,7 +269,7 @@ To check whether the NTP service is installed and whether it synchronizes with t
Unable to talk to NTP daemon. Is it running?
```

- 3. Run the `chronyc tracking` command to check wheter the Chrony service synchronizes with the NTP server.
+ 3. Run the `chronyc tracking` command to check whether the Chrony service synchronizes with the NTP server.

> **Note:**
>
2 changes: 1 addition & 1 deletion configure-memory-usage.md
@@ -57,7 +57,7 @@ Currently, the memory limit set by `tidb_server_memory_limit` **DOES NOT** termi
>
> + During the startup process, TiDB does not guarantee that the [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) limit is enforced. If the free memory of the operating system is insufficient, TiDB might still encounter OOM. You need to ensure that the TiDB instance has enough available memory.
> + In the process of memory control, the total memory usage of TiDB might slightly exceed the limit set by `tidb_server_memory_limit`.
- > + Since v6.5.0, the configruation item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`.
+ > + Since v6.5.0, the configuration item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`.

When the memory usage of a tidb-server instance reaches a certain proportion of the total memory (the proportion is controlled by the system variable [`tidb_server_memory_limit_gc_trigger`](/system-variables.md#tidb_server_memory_limit_gc_trigger-new-in-v640)), tidb-server will try to trigger a Golang GC to relieve memory stress. To avoid frequent GCs that cause performance issues due to the instance memory fluctuating around the threshold, this GC method will trigger GC at most once every minute.

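A companion sketch for the two variables this hunk documents; the `8GB` value is an arbitrary example, not a recommendation:

```sql
-- Inspect the current instance memory limit and the GC trigger ratio.
SELECT @@GLOBAL.tidb_server_memory_limit,
       @@GLOBAL.tidb_server_memory_limit_gc_trigger;

-- Cap the tidb-server instance at an absolute value; a Golang GC is
-- attempted once usage reaches limit * gc_trigger (at most once per minute).
SET GLOBAL tidb_server_memory_limit = "8GB";
SET GLOBAL tidb_server_memory_limit_gc_trigger = 0.7;
```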
2 changes: 1 addition & 1 deletion dashboard/dashboard-session-sso.md
@@ -104,7 +104,7 @@ First, create an Okta Application Integration to integrate SSO.

![Sample Step](/media/dashboard/dashboard-session-sso-okta-1.png)

- 4. In the poped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**.
+ 4. In the popped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**.

5. Choose **Single-Page Application** in **Application Type**.

2 changes: 1 addition & 1 deletion ddl-introduction.md
@@ -77,7 +77,7 @@ absent -> delete only -> write only -> write reorg -> public
For users, the newly created index is unavailable before the `public` state.

<SimpleTab>
- <div label="Online DDL asychronous change before TiDB v6.2.0">
+ <div label="Online DDL asynchronous change before TiDB v6.2.0">

Before v6.2.0, the process of handling asynchronous schema changes in the TiDB SQL layer is as follows:

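For context, the state machine quoted in this hunk's header (`absent -> delete only -> write only -> write reorg -> public`) is observable on a live cluster; a brief sketch:

```sql
-- The SCHEMA_STATE column of each DDL job moves through
-- delete only / write only / write reorganization / public.
ADMIN SHOW DDL JOBS;
```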
2 changes: 1 addition & 1 deletion dm/dm-enable-tls.md
@@ -109,7 +109,7 @@ This section introduces how to enable encrypted data transmission between DM com

### Enable encrypted data transmission for downstream TiDB

- 1. Configure the downstream TiDB to use encrypted connections. For detailed operatons, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections).
+ 1. Configure the downstream TiDB to use encrypted connections. For detailed operations, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections).

2. Set the TiDB client certificate in the task configuration file:

2 changes: 1 addition & 1 deletion dm/dm-faq.md
@@ -365,7 +365,7 @@ To solve this issue, you are recommended to maintain DM clusters using TiUP. In

## Why DM-master cannot be connected when I use dmctl to execute commands?

- When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But afer checking the network connection using commands like `telnet <master-addr>`, no exception is found.
+ When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But after checking the network connection using commands like `telnet <master-addr>`, no exception is found.

In this case, you can check the environment variable `https_proxy` (note that it is **https**). If this variable is configured, dmctl automatically connects the host and port specified by `https_proxy`. If the host does not have a corresponding `proxy` forwarding service, the connection fails.

2 changes: 1 addition & 1 deletion dm/dm-open-api.md
@@ -1346,7 +1346,7 @@ curl -X 'GET' \
"name": "string",
"source_name": "string",
"worker_name": "string",
"stage": "runing",
"stage": "running",
"unit": "sync",
"unresolved_ddl_lock_id": "string",
"load_status": {
2 changes: 1 addition & 1 deletion dm/dm-table-routing.md
@@ -86,7 +86,7 @@ To migrate the upstream instances to the downstream `test`.`t`, you must create

Assuming in the scenario of sharded schemas and tables, you want to migrate the `test_{1,2,3...}`.`t_{1,2,3...}` tables in two upstream MySQL instances to the `test`.`t` table in the downstream TiDB instance. At the same time, you want to extract the source information of the sharded tables and write it to the downstream merged table.

- To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). In addtion, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations:
+ To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). In addition, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations:

- `extract-table`: For a sharded table matching `schema-pattern` and `table-pattern`, DM extracts the sharded table name by using `table-regexp` and writes the name suffix without the `t_` part to `target-column` of the merged table, that is, the `c_table` column.
- `extract-schema`: For a sharded schema matching `schema-pattern` and `table-pattern`, DM extracts the sharded schema name by using `schema-regexp` and writes the name suffix without the `test_` part to `target-column` of the merged table, that is, the `c_schema` column.
2 changes: 1 addition & 1 deletion dm/monitor-a-dm-cluster.md
@@ -94,7 +94,7 @@ The following metrics show only when `task-mode` is in the `incremental` or `all
| total sqls jobs | The number of newly added jobs per unit of time | N/A | N/A |
| finished sqls jobs | The number of finished jobs per unit of time | N/A | N/A |
| statement execution latency | The duration that the binlog replication unit executes the statement to the downstream (in seconds) | N/A | N/A |
- | add job duration | The duration tht the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A |
+ | add job duration | The duration that the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A |
| DML conflict detect duration | The duration that the binlog replication unit detects the conflict in DML (in seconds) | N/A | N/A |
| skipped event duration | The duration that the binlog replication unit skips a binlog event (in seconds) | N/A | N/A |
| unsynced tables | The number of tables that have not received the shard DDL statement in the current subtask | N/A | N/A |
2 changes: 1 addition & 1 deletion dm/quick-start-create-source.md
@@ -84,7 +84,7 @@ The returned results are as follows:

After creating a data source, you can use the following command to query the data source:

- - If you konw the `source-id` of the data source, you can use the `dmctl config source <source-id>` command to directly check the configuration of the data source:
+ - If you know the `source-id` of the data source, you can use the `dmctl config source <source-id>` command to directly check the configuration of the data source:

{{< copyable "shell-regular" >}}

2 changes: 1 addition & 1 deletion explain-index-merge.md
@@ -94,6 +94,6 @@ When using the intersection-type index merge to access tables, the optimizer can
>
> - If the optimizer can choose the single index scan method (other than full table scan) for a query plan, the optimizer will not automatically use index merge. For the optimizer to use index merge, you need to use the optimizer hint.
>
- > - Index Merge is not supported in [tempoaray tables](/temporary-tables.md) for now.
+ > - Index Merge is not supported in [temporary tables](/temporary-tables.md) for now.
>
> - The intersection-type index merge will not automatically be selected by the optimizer. You must specify the **table name and index name** using the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint for it to be selected.
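A hedged illustration of the bullet above, with hypothetical table and index names `t1`, `idx_a`, and `idx_b`:

```sql
-- Ask the optimizer for an intersection-type index merge across two
-- single-column indexes; both index names must be listed in the hint.
SELECT /*+ USE_INDEX_MERGE(t1, idx_a, idx_b) */ *
FROM t1
WHERE a > 10 AND b < 100;
```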
2 changes: 1 addition & 1 deletion faq/manage-cluster-faq.md
@@ -73,7 +73,7 @@ TiDB provides a few features and [tools](/ecosystem-tool-user-guide.md), with wh

The TiDB community is highly active. The engineers have been keeping optimizing features and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see [TiDB Release Timeline](/releases/release-timeline.md).

- It is recommeneded to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods:
+ It is recommended to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods:

- `select tidb_version()`
- `tidb-server -V`
2 changes: 1 addition & 1 deletion faq/migration-tidb-faq.md
@@ -93,7 +93,7 @@ To migrate all the data or migrate incrementally from DB2 or Oracle to TiDB, see

Currently, it is recommended to use OGG.

- ### Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches`
+ ### Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches`

In Sqoop, `--batch` means committing 100 `statement`s in each batch, but by default each `statement` contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.

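The 5000-statement ceiling cited above corresponds to TiDB's `performance.stmt-count-limit` configuration item (default 5000); one way to confirm the value on a running cluster, sketched here:

```sql
-- Inspect the per-transaction statement ceiling that the Sqoop batch exceeds.
SHOW CONFIG WHERE type = 'tidb' AND name = 'performance.stmt-count-limit';
```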
2 changes: 1 addition & 1 deletion faq/sql-faq.md
@@ -151,7 +151,7 @@ TiDB supports modifying the [`sql_mode`](/system-variables.md#sql_mode) system v
- Changes to [`GLOBAL`](/sql-statements/sql-statement-set-variable.md) scoped variables propagate to the rest servers of the cluster and persist across restarts. This means that you do not need to change the `sql_mode` value on each TiDB server.
- Changes to `SESSION` scoped variables only affect the current client session. After restarting a server, the changes are lost.

- ## Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches
+ ## Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches

In Sqoop, `--batch` means committing 100 statements in each batch, but by default each statement contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.

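Returning to the `sql_mode` scopes described at the top of this hunk, a minimal sketch of the two forms (the mode lists are illustrative):

```sql
-- Propagates to every TiDB server in the cluster and survives restarts.
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';

-- Affects only the current client session; lost after reconnecting.
SET SESSION sql_mode = 'ANSI_QUOTES';
```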
2 changes: 1 addition & 1 deletion functions-and-operators/precision-math.md
@@ -51,7 +51,7 @@ DECIMAL columns do not store a leading `+` character or `-` character or leading

DECIMAL columns do not permit values larger than the range implied by the column definition. For example, a `DECIMAL(3,0)` column supports a range of `-999` to `999`. A `DECIMAL(M,D)` column permits at most `M - D` digits to the left of the decimal point.

- For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB souce code.
+ For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB source code.

## Expression handling

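A quick worked example of the `DECIMAL(3,0)` range described in this hunk, using a hypothetical throwaway table:

```sql
CREATE TABLE dec_demo (v DECIMAL(3,0));

INSERT INTO dec_demo VALUES (999), (-999);  -- within range, succeeds
INSERT INTO dec_demo VALUES (1000);         -- out of range; rejected in strict SQL mode
```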
4 changes: 2 additions & 2 deletions functions-and-operators/string-functions.md
@@ -218,10 +218,10 @@ SELECT CHAR_LENGTH("TiDB") AS LengthOfString;
```

```sql
- SELECT CustomerName, CHAR_LENGTH(CustomerName) AS LenghtOfName FROM Customers;
+ SELECT CustomerName, CHAR_LENGTH(CustomerName) AS LengthOfName FROM Customers;

+--------------------+--------------+
- | CustomerName | LenghtOfName |
+ | CustomerName | LengthOfName |
+--------------------+--------------+
| Albert Einstein | 15 |
| Robert Oppenheimer | 18 |
2 changes: 1 addition & 1 deletion grafana-pd-dashboard.md
@@ -78,7 +78,7 @@ The following is the description of PD Dashboard metrics items:
- Store Write rate keys: The total written keys on each TiKV instance
- Hot cache write entry number: The number of peers on each TiKV instance that are in the write hotspot statistics module
- Selector events: The event count of Selector in the hotspot scheduling module
- - Direction of hotspot move leader: The direction of leader movement in the hotspot scheduling. The positive number means scheduling into the instance. The negtive number means scheduling out of the instance
+ - Direction of hotspot move leader: The direction of leader movement in the hotspot scheduling. The positive number means scheduling into the instance. The negative number means scheduling out of the instance
- Direction of hotspot move peer: The direction of peer movement in the hotspot scheduling. The positive number means scheduling into the instance. The negative number means scheduling out of the instance

![PD Dashboard - Hot write metrics](/media/pd-dashboard-hotwrite-v4.png)
2 changes: 1 addition & 1 deletion information-schema/information-schema-deadlocks.md
@@ -12,7 +12,7 @@ USE INFORMATION_SCHEMA;
DESC deadlocks;
```

- Thhe output is as follows:
+ The output is as follows:

```sql
+-------------------------+---------------------+------+------+---------+-------+
4 changes: 2 additions & 2 deletions migrate-small-mysql-to-tidb.md
@@ -137,8 +137,8 @@ To view the historical status of the migration task and other internal metrics,

If you have deployed Prometheus, Alertmanager, and Grafana when deploying DM using TiUP, you can access Grafana using the IP address and port specified during the deployment. You can then select the DM dashboard to view DM-related monitoring metrics.

- - The log directory of DM-master: specified by the DM-master process parameter `--log-file`. If you have deployd DM using TiUP, the log directory is `/dm-deploy/dm-master-8261/log/` by default.
- - The log directory of DM-worker: specified by the DM-worker process parameter `--log-file`. If you have deployd DM using TiUP, the log directory is `/dm-deploy/dm-worker-8262/log/` by default.
+ - The log directory of DM-master: specified by the DM-master process parameter `--log-file`. If you have deployed DM using TiUP, the log directory is `/dm-deploy/dm-master-8261/log/` by default.
+ - The log directory of DM-worker: specified by the DM-worker process parameter `--log-file`. If you have deployed DM using TiUP, the log directory is `/dm-deploy/dm-worker-8262/log/` by default.

## What's next

2 changes: 1 addition & 1 deletion migrate-with-pt-ghost.md
@@ -7,7 +7,7 @@ summary: Learn how to use DM to replicate incremental data from databases that u

In production scenarios, table locking during DDL execution can block the reads from or writes to the database to a certain extent. Therefore, online DDL tools are often used to execute DDLs to minimize the impact on reads and writes. Common DDL tools are [gh-ost](https://github.com/github/gh-ost) and [pt-osc](https://www.percona.com/doc/percona-toolkit/3.0/pt-online-schema-change.html).

- When using DM to migrate data from MySQL to TiDB, you can enbale `online-ddl` to allow collaboration of DM and gh-ost or pt-osc.
+ When using DM to migrate data from MySQL to TiDB, you can enable `online-ddl` to allow collaboration of DM and gh-ost or pt-osc.

For the detailed replication instructions, refer to the following documents by scenarios:

4 changes: 2 additions & 2 deletions online-unsafe-recovery.md
@@ -38,7 +38,7 @@ Before using Online Unsafe Recovery, make sure that the following requirements a

### Step 1. Specify the stores that cannot be recovered

- To trigger automatic recovery, use PD Control to execute [`unsafe remove-failed-stores <store_id>[,<store_id>,...]`](/pd-control.md#unsafe-remove-failed-stores-store-ids--show) and specify **all** the TiKV nodes that cannot be recovered, seperated by commas.
+ To trigger automatic recovery, use PD Control to execute [`unsafe remove-failed-stores <store_id>[,<store_id>,...]`](/pd-control.md#unsafe-remove-failed-stores-store-ids--show) and specify **all** the TiKV nodes that cannot be recovered, separated by commas.

{{< copyable "shell-regular" >}}

@@ -174,7 +174,7 @@ After the recovery is completed, the data and index might be inconsistent. Use t
ADMIN CHECK TABLE table_name;
```

- If there are inconsistent indexes, you can fix the index inconsistency by renaming the old index, creating a new index, and then droping the old index.
+ If there are inconsistent indexes, you can fix the index inconsistency by renaming the old index, creating a new index, and then dropping the old index.

1. Rename the old index:

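The three-step index rebuild outlined above might look like the following sketch, with `t`, `idx`, and `col` as placeholder names:

```sql
-- 1. Rename the inconsistent index out of the way.
ALTER TABLE t RENAME INDEX idx TO idx_old;

-- 2. Recreate the index so it is rebuilt from the table data.
CREATE INDEX idx ON t (col);

-- 3. Drop the old index, then re-run the consistency check.
DROP INDEX idx_old ON t;
ADMIN CHECK TABLE t;
```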
4 changes: 2 additions & 2 deletions oracle-functions-to-tidb.md
Original file line number Diff line number Diff line change
Expand Up @@ -65,13 +65,13 @@ TiDB distinguishes between `NULL` and an empty string `''`.
Oracle supports reading and writing to the same table in an `INSERT` statement. For example:

```sql
- INSERT INTO table1 VALUES (feild1,(SELECT feild2 FROM table1 WHERE...))
+ INSERT INTO table1 VALUES (field1,(SELECT field2 FROM table1 WHERE...))
```

TiDB does not support reading and writing to the same table in a `INSERT` statement. For example:

```sql
- INSERT INTO table1 VALUES (feild1,(SELECT T.fields2 FROM table1 T WHERE...))
+ INSERT INTO table1 VALUES (field1,(SELECT T.fields2 FROM table1 T WHERE...))
```

### Get the first n rows from a query