From 252d4c3c928794569b593448b9400dff7d459f2d Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 3 Aug 2023 10:17:38 +0800 Subject: [PATCH] fix: correct whitespace usage (#14398) --- dashboard/dashboard-ops-reverse-proxy.md | 6 +- dashboard/dashboard-session-sso.md | 96 +-- develop/dev-guide-unstable-result-set.md | 112 ++-- dm/feature-online-ddl.md | 6 +- enable-tls-between-components.md | 2 +- quick-start-with-htap.md | 4 +- schedule-replicas-by-topology-labels.md | 18 +- sql-plan-management.md | 12 +- statistics.md | 20 +- ticdc/deploy-ticdc.md | 2 +- ticdc/integrate-confluent-using-ticdc.md | 4 +- tidb-cloud/integrate-tidbcloud-with-dbt.md | 2 +- tidb-cloud/integrate-tidbcloud-with-zapier.md | 18 +- .../terraform-get-tidbcloud-provider.md | 58 +- tidb-cloud/terraform-use-cluster-resource.md | 590 +++++++++--------- tiflash/create-tiflash-replicas.md | 30 +- tiflash/troubleshoot-tiflash.md | 18 +- tiup/tiup-bench.md | 6 +- 18 files changed, 502 insertions(+), 502 deletions(-) diff --git a/dashboard/dashboard-ops-reverse-proxy.md b/dashboard/dashboard-ops-reverse-proxy.md index 1be6962b53630..fb417e2e4d363 100644 --- a/dashboard/dashboard-ops-reverse-proxy.md +++ b/dashboard/dashboard-ops-reverse-proxy.md @@ -195,9 +195,9 @@ For a deployed cluster: {{< copyable "shell-regular" >}} - ```shell - tiup cluster reload CLUSTER_NAME -R pd - ``` + ```shell + tiup cluster reload CLUSTER_NAME -R pd + ``` See [Common TiUP Operations - Modify the configuration](/maintain-tidb-using-tiup.md#modify-the-configuration) for details. diff --git a/dashboard/dashboard-session-sso.md b/dashboard/dashboard-session-sso.md index f5c1a0148d1af..2a46b56538374 100644 --- a/dashboard/dashboard-session-sso.md +++ b/dashboard/dashboard-session-sso.md @@ -19,28 +19,28 @@ TiDB Dashboard supports [OIDC](https://openid.net/connect/)-based Single Sign-On 4. Fill the **OIDC Client ID** and the **OIDC Discovery URL** fields in the form. 
- Generally, you can obtain the two fields from the SSO service provider:
+ Generally, you can obtain the two fields from the SSO service provider:

- - OIDC Client ID is also called OIDC Token Audience.
- - OIDC Discovery URL is also called OIDC Token Issuer.
+ - OIDC Client ID is also called OIDC Token Audience.
+ - OIDC Discovery URL is also called OIDC Token Issuer.

5. Click **Authorize Impersonation** and input the SQL password.

- TiDB Dashboard will store this SQL password and use it to impersonate a normal SQL sign-in after an SSO sign-in is finished.
+ TiDB Dashboard will store this SQL password and use it to impersonate a normal SQL sign-in after an SSO sign-in is finished.

- ![Sample Step](/media/dashboard/dashboard-session-sso-enable-1.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-enable-1.png)

- > **Note:**
- >
- > The password you have entered will be encrypted and stored. The SSO sign-in will fail after the password of the SQL user is changed. In this case, you can re-enter the password to bring SSO back.
+ > **Note:**
+ >
+ > The password you have entered will be encrypted and stored. The SSO sign-in will fail after the password of the SQL user is changed. In this case, you can re-enter the password to bring SSO back.

6. Click **Authorize and Save**.

- ![Sample Step](/media/dashboard/dashboard-session-sso-enable-2.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-enable-2.png)

7. Click **Update** to save the configuration.

- ![Sample Step](/media/dashboard/dashboard-session-sso-enable-3.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-enable-3.png)

Now SSO sign-in has been enabled for TiDB Dashboard.

@@ -60,7 +60,7 @@ You can disable the SSO, which will completely erase the stored SQL password:

4. Click **Update** to save the configuration.
- ![Sample Step](/media/dashboard/dashboard-session-sso-disable.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-disable.png)

### Re-enter the password after a password change

@@ -72,7 +72,7 @@ The SSO sign-in will fail once the password of the SQL user is changed. In this

3. In the **Single Sign-On** section, click **Authorize Impersonation** and input the updated SQL password.

- ![Sample Step](/media/dashboard/dashboard-session-sso-reauthorize.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-reauthorize.png)

4. Click **Authorize and Save**.

@@ -82,7 +82,7 @@ Once SSO is configured for TiDB Dashboard, you can sign in via SSO by taking fol

1. In the sign-in page of TiDB Dashboard, click **Sign in via Company Account**.

- ![Sample Step](/media/dashboard/dashboard-session-sso-signin.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-signin.png)

2. Sign in to the system with the configured SSO service.

@@ -102,7 +102,7 @@ First, create an Okta Application Integration to integrate SSO.

3. Click **Create App Integration**.

- ![Sample Step](/media/dashboard/dashboard-session-sso-okta-1.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-okta-1.png)

4. In the popped-up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**.

@@ -110,43 +110,43 @@ First, create an Okta Application Integration to integrate SSO.

6. Click the **Next** button.

- ![Sample Step](/media/dashboard/dashboard-session-sso-okta-2.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-okta-2.png)

7. Fill **Sign-in redirect URIs** as follows:

- ```
- http://DASHBOARD_IP:PORT/dashboard/?sso_callback=1
- ```
+ ```
+ http://DASHBOARD_IP:PORT/dashboard/?sso_callback=1
+ ```

- Substitute `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port that you use to access the TiDB Dashboard in the browser.
+ Substitute `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port that you use to access the TiDB Dashboard in the browser.

8.
Fill **Sign-out redirect URIs** as follows:

- ```
- http://DASHBOARD_IP:PORT/dashboard/
- ```
+ ```
+ http://DASHBOARD_IP:PORT/dashboard/
+ ```

- Similarly, substitute `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port.
+ Similarly, substitute `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port.

- ![Sample Step](/media/dashboard/dashboard-session-sso-okta-3.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-okta-3.png)

9. Configure which types of users in your organization are allowed for SSO sign-in in the **Assignments** field, and then click **Save** to save the configuration.

- ![Sample Step](/media/dashboard/dashboard-session-sso-okta-4.png)
+ ![Sample Step](/media/dashboard/dashboard-session-sso-okta-4.png)

### Step 2: Obtain OIDC information and fill in TiDB Dashboard

1. In the Application Integration just created in Okta, click **Sign On**.

- ![Sample Step 1](/media/dashboard/dashboard-session-sso-okta-info-1.png)
+ ![Sample Step 1](/media/dashboard/dashboard-session-sso-okta-info-1.png)

2. Copy values of the **Issuer** and **Audience** fields from the **OpenID Connect ID Token** section.

- ![Sample Step 2](/media/dashboard/dashboard-session-sso-okta-info-2.png)
+ ![Sample Step 2](/media/dashboard/dashboard-session-sso-okta-info-2.png)

3. Open the TiDB Dashboard configuration page, fill **OIDC Client ID** with **Audience** obtained from the last step and fill **OIDC Discovery URL** with **Issuer**. Then finish the authorization and save the configuration. For example:

- ![Sample Step 3](/media/dashboard/dashboard-session-sso-okta-info-3.png)
+ ![Sample Step 3](/media/dashboard/dashboard-session-sso-okta-info-3.png)

Now TiDB Dashboard has been configured to use Okta SSO for sign-in.

@@ -160,33 +160,33 @@ Similar to Okta, [Auth0](https://auth0.com/) also provides OIDC SSO identity ser

2. Navigate to **Applications** > **Applications** on the left sidebar.

-3. Click **Create App Integration**.
+3.
Click **Create Application**.

- ![Create Application](/media/dashboard/dashboard-session-sso-auth0-create-app.png)
+ ![Create Application](/media/dashboard/dashboard-session-sso-auth0-create-app.png)

In the popped-up dialog, fill **Name**, for example, "TiDB Dashboard". Choose **Single Page Web Applications** in **Choose an application type**. Click **Create**.

4. Click **Settings**.

- ![Settings](/media/dashboard/dashboard-session-sso-auth0-settings-1.png)
+ ![Settings](/media/dashboard/dashboard-session-sso-auth0-settings-1.png)

5. Fill **Allowed Callback URLs** as follows:

- ```
- http://DASHBOARD_IP:PORT/dashboard/?sso_callback=1
- ```
+ ```
+ http://DASHBOARD_IP:PORT/dashboard/?sso_callback=1
+ ```

- Replace `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port that you use to access the TiDB Dashboard in your browser.
+ Replace `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port that you use to access the TiDB Dashboard in your browser.

6. Fill **Allowed Logout URLs** as follows:

- ```
- http://DASHBOARD_IP:PORT/dashboard/
+ ```
+ http://DASHBOARD_IP:PORT/dashboard/
 ```

- Similarly, replace `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port.
+ Similarly, replace `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port.

- ![Settings](/media/dashboard/dashboard-session-sso-auth0-settings-2.png)
+ ![Settings](/media/dashboard/dashboard-session-sso-auth0-settings-2.png)

7. Keep the default values for other settings and click **Save Changes**.

@@ -196,7 +196,7 @@ Similar to Okta, [Auth0](https://auth0.com/) also provides OIDC SSO identity ser

2. Fill **OIDC Discovery URL** with the **Domain** field value prefixed with `https://` and suffixed with `/`, for example, `https://example.us.auth0.com/`. Complete authorization and save the configuration.
- ![Settings](/media/dashboard/dashboard-session-sso-auth0-settings-3.png) + ![Settings](/media/dashboard/dashboard-session-sso-auth0-settings-3.png) Now TiDB Dashboard has been configured to use Auth0 SSO for sign-in. @@ -211,19 +211,19 @@ Now TiDB Dashboard has been configured to use Auth0 SSO for sign-in. 2. Navigate from the top sidebar **Applications**. 3. Click **Applications - Add**. - ![Settings](/media/dashboard/dashboard-session-sso-casdoor-settings-1.png) + ![Settings](/media/dashboard/dashboard-session-sso-casdoor-settings-1.png) 4. Fill **Name** and **Display name**, for example, **TiDB Dashboard**. 5. Add **Redirect URLs** as follows: - ``` - http://DASHBOARD_IP:PORT/dashboard/?sso_callback=1 - ``` + ``` + http://DASHBOARD_IP:PORT/dashboard/?sso_callback=1 + ``` + + Replace `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port that you use to access the TiDB Dashboard in your browser. - Replace `DASHBOARD_IP:PORT` with the actual domain (or IP address) and port that you use to access the TiDB Dashboard in your browser. - - ![Settings](/media/dashboard/dashboard-session-sso-casdoor-settings-2.png) + ![Settings](/media/dashboard/dashboard-session-sso-casdoor-settings-2.png) 6. Keep the default values for other settings and click **Save & Exit**. @@ -235,6 +235,6 @@ Now TiDB Dashboard has been configured to use Auth0 SSO for sign-in. 2. Fill **OIDC Discovery URL** with the **Domain** field value prefixed with `https://` and suffixed with `/`, for example, `https://casdoor.example.com/`. Complete authorization and save the configuration. - ![Settings](/media/dashboard/dashboard-session-sso-casdoor-settings-3.png) + ![Settings](/media/dashboard/dashboard-session-sso-casdoor-settings-3.png) Now TiDB Dashboard has been configured to use Casdoor SSO for sign-in. 
\ No newline at end of file diff --git a/develop/dev-guide-unstable-result-set.md b/develop/dev-guide-unstable-result-set.md index 2ed4d18afae7d..dacc60b3d332c 100644 --- a/develop/dev-guide-unstable-result-set.md +++ b/develop/dev-guide-unstable-result-set.md @@ -70,25 +70,25 @@ Then two values that match this SQL are returned. The first returned value: - ```sql - +------------+--------------+------------------------+ - | class | stuname | max( `b`.`courscore` ) | - +------------+--------------+------------------------+ - | 2018_CS_01 | MonkeyDLuffy | 95.5 | - | 2018_CS_03 | PatrickStar | 99.0 | - +------------+--------------+------------------------+ - ``` +```sql ++------------+--------------+------------------------+ +| class | stuname | max( `b`.`courscore` ) | ++------------+--------------+------------------------+ +| 2018_CS_01 | MonkeyDLuffy | 95.5 | +| 2018_CS_03 | PatrickStar | 99.0 | ++------------+--------------+------------------------+ +``` The second returned value: - ```sql - +------------+--------------+------------------+ - | class | stuname | max(b.courscore) | - +------------+--------------+------------------+ - | 2018_CS_01 | MonkeyDLuffy | 95.5 | - | 2018_CS_03 | SpongeBob | 99.0 | - +------------+--------------+------------------+ - ``` +```sql ++------------+--------------+------------------+ +| class | stuname | max(b.courscore) | ++------------+--------------+------------------+ +| 2018_CS_01 | MonkeyDLuffy | 95.5 | +| 2018_CS_03 | SpongeBob | 99.0 | ++------------+--------------+------------------+ +``` There are two results because you did **_NOT_** specify how to get the value of the `a`.`stuname` field in SQL, and two results are both satisfied by SQL semantics. It results in an unstable result set. Therefore, if you want to guarantee the stability of the result set of the `GROUP BY` statement, use the `FULL GROUP BY` syntax. @@ -177,59 +177,59 @@ To let `GROUP_CONCAT()` get the result set output in order, you need to add the 1. 
Excluded `ORDER BY` - First query: + First query: - {{< copyable "sql" >}} + {{< copyable "sql" >}} - ```sql - mysql> select GROUP_CONCAT( customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; - +-------------------------------------------------------------------------+ - | GROUP_CONCAT(customer_id SEPARATOR ',') | - +-------------------------------------------------------------------------+ - | 20000200992,20000200993,20000200994,20000200995,20000200996,20000200... | - +-------------------------------------------------------------------------+ - ``` + ```sql + mysql> select GROUP_CONCAT( customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; + +-------------------------------------------------------------------------+ + | GROUP_CONCAT(customer_id SEPARATOR ',') | + +-------------------------------------------------------------------------+ + | 20000200992,20000200993,20000200994,20000200995,20000200996,20000200... | + +-------------------------------------------------------------------------+ + ``` - Second query: + Second query: - {{< copyable "sql" >}} + {{< copyable "sql" >}} - ```sql - mysql> select GROUP_CONCAT( customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; - +-------------------------------------------------------------------------+ - | GROUP_CONCAT(customer_id SEPARATOR ',') | - +-------------------------------------------------------------------------+ - | 20000203040,20000203041,20000203042,20000203043,20000203044,20000203... 
| - +-------------------------------------------------------------------------+ - ``` + ```sql + mysql> select GROUP_CONCAT( customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; + +-------------------------------------------------------------------------+ + | GROUP_CONCAT(customer_id SEPARATOR ',') | + +-------------------------------------------------------------------------+ + | 20000203040,20000203041,20000203042,20000203043,20000203044,20000203... | + +-------------------------------------------------------------------------+ + ``` 2. Include `ORDER BY` - First query: + First query: - {{< copyable "sql" >}} + {{< copyable "sql" >}} - ```sql - mysql> select GROUP_CONCAT( customer_id order by customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; - +-------------------------------------------------------------------------+ - | GROUP_CONCAT(customer_id SEPARATOR ',') | - +-------------------------------------------------------------------------+ - | 20000200000,20000200001,20000200002,20000200003,20000200004,20000200... | - +-------------------------------------------------------------------------+ - ``` + ```sql + mysql> select GROUP_CONCAT( customer_id order by customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; + +-------------------------------------------------------------------------+ + | GROUP_CONCAT(customer_id SEPARATOR ',') | + +-------------------------------------------------------------------------+ + | 20000200000,20000200001,20000200002,20000200003,20000200004,20000200... 
| + +-------------------------------------------------------------------------+ + ``` - Second query: + Second query: - {{< copyable "sql" >}} + {{< copyable "sql" >}} - ```sql - mysql> select GROUP_CONCAT( customer_id order by customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; - +-------------------------------------------------------------------------+ - | GROUP_CONCAT(customer_id SEPARATOR ',') | - +-------------------------------------------------------------------------+ - | 20000200000,20000200001,20000200002,20000200003,20000200004,20000200... | - +-------------------------------------------------------------------------+ - ``` + ```sql + mysql> select GROUP_CONCAT( customer_id order by customer_id SEPARATOR ',' ) FROM customer where customer_id like '200002%'; + +-------------------------------------------------------------------------+ + | GROUP_CONCAT(customer_id SEPARATOR ',') | + +-------------------------------------------------------------------------+ + | 20000200000,20000200001,20000200002,20000200003,20000200004,20000200... | + +-------------------------------------------------------------------------+ + ``` ## Unstable results in `SELECT * FROM T LIMIT N` diff --git a/dm/feature-online-ddl.md b/dm/feature-online-ddl.md index d4e52b7f4aed2..c9e18cc73d920 100644 --- a/dm/feature-online-ddl.md +++ b/dm/feature-online-ddl.md @@ -182,9 +182,9 @@ The SQL statements mostly used by pt-osc and the corresponding operation of DM a * DM splits the above `rename` operation into two SQL statements: ```sql - rename test.test4 to test._test4_old; - rename test._test4_new to test.test4; - ``` + rename test.test4 to test._test4_old; + rename test._test4_new to test.test4; + ``` * DM does not execute `rename to _test4_old`. 
When executing `rename ghost_table to origin table`, DM takes the following steps:

diff --git a/enable-tls-between-components.md b/enable-tls-between-components.md
index 06acae406a49e..af73a7887ddfc 100644
--- a/enable-tls-between-components.md
+++ b/enable-tls-between-components.md
@@ -86,7 +86,7 @@ Currently, it is not supported to only enable encrypted transmission of some spe

Configure in the `tiflash.toml` file, and change the `http_port` item to `https_port`:

- ```toml
+ ```toml
[security]
## The path for certificates. An empty string means that secure connections are disabled.
# Path of the file that contains a list of trusted SSL CAs. If it is set, the following settings `cert_path` and `key_path` are also needed.

diff --git a/quick-start-with-htap.md b/quick-start-with-htap.md
index 6c5b0e6c16f92..abd0f6f2ea961 100644
--- a/quick-start-with-htap.md
+++ b/quick-start-with-htap.md
@@ -97,7 +97,7 @@ In the following steps, you can create a [TPC-H](http://www.tpc.org/tpch/) datas

| test.lineitem | 6491711 | 849.07 MiB| 99.06 MiB | 948.13 MiB|
+---------------+----------------+-----------+------------+-----------+
8 rows in set (0.06 sec)

- ```
+ ```

This is a database of a commercial ordering system, in which the `test.nation` table indicates the information about countries, the `test.region` table indicates the information about regions, the `test.part` table indicates the information about parts, the `test.supplier` table indicates the information about suppliers, the `test.partsupp` table indicates the information about parts of suppliers, the `test.customer` table indicates the information about customers, the `test.orders` table indicates the information about orders, and the `test.lineitem` table indicates the information about line items.

@@ -139,7 +139,7 @@ This is a shipping priority query, which provides the priority and potential rev

### Step 4. 
Replicate the test data to the columnar storage engine

-After TiFlash is deployed, TiKV does not replicate data to TiFlash immediately. You need to execute the following DDL statements in a MySQL client of TiDB to specify which tables need to be replicated. After that, TiDB will create the specified replicas in TiFlash accordingly.
+After TiFlash is deployed, TiKV does not replicate data to TiFlash immediately. You need to execute the following DDL statements in a MySQL client of TiDB to specify which tables need to be replicated. After that, TiDB will create the specified replicas in TiFlash accordingly.

{{< copyable "sql" >}}

diff --git a/schedule-replicas-by-topology-labels.md b/schedule-replicas-by-topology-labels.md
index c8506f7e94cce..682a112b915a4 100644
--- a/schedule-replicas-by-topology-labels.md
+++ b/schedule-replicas-by-topology-labels.md
@@ -24,7 +24,7 @@ Assume that the topology has four layers: zone > data center (dc) > rack > host,

+ Use the command-line flag to start a TiKV instance:

- ```shell
+ ```shell
tikv-server --labels zone=<zone>,dc=<dc>,rack=<rack>,host=<host>
```

@@ -41,14 +41,14 @@ Assume that the topology has four layers: zone > data center (dc) > rack > host,

To set labels for TiFlash, you can use the `tiflash-learner.toml` file, which is the configuration file of tiflash-proxy:

- ```toml
- [server]
- [server.labels]
- zone = "<zone>"
- dc = "<dc>"
- rack = "<rack>"
- host = "<host>"
- ```
+```toml
+[server]
+[server.labels]
+zone = "<zone>"
+dc = "<dc>"
+rack = "<rack>"
+host = "<host>"
+```

### (Optional) Configure `labels` for TiDB

diff --git a/sql-plan-management.md b/sql-plan-management.md
index da894e724d865..442bb2c717244 100644
--- a/sql-plan-management.md
+++ b/sql-plan-management.md
@@ -168,15 +168,15 @@ The original SQL statement and the bound statement must have the same text after

- This binding can be created successfully because the texts before and after parameterization and hint removal are the same: `SELECT * FROM test . 
t WHERE a > ?` - ```sql - CREATE BINDING FOR SELECT * FROM t WHERE a > 1 USING SELECT * FROM t use index (idx) WHERE a > 2 - ``` + ```sql + CREATE BINDING FOR SELECT * FROM t WHERE a > 1 USING SELECT * FROM t use index (idx) WHERE a > 2 + ``` - This binding will fail because the original SQL statement is processed as `SELECT * FROM test . t WHERE a > ?`, while the bound SQL statement is processed differently as `SELECT * FROM test . t WHERE b > ?`. - ```sql - CREATE BINDING FOR SELECT * FROM t WHERE a > 1 USING SELECT * FROM t use index(idx) WHERE b > 2 - ``` + ```sql + CREATE BINDING FOR SELECT * FROM t WHERE a > 1 USING SELECT * FROM t use index(idx) WHERE b > 2 + ``` > **Note:** > diff --git a/statistics.md b/statistics.md index 1c5ed0e9e225d..6d3f4ae0c42a7 100644 --- a/statistics.md +++ b/statistics.md @@ -37,22 +37,22 @@ When `tidb_analyze_version = 2`, if memory overflow occurs after `ANALYZE` is ex - If the `ANALYZE` statement is executed manually, manually analyze every table to be analyzed. 
- ```sql - SELECT DISTINCT(CONCAT('ANALYZE TABLE ', table_schema, '.', table_name, ';')) FROM information_schema.tables, mysql.stats_histograms WHERE stats_ver = 2 AND table_id = tidb_table_id; - ``` + ```sql + SELECT DISTINCT(CONCAT('ANALYZE TABLE ', table_schema, '.', table_name, ';')) FROM information_schema.tables, mysql.stats_histograms WHERE stats_ver = 2 AND table_id = tidb_table_id; + ``` - If TiDB automatically executes the `ANALYZE` statement because the auto-analysis has been enabled, execute the following statement that generates the `DROP STATS` statement: - ```sql - SELECT DISTINCT(CONCAT('DROP STATS ', table_schema, '.', table_name, ';')) FROM information_schema.tables, mysql.stats_histograms WHERE stats_ver = 2 AND table_id = tidb_table_id; - ``` + ```sql + SELECT DISTINCT(CONCAT('DROP STATS ', table_schema, '.', table_name, ';')) FROM information_schema.tables, mysql.stats_histograms WHERE stats_ver = 2 AND table_id = tidb_table_id; + ``` - If the result of the preceding statement is too long to copy and paste, you can export the result to a temporary text file and then perform execution from the file like this: - ```sql - SELECT DISTINCT ... INTO OUTFILE '/tmp/sql.txt'; - mysql -h ${TiDB_IP} -u user -P ${TIDB_PORT} ... < '/tmp/sql.txt' - ``` + ```sql + SELECT DISTINCT ... INTO OUTFILE '/tmp/sql.txt'; + mysql -h ${TiDB_IP} -u user -P ${TIDB_PORT} ... < '/tmp/sql.txt' + ``` This document briefly introduces the histogram, Count-Min Sketch, and Top-N, and details the collection and maintenance of statistics. diff --git a/ticdc/deploy-ticdc.md b/ticdc/deploy-ticdc.md index 3d1fbdd75d9a0..2350d0c2e5a55 100644 --- a/ticdc/deploy-ticdc.md +++ b/ticdc/deploy-ticdc.md @@ -115,7 +115,7 @@ This section describes how to use the [`tiup cluster edit-config`](/tiup/tiup-co 1. Run the `tiup cluster edit-config` command. 
Replace `<cluster-name>` with the actual cluster name:

- ```shell
+ ```shell
tiup cluster edit-config <cluster-name>
```

diff --git a/ticdc/integrate-confluent-using-ticdc.md b/ticdc/integrate-confluent-using-ticdc.md
index 3d940ea2e5810..43b71808f7492 100644
--- a/ticdc/integrate-confluent-using-ticdc.md
+++ b/ticdc/integrate-confluent-using-ticdc.md
@@ -71,7 +71,7 @@ The preceding steps are performed in a lab environment. You can also deploy a cl

After creation, a key pair file is generated, as shown below:

- ```
+ ```
=== Confluent Cloud API key: yyy-yyyyy ===

API key: 7NBH2CAFM2LMGTH7

@@ -229,7 +229,7 @@ create or replace TABLE TIDB_TEST_ITEM (

);
```

-2. Create a stream for `TIDB_TEST_ITEM` and set `append_only` to `true` as follows.
+2. Create a stream for `TIDB_TEST_ITEM` and set `append_only` to `true` as follows.

```
create or replace stream TEST_ITEM_STREAM on table TIDB_TEST_ITEM append_only=true;

diff --git a/tidb-cloud/integrate-tidbcloud-with-dbt.md b/tidb-cloud/integrate-tidbcloud-with-dbt.md
index 3946c22a39339..1f4b6f62da682 100644
--- a/tidb-cloud/integrate-tidbcloud-with-dbt.md
+++ b/tidb-cloud/integrate-tidbcloud-with-dbt.md
@@ -77,7 +77,7 @@ To configure the project, take the following steps:

In the editor, add the following configuration:

- ```yaml
+ ```yaml
jaffle_shop_tidb: # Project name
target: dev # Target
outputs:

diff --git a/tidb-cloud/integrate-tidbcloud-with-zapier.md b/tidb-cloud/integrate-tidbcloud-with-zapier.md
index 1af0879314d73..4b3442919251f 100644
--- a/tidb-cloud/integrate-tidbcloud-with-zapier.md
+++ b/tidb-cloud/integrate-tidbcloud-with-zapier.md
@@ -119,15 +119,15 @@ In the editor page, you can see the trigger and action. Click the trigger to set

Click **Test action** to create a new row in the table. If you check your TiDB Cloud cluster, you can find the data is written successfully.
- ```sql - mysql> SELECT * FROM test.github_global_event; - +-------------+-------------+------------+-----------------+----------------------------------------------+--------+---------------------+ - | id | type | actor | repo_name | repo_url | public | created_at | - +-------------+-------------+------------+-----------------+----------------------------------------------+--------+---------------------+ - | 25324462424 | CreateEvent | shiyuhang0 | shiyuhang0/docs | https://api.github.com/repos/shiyuhang0/docs | True | 2022-11-18 08:03:14 | - +-------------+-------------+------------+-----------------+----------------------------------------------+--------+---------------------+ - 1 row in set (0.17 sec) - ``` + ```sql + mysql> SELECT * FROM test.github_global_event; + +-------------+-------------+------------+-----------------+----------------------------------------------+--------+---------------------+ + | id | type | actor | repo_name | repo_url | public | created_at | + +-------------+-------------+------------+-----------------+----------------------------------------------+--------+---------------------+ + | 25324462424 | CreateEvent | shiyuhang0 | shiyuhang0/docs | https://api.github.com/repos/shiyuhang0/docs | True | 2022-11-18 08:03:14 | + +-------------+-------------+------------+-----------------+----------------------------------------------+--------+---------------------+ + 1 row in set (0.17 sec) + ``` ### Step 5: Publish your zap diff --git a/tidb-cloud/terraform-get-tidbcloud-provider.md b/tidb-cloud/terraform-get-tidbcloud-provider.md index d3d0d80b8e6b0..4a0dd59ca150b 100644 --- a/tidb-cloud/terraform-get-tidbcloud-provider.md +++ b/tidb-cloud/terraform-get-tidbcloud-provider.md @@ -43,43 +43,43 @@ For detailed steps, see [TiDB Cloud API documentation](https://docs.pingcap.com/ 1. 
Create a `main.tf` file: - ``` - terraform { - required_providers { - tidbcloud = { - source = "tidbcloud/tidbcloud" - version = "~> 0.1.0" - } - } - required_version = ">= 1.0.0" - } - ``` - - - The `source` attribute specifies the target Terraform provider to be downloaded from [Terraform Registry](https://registry.terraform.io/). - - The `version` attribute is optional, which specifies the version of the Terraform provider. If it is not specified, the latest provider version is used by default. - - The `required_version` is optional, which specifies the version of Terraform. If it is not specified, the latest Terraform version is used by default. + ``` + terraform { + required_providers { + tidbcloud = { + source = "tidbcloud/tidbcloud" + version = "~> 0.1.0" + } + } + required_version = ">= 1.0.0" + } + ``` + + - The `source` attribute specifies the target Terraform provider to be downloaded from [Terraform Registry](https://registry.terraform.io/). + - The `version` attribute is optional, which specifies the version of the Terraform provider. If it is not specified, the latest provider version is used by default. + - The `required_version` is optional, which specifies the version of Terraform. If it is not specified, the latest Terraform version is used by default. 2. Run the `terraform init` command to download TiDB Cloud Terraform Provider from Terraform Registry. - ``` - $ terraform init + ``` + $ terraform init - Initializing the backend... + Initializing the backend... - Initializing provider plugins... - - Reusing previous version of tidbcloud/tidbcloud from the dependency lock file - - Using previously-installed tidbcloud/tidbcloud v0.1.0 + Initializing provider plugins... + - Reusing previous version of tidbcloud/tidbcloud from the dependency lock file + - Using previously-installed tidbcloud/tidbcloud v0.1.0 - Terraform has been successfully initialized! + Terraform has been successfully initialized! - You may now begin working with Terraform. 
Try running "terraform plan" to see - any changes that are required for your infrastructure. All Terraform commands - should now work. + You may now begin working with Terraform. Try running "terraform plan" to see + any changes that are required for your infrastructure. All Terraform commands + should now work. - If you ever set or change modules or backend configuration for Terraform, - rerun this command to reinitialize your working directory. If you forget, other - commands will detect it and remind you to do so if necessary. - ``` + If you ever set or change modules or backend configuration for Terraform, + rerun this command to reinitialize your working directory. If you forget, other + commands will detect it and remind you to do so if necessary. + ``` ## Step 4. Configure TiDB Cloud Terraform Provider with the API key diff --git a/tidb-cloud/terraform-use-cluster-resource.md b/tidb-cloud/terraform-use-cluster-resource.md index a672b723effa7..2cd11ac54a499 100644 --- a/tidb-cloud/terraform-use-cluster-resource.md +++ b/tidb-cloud/terraform-use-cluster-resource.md @@ -21,96 +21,96 @@ To view the information of all available projects, you can use the `tidbcloud_pr 1. In the `main.tf` file that is created when you [Get TiDB Cloud Terraform Provider](/tidb-cloud/terraform-get-tidbcloud-provider.md), add the `data` and `output` blocks as follows: - ``` - terraform { - required_providers { - tidbcloud = { - source = "tidbcloud/tidbcloud" - version = "~> 0.1.0" - } - } - required_version = ">= 1.0.0" - } - - provider "tidbcloud" { - public_key = "fake_public_key" - private_key = "fake_private_key" - } - - data "tidbcloud_projects" "example_project" { - page = 1 - page_size = 10 - } - - output "projects" { - value = data.tidbcloud_projects.example_project.items - } - ``` - - - Use the `data` block to define the data source of TiDB Cloud, including the data source type and the data source name. 
- - - To use the projects data source, set the data source type as `tidbcloud_projects`. - - For the data source name, you can define it according to your need. For example, "example_project". - - For the `tidbcloud_projects` data source, you can use the `page` and `page_size` attributes to limit the maximum number of projects you want to check. - - - Use the `output` block to define the data source information to be displayed in the output, and expose the information for other Terraform configurations to use. + ``` + terraform { + required_providers { + tidbcloud = { + source = "tidbcloud/tidbcloud" + version = "~> 0.1.0" + } + } + required_version = ">= 1.0.0" + } + + provider "tidbcloud" { + public_key = "fake_public_key" + private_key = "fake_private_key" + } + + data "tidbcloud_projects" "example_project" { + page = 1 + page_size = 10 + } + + output "projects" { + value = data.tidbcloud_projects.example_project.items + } + ``` + + - Use the `data` block to define the data source of TiDB Cloud, including the data source type and the data source name. + + - To use the projects data source, set the data source type as `tidbcloud_projects`. + - For the data source name, you can define it according to your need. For example, "example_project". + - For the `tidbcloud_projects` data source, you can use the `page` and `page_size` attributes to limit the maximum number of projects you want to check. + + - Use the `output` block to define the data source information to be displayed in the output, and expose the information for other Terraform configurations to use. The `output` block works similarly to returned values in programming languages. See [Terraform documentation](https://www.terraform.io/language/values/outputs) for more details. - To get all the available configurations for the resources and data sources, see this [configuration documentation](https://registry.terraform.io/providers/tidbcloud/tidbcloud/latest/docs). 
+ To get all the available configurations for the resources and data sources, see this [configuration documentation](https://registry.terraform.io/providers/tidbcloud/tidbcloud/latest/docs). 2. Run the `terraform apply` command to apply the configurations. You need to type `yes` at the confirmation prompt to proceed. - To skip the prompt, use `terraform apply --auto-approve`: - - ``` - $ terraform apply --auto-approve - - Changes to Outputs: - + projects = [ - + { - + cluster_count = 0 - + create_timestamp = "1649154426" - + id = "1372813089191121286" - + name = "test1" - + org_id = "1372813089189921287" - + user_count = 1 - }, - + { - + cluster_count = 1 - + create_timestamp = "1640602740" - + id = "1372813089189561287" - + name = "default project" - + org_id = "1372813089189921287" - + user_count = 1 - }, - ] - - You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure. - - Apply complete! Resources: 0 added, 0 changed, 0 destroyed. 
- - Outputs: - - projects = tolist([ - { - "cluster_count" = 0 - "create_timestamp" = "1649154426" - "id" = "1372813089191121286" - "name" = "test1" - "org_id" = "1372813089189921287" - "user_count" = 1 - }, - { - "cluster_count" = 1 - "create_timestamp" = "1640602740" - "id" = "1372813089189561287" - "name" = "default project" - "org_id" = "1372813089189921287" - "user_count" = 1 - }, - ]) - ``` + To skip the prompt, use `terraform apply --auto-approve`: + + ``` + $ terraform apply --auto-approve + + Changes to Outputs: + + projects = [ + + { + + cluster_count = 0 + + create_timestamp = "1649154426" + + id = "1372813089191121286" + + name = "test1" + + org_id = "1372813089189921287" + + user_count = 1 + }, + + { + + cluster_count = 1 + + create_timestamp = "1640602740" + + id = "1372813089189561287" + + name = "default project" + + org_id = "1372813089189921287" + + user_count = 1 + }, + ] + + You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure. + + Apply complete! Resources: 0 added, 0 changed, 0 destroyed. + + Outputs: + + projects = tolist([ + { + "cluster_count" = 0 + "create_timestamp" = "1649154426" + "id" = "1372813089191121286" + "name" = "test1" + "org_id" = "1372813089189921287" + "user_count" = 1 + }, + { + "cluster_count" = 1 + "create_timestamp" = "1640602740" + "id" = "1372813089189561287" + "name" = "default project" + "org_id" = "1372813089189921287" + "user_count" = 1 + }, + ]) + ``` Now, you can get all the available projects from the output. Copy one of the project IDs that you need. @@ -149,7 +149,7 @@ To get the cluster specification information, you can use the `tidbcloud_cluster
Cluster specification - + ``` { "cloud_provider" = "AWS" @@ -294,20 +294,20 @@ The following example shows how to create a Dedicated Tier cluster. 2. Create a `cluster.tf` file: ``` - terraform { - required_providers { - tidbcloud = { - source = "tidbcloud/tidbcloud" - version = "~> 0.1.0" - } - } - required_version = ">= 1.0.0" - } + terraform { + required_providers { + tidbcloud = { + source = "tidbcloud/tidbcloud" + version = "~> 0.1.0" + } + } + required_version = ">= 1.0.0" + } - provider "tidbcloud" { - public_key = "fake_public_key" - private_key = "fake_private_key" - } + provider "tidbcloud" { + public_key = "fake_public_key" + private_key = "fake_private_key" + } resource "tidbcloud_cluster" "example_cluster" { project_id = "1372813089189561287" @@ -343,7 +343,7 @@ The following example shows how to create a Dedicated Tier cluster. ```shell $ terraform apply - + Terraform will perform the following actions: # tidbcloud_cluster.example_cluster will be created @@ -387,11 +387,11 @@ The following example shows how to create a Dedicated Tier cluster. Enter a value: ``` - As in the above result, Terraform generates an execution plan for you, which describes the actions Terraform will take: + As in the above result, Terraform generates an execution plan for you, which describes the actions Terraform will take: - - You can check the difference between the configurations and the states. - - You can also see the results of this `apply`. It will add a new resource, and no resource will be changed or destroyed. - - The `known after apply` shows that you will get the value after `apply`. + - You can check the difference between the configurations and the states. + - You can also see the results of this `apply`. It will add a new resource, and no resource will be changed or destroyed. + - The `known after apply` shows that you will get the value after `apply`. 4. 
If everything in your plan looks fine, type `yes` to continue:

@@ -624,75 +624,75 @@ You can scale a TiDB cluster when its status is `AVAILABLE`.

For example, to add one more node for TiDB, 3 more nodes for TiKV (the number of TiKV nodes must be a multiple of 3, because its scaling step is 3. You can [get this information from the cluster specification](#get-cluster-specification-information-using-the-tidbcloud_cluster_specs-data-source)), and one more node for TiFlash, you can edit the configurations as follows:

-    ```
-    components = {
-      tidb = {
-        node_size : "8C16G"
-        node_quantity : 2
-      }
-      tikv = {
-        node_size : "8C32G"
-        storage_size_gib : 500
-        node_quantity : 6
-      }
-      tiflash = {
-        node_size : "8C64G"
-        storage_size_gib : 500
-        node_quantity : 2
-      }
-    }
-    ```
+   ```
+   components = {
+     tidb = {
+       node_size : "8C16G"
+       node_quantity : 2
+     }
+     tikv = {
+       node_size : "8C32G"
+       storage_size_gib : 500
+       node_quantity : 6
+     }
+     tiflash = {
+       node_size : "8C64G"
+       storage_size_gib : 500
+       node_quantity : 2
+     }
+   }
+   ```

2. Run the `terraform apply` command and type `yes` for confirmation:

-    ```
-    $ terraform apply

-    tidbcloud_cluster.example_cluster: Refreshing state... [id=1379661944630234067]

-    Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols: - ~ update in-place - - Terraform will perform the following actions: - - # tidbcloud_cluster.example_cluster will be updated in-place - ~ resource "tidbcloud_cluster" "example_cluster" { - ~ config = { - ~ components = { - ~ tidb = { - ~ node_quantity = 1 -> 2 - # (1 unchanged attribute hidden) - } - ~ tiflash = { - ~ node_quantity = 1 -> 2 - # (2 unchanged attributes hidden) - } - ~ tikv = { - ~ node_quantity = 3 -> 6 - # (2 unchanged attributes hidden) - } - } - # (3 unchanged attributes hidden) - } - id = "1379661944630234067" - name = "firstCluster" - ~ status = "AVAILABLE" -> (known after apply) - # (4 unchanged attributes hidden) - } - - Plan: 0 to add, 1 to change, 0 to destroy. - - Do you want to perform these actions? - Terraform will perform the actions described above. - Only 'yes' will be accepted to approve. - - Enter a value: yes - - tidbcloud_cluster.example_cluster: Modifying... [id=1379661944630234067] - tidbcloud_cluster.example_cluster: Modifications complete after 2s [id=1379661944630234067] - - Apply complete! Resources: 0 added, 1 changed, 0 destroyed. - ``` + ``` + $ terraform apply + + tidbcloud_cluster.example_cluster: Refreshing state... [id=1379661944630234067] + + Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols:
+     ~ update in-place

+   Terraform will perform the following actions:

+     # tidbcloud_cluster.example_cluster will be updated in-place
+     ~ resource "tidbcloud_cluster" "example_cluster" {
+         ~ config = {
+             ~ components = {
+                 ~ tidb = {
+                     ~ node_quantity = 1 -> 2
+                       # (1 unchanged attribute hidden)
+                   }
+                 ~ tiflash = {
+                     ~ node_quantity = 1 -> 2
+                       # (2 unchanged attributes hidden)
+                   }
+                 ~ tikv = {
+                     ~ node_quantity = 3 -> 6
+                       # (2 unchanged attributes hidden)
+                   }
+               }
+               # (3 unchanged attributes hidden)
+           }
+           id = "1379661944630234067"
+           name = "firstCluster"
+         ~ status = "AVAILABLE" -> (known after apply)
+           # (4 unchanged attributes hidden)
+       }

+   Plan: 0 to add, 1 to change, 0 to destroy.

+   Do you want to perform these actions?
+     Terraform will perform the actions described above.
+     Only 'yes' will be accepted to approve.

+     Enter a value: yes

+   tidbcloud_cluster.example_cluster: Modifying... [id=1379661944630234067]
+   tidbcloud_cluster.example_cluster: Modifications complete after 2s [id=1379661944630234067]

+   Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
+   ```

Wait for the status to turn from `MODIFYING` to `AVAILABLE`.

@@ -705,143 +705,143 @@ You can pause a cluster when its status is `AVAILABLE` or resume a cluster when

1. In the `cluster.tf` file that is used when you [create the cluster](#create-a-cluster-using-the-cluster-resource), add `paused = true` to the `config` configurations:

-    ```
-    config = {
-      paused = true
-      root_password = "Your_root_password1."
-      port = 4000
-      ...
-    }
-    ```
+   ```
+   config = {
+     paused = true
+     root_password = "Your_root_password1."
+     port = 4000
+     ...
+   }
+   ```

2. Run the `terraform apply` command and type `yes` after checking the plan:

-    ```
-    $ terraform apply

-    tidbcloud_cluster.example_cluster: Refreshing state... 
[id=1379661944630234067] - Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: - ~ update in-place + Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: + ~ update in-place - Terraform will perform the following actions: + Terraform will perform the following actions: - # tidbcloud_cluster.example_cluster will be updated in-place - ~ resource "tidbcloud_cluster" "example_cluster" { - ~ config = { - + paused = true - # (4 unchanged attributes hidden) - } - id = "1379661944630234067" - name = "firstCluster" - ~ status = "AVAILABLE" -> (known after apply) - # (4 unchanged attributes hidden) - } + # tidbcloud_cluster.example_cluster will be updated in-place + ~ resource "tidbcloud_cluster" "example_cluster" { + ~ config = { + + paused = true + # (4 unchanged attributes hidden) + } + id = "1379661944630234067" + name = "firstCluster" + ~ status = "AVAILABLE" -> (known after apply) + # (4 unchanged attributes hidden) + } - Plan: 0 to add, 1 to change, 0 to destroy. + Plan: 0 to add, 1 to change, 0 to destroy. - Do you want to perform these actions? - Terraform will perform the actions described above. - Only 'yes' will be accepted to approve. + Do you want to perform these actions? + Terraform will perform the actions described above. + Only 'yes' will be accepted to approve. - Enter a value: yes + Enter a value: yes - tidbcloud_cluster.example_cluster: Modifying... [id=1379661944630234067] - tidbcloud_cluster.example_cluster: Modifications complete after 2s [id=1379661944630234067] + tidbcloud_cluster.example_cluster: Modifying... [id=1379661944630234067] + tidbcloud_cluster.example_cluster: Modifications complete after 2s [id=1379661944630234067] - Apply complete! Resources: 0 added, 1 changed, 0 destroyed. - ``` + Apply complete! Resources: 0 added, 1 changed, 0 destroyed. + ``` 3. 
Use the `terraform state show tidbcloud_cluster.${resource-name}` command to check the status: - ``` - $ terraform state show tidbcloud_cluster.example_cluster - - # tidbcloud_cluster.example_cluster: - resource "tidbcloud_cluster" "example_cluster" { - cloud_provider = "AWS" - cluster_type = "DEDICATED" - config = { - components = { - tidb = { - node_quantity = 2 - node_size = "8C16G" - } - tiflash = { - node_quantity = 2 - node_size = "8C64G" - storage_size_gib = 500 - } - tikv = { - node_quantity = 6 - node_size = "8C32G" - storage_size_gib = 500 - } - } - ip_access_list = [ - # (1 unchanged element hidden) - ] - paused = true - port = 4000 - root_password = "Your_root_password1." - } - id = "1379661944630234067" - name = "firstCluster" - project_id = "1372813089189561287" - region = "eu-central-1" - status = "PAUSED" - } - ``` + ``` + $ terraform state show tidbcloud_cluster.example_cluster + + # tidbcloud_cluster.example_cluster: + resource "tidbcloud_cluster" "example_cluster" { + cloud_provider = "AWS" + cluster_type = "DEDICATED" + config = { + components = { + tidb = { + node_quantity = 2 + node_size = "8C16G" + } + tiflash = { + node_quantity = 2 + node_size = "8C64G" + storage_size_gib = 500 + } + tikv = { + node_quantity = 6 + node_size = "8C32G" + storage_size_gib = 500 + } + } + ip_access_list = [ + # (1 unchanged element hidden) + ] + paused = true + port = 4000 + root_password = "Your_root_password1." + } + id = "1379661944630234067" + name = "firstCluster" + project_id = "1372813089189561287" + region = "eu-central-1" + status = "PAUSED" + } + ``` 4. When you need to resume the cluster, set `paused = false`: - ``` - config = { - paused = false - root_password = "Your_root_password1." - port = 4000 - ... - } - ``` + ``` + config = { + paused = false + root_password = "Your_root_password1." + port = 4000 + ... + } + ``` 5. Run the `terraform apply` command and type `yes` for confirmation. 
If you use the `terraform state show tidbcloud_cluster.${resource-name}` command to check the status, you will find it turns to `RESUMING`:

-    ```
-    # tidbcloud_cluster.example_cluster:
-    resource "tidbcloud_cluster" "example_cluster" {
-        cloud_provider = "AWS"
-        cluster_type = "DEDICATED"
-        config = {
-            components = {
-                tidb = {
-                    node_quantity = 2
-                    node_size = "8C16G"
-                }
-                tiflash = {
-                    node_quantity = 2
-                    node_size = "8C64G"
-                    storage_size_gib = 500
-                }
-                tikv = {
-                    node_quantity = 6
-                    node_size = "8C32G"
-                    storage_size_gib = 500
-                }
-            }
-            ip_access_list = [
-                # (1 unchanged element hidden)
-            ]
-            paused = false
-            port = 4000
-            root_password = "Your_root_password1."
-        }
-        id = "1379661944630234067"
-        name = "firstCluster"
-        project_id = "1372813089189561287"
-        region = "eu-central-1"
-        status = "RESUMING"
-    }
-    ```
+   ```
+   # tidbcloud_cluster.example_cluster:
+   resource "tidbcloud_cluster" "example_cluster" {
+       cloud_provider = "AWS"
+       cluster_type = "DEDICATED"
+       config = {
+           components = {
+               tidb = {
+                   node_quantity = 2
+                   node_size = "8C16G"
+               }
+               tiflash = {
+                   node_quantity = 2
+                   node_size = "8C64G"
+                   storage_size_gib = 500
+               }
+               tikv = {
+                   node_quantity = 6
+                   node_size = "8C32G"
+                   storage_size_gib = 500
+               }
+           }
+           ip_access_list = [
+               # (1 unchanged element hidden)
+           ]
+           paused = false
+           port = 4000
+           root_password = "Your_root_password1."
+       }
+       id = "1379661944630234067"
+       name = "firstCluster"
+       project_id = "1372813089189561287"
+       region = "eu-central-1"
+       status = "RESUMING"
+   }
+   ```

6. Wait for a moment, then use the `terraform refresh` command to update the state. The status will eventually change to `AVAILABLE`. 
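If you pause and resume regularly, the edit-and-apply cycle above can be captured once with a Terraform input variable instead of hand-editing `cluster.tf` each time. The following is only a sketch: the variable name `cluster_paused` is illustrative and not part of the provider, and every other argument is taken unchanged from the `cluster.tf` used earlier.

```
# Sketch: drive the paused flag from a variable.
# "cluster_paused" is an illustrative name, not a provider attribute.
variable "cluster_paused" {
  type    = bool
  default = false
}

resource "tidbcloud_cluster" "example_cluster" {
  project_id     = "1372813089189561287"
  name           = "firstCluster"
  cluster_type   = "DEDICATED"
  cloud_provider = "AWS"
  region         = "eu-central-1"
  config = {
    paused        = var.cluster_paused
    root_password = "Your_root_password1."
    port          = 4000
    # ... components and ip_access_list as in the original cluster.tf ...
  }
}
```

With this in place, `terraform apply -var="cluster_paused=true"` pauses the cluster and `terraform apply -var="cluster_paused=false"` resumes it, without editing the file in between.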
@@ -857,20 +857,20 @@ For example, you can import a cluster that is not created by Terraform or import

    ```
    terraform {
-      required_providers {
-        tidbcloud = {
-          source = "tidbcloud/tidbcloud"
-          version = "~> 0.1.0"
-        }
-      }
-      required_version = ">= 1.0.0"
-    }
+     required_providers {
+       tidbcloud = {
+         source = "tidbcloud/tidbcloud"
+         version = "~> 0.1.0"
+       }
+     }
+     required_version = ">= 1.0.0"
+   }

    resource "tidbcloud_cluster" "import_cluster" {}
    ```

2. Import the cluster by `terraform import tidbcloud_cluster.import_cluster projectId,clusterId`:

-    For example:
+   For example:

    ```
    $ terraform import tidbcloud_cluster.import_cluster 1372813089189561287,1379661944630264072
@@ -973,7 +973,7 @@ For example, you can import a cluster that is not created by Terraform or import

    Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
    ```

-Now you can use Terraform to manage the cluster. 
+Now you can use Terraform to manage the cluster.

## Delete a cluster

diff --git a/tiflash/create-tiflash-replicas.md b/tiflash/create-tiflash-replicas.md
index 26e706303228c..896adf6be4a27 100644
--- a/tiflash/create-tiflash-replicas.md
+++ b/tiflash/create-tiflash-replicas.md
@@ -127,13 +127,13 @@ Before TiFlash replicas are added, each TiKV instance performs a full table scan

1. Temporarily increase the snapshot write speed limit for each TiKV and TiFlash instance by using the [Dynamic Config SQL statement](https://docs.pingcap.com/tidb/stable/dynamic-config):

-    ```sql
-    -- The default value for both configurations are 100MiB, i.e. the maximum disk bandwidth used for writing snapshots is no more than 100MiB/s.
-    SET CONFIG tikv `server.snap-io-max-bytes-per-sec` = '300MiB';
-    SET CONFIG tiflash `raftstore-proxy.server.snap-max-write-bytes-per-sec` = '300MiB';
-    ```
+   ```sql
+   -- The default value for both configurations is 100MiB, i.e. the maximum disk bandwidth used for writing snapshots is no more than 100MiB/s. 
+ SET CONFIG tikv `server.snap-io-max-bytes-per-sec` = '300MiB'; + SET CONFIG tiflash `raftstore-proxy.server.snap-max-write-bytes-per-sec` = '300MiB'; + ``` - After executing these SQL statements, the configuration changes take effect immediately without restarting the cluster. However, since the replication speed is still restricted by the PD limit globally, you cannot observe the acceleration for now. + After executing these SQL statements, the configuration changes take effect immediately without restarting the cluster. However, since the replication speed is still restricted by the PD limit globally, you cannot observe the acceleration for now. 2. Use [PD Control](https://docs.pingcap.com/tidb/stable/pd-control) to progressively ease the new replica speed limit. @@ -159,18 +159,18 @@ Before TiFlash replicas are added, each TiKV instance performs a full table scan 3. After the TiFlash replication is complete, revert to the default configuration to reduce the impact on online services. 
-    Execute the following PD Control command to restore the default new replica speed limit:
+   Execute the following PD Control command to restore the default new replica speed limit:

-    ```shell
-    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 30 add-peer
-    ```
+   ```shell
+   tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 30 add-peer
+   ```

-    Execute the following SQL statements to restore the default snapshot write speed limit:
+   Execute the following SQL statements to restore the default snapshot write speed limit:

-    ```sql
-    SET CONFIG tikv `server.snap-io-max-bytes-per-sec` = '100MiB';
-    SET CONFIG tiflash `raftstore-proxy.server.snap-max-write-bytes-per-sec` = '100MiB';
-    ```
+   ```sql
+   SET CONFIG tikv `server.snap-io-max-bytes-per-sec` = '100MiB';
+   SET CONFIG tiflash `raftstore-proxy.server.snap-max-write-bytes-per-sec` = '100MiB';
+   ```

## Set available zones

diff --git a/tiflash/troubleshoot-tiflash.md b/tiflash/troubleshoot-tiflash.md
index c014f403d8329..d2c0b6c139e8c 100644
--- a/tiflash/troubleshoot-tiflash.md
+++ b/tiflash/troubleshoot-tiflash.md
@@ -14,21 +14,21 @@ The issue might occur due to different reasons. It is recommended that you troub

1. Check whether your system is RedHat Enterprise Linux 8.

-    RedHat Enterprise Linux 8 does not have the `libnsl.so` system library. You can manually install it via the following command:
+   RedHat Enterprise Linux 8 does not have the `libnsl.so` system library. You can manually install it via the following command:

-    {{< copyable "shell-regular" >}}
+   {{< copyable "shell-regular" >}}

-    ```shell
-    dnf install libnsl
-    ```
+   ```shell
+   dnf install libnsl
+   ```

2. Check your system's `ulimit` parameter setting.

-    {{< copyable "shell-regular" >}}
+   {{< copyable "shell-regular" >}}

-    ```shell
-    ulimit -n 1000000
-    ```
+   ```shell
+   ulimit -n 1000000
+   ```

3. 
Use the PD Control tool to check whether any TiFlash instance failed to go offline on the node (with the same IP address and port), and force such instances to go offline. For detailed steps, refer to [Scale in a TiFlash cluster](/scale-tidb-using-tiup.md#scale-in-a-tiflash-cluster).

diff --git a/tiup/tiup-bench.md b/tiup/tiup-bench.md
index 11850720ec83e..c69b483af73eb 100644
--- a/tiup/tiup-bench.md
+++ b/tiup/tiup-bench.md
@@ -229,6 +229,6 @@ You can write an arbitrary query in a SQL file, and then use it for the test by

2. Run the RawSQL test:

-    ```shell
-    tiup bench rawsql run --count 60 --query-files demo.sql
-    ```
+   ```shell
+   tiup bench rawsql run --count 60 --query-files demo.sql
+   ```