br: remove s3 region param (pingcap#10561)
WangLe1321 authored Jul 25, 2022
1 parent 55b2d6d commit 56a8f72
Showing 12 changed files with 27 additions and 35 deletions.
10 changes: 5 additions & 5 deletions backup-and-restore-using-dumpling-lightning.md
@@ -60,21 +60,21 @@ SELECT table_name,table_schema,SUM(data_length)/1024/1024 AS data_length,SUM(ind

## Back up full data using Dumpling

-1. Run the following command to export full data from TiDB to the Amazon S3 storage path `s3://my-bucket/sql-backup?region=us-west-2`:
+1. Run the following command to export full data from TiDB to the Amazon S3 storage path `s3://my-bucket/sql-backup`:

```shell
-tiup dumpling -h ${ip} -P 3306 -u root -t 16 -r 200000 -F 256MiB -B my_db1 -f 'my_db1.table[12]' -o 's3://my-bucket/sql-backup?region=us-west-2'
+tiup dumpling -h ${ip} -P 3306 -u root -t 16 -r 200000 -F 256MiB -B my_db1 -f 'my_db1.table[12]' -o 's3://my-bucket/sql-backup'
```

Dumpling exports data as SQL files by default. You can also specify a different export file format by setting `--filetype`.

For more Dumpling configuration options, see [Dumpling option list](/dumpling-overview.md#dumpling-主要选项表).

-2. After the export completes, you can view the exported backup files in the data storage directory `s3://my-bucket/sql-backup?region=us-west-2`.
+2. After the export completes, you can view the exported backup files in the data storage directory `s3://my-bucket/sql-backup`.

## Restore full data using TiDB Lightning

-1. Write the configuration file `tidb-lightning.toml` to restore the full data backed up by Dumpling from `s3://my-bucket/sql-backup?region=us-west-2` to the target TiDB cluster:
+1. Write the configuration file `tidb-lightning.toml` to restore the full data backed up by Dumpling from `s3://my-bucket/sql-backup` to the target TiDB cluster:

```toml
[lightning]
@@ -91,7 +91,7 @@ SELECT table_name,table_schema,SUM(data_length)/1024/1024 AS data_length,SUM(ind

[mydumper]
# The source data directory, that is, the path where Dumpling saved the data in the previous section.
-data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup?region=us-west-2'
+data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup'

[tidb]
# Information about the target cluster
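Since this commit drops the `region` query parameter from the documented S3 URLs, saved scripts that still embed `?region=...` can have the parameter removed. A minimal Python sketch of stripping it (the helper name `strip_region` is ours, not part of BR or Dumpling):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_region(url: str) -> str:
    """Remove a legacy `region` query parameter from a storage URL,
    leaving any other parameters (for example, access-key) intact."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "region"]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_region("s3://my-bucket/sql-backup?region=us-west-2"))
# s3://my-bucket/sql-backup
```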
9 changes: 3 additions & 6 deletions br/backup-and-restore-storages.md
@@ -30,7 +30,7 @@ Cloud storage such as S3, GCS, and Azblob sometimes requires extra connection configuration. You can

```bash
./dumpling -u root -h 127.0.0.1 -P 3306 -B mydb -F 256MiB \
-    -o 's3://my-bucket/sql-backup?region=us-west-2'
+    -o 's3://my-bucket/sql-backup'
```

* Import data from S3 using TiDB Lightning:

@@ -39,7 +39,7 @@ Cloud storage such as S3, GCS, and Azblob sometimes requires extra connection configuration. You can

```bash
./tidb-lightning --tidb-port=4000 --pd-urls=127.0.0.1:2379 --backend=local --sorted-kv-dir=/tmp/sorted-kvs \
-    -d 's3://my-bucket/sql-backup?region=us-west-2'
+    -d 's3://my-bucket/sql-backup'
```

* Import data from S3 using TiDB Lightning (with path-style request mode):
@@ -75,7 +75,6 @@ Cloud storage such as S3, GCS, and Azblob sometimes requires extra connection configuration. You can
|:----------|:---------|
| `access-key` | The access key |
| `secret-access-key` | The secret access key |
-| `region` | The service region for Amazon S3 (defaults to `us-east-1`) |
| `use-accelerate-endpoint` | Whether to use the accelerate endpoint on Amazon S3 (defaults to `false`) |
| `endpoint` | The URL of a custom endpoint for S3-compatible services (for example, `https://s3.example.com/`) |
| `force-path-style` | Use path-style access rather than virtual-hosted style (defaults to `true`) |
@@ -138,8 +137,7 @@ Cloud storage such as S3, GCS, and Azblob sometimes requires extra connection configuration. You can

```bash
./dumpling -u root -h 127.0.0.1 -P 3306 -B mydb -F 256MiB \
-    -o 's3://my-bucket/sql-backup' \
-    --s3.region 'us-west-2'
+    -o 's3://my-bucket/sql-backup'
```

If both URL parameters and command-line parameters are specified, the command-line parameters override the URL parameters.
@@ -148,7 +146,6 @@ Cloud storage such as S3, GCS, and Azblob sometimes requires extra connection configuration. You can

| Command-line parameter | Description |
|:----------|:------|
-| `--s3.region` | The S3 service region (defaults to `us-east-1`) |
| `--s3.endpoint` | The URL of a custom endpoint for S3-compatible services (for example, `https://s3.example.com/`) |
| `--s3.storage-class` | The storage class of uploaded objects (for example, `STANDARD` or `STANDARD_IA`) |
| `--s3.sse` | The server-side encryption algorithm used for uploads (can be empty, `AES256`, or `aws:kms`) |
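The precedence rule above — command-line parameters override URL parameters — can be sketched in a few lines of Python (illustrative only; `effective_s3_options` is not a real BR or Dumpling function):

```python
from urllib.parse import urlsplit, parse_qsl

def effective_s3_options(storage_url: str, cli_options: dict) -> dict:
    """Merge URL query parameters with --s3.* command-line values.
    CLI values win on conflict, mirroring the documented precedence."""
    url_options = dict(parse_qsl(urlsplit(storage_url).query))
    return {**url_options, **cli_options}

# The URL sets an endpoint; the command-line flag overrides it.
opts = effective_s3_options(
    "s3://my-bucket/sql-backup?endpoint=https://a.example.com/",
    {"endpoint": "https://b.example.com/"},
)
```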
7 changes: 2 additions & 5 deletions br/backup-storage-S3.md
@@ -33,15 +33,15 @@ TiDB's Backup & Restore (BR) feature supports using Amazon S3 or S3-compatible
{{< copyable "shell-regular" >}}

```shell
-br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}" --s3.region "${region}"
+br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}"
```

- Set the `access-key` and `secret-access-key` for accessing S3 through `br` command-line parameters, and set `--send-credentials-to-tikv=true` to pass the access key from BR to each TiKV node.

{{< copyable "shell-regular" >}}

```shell
-br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" --s3.region "${region}" --send-credentials-to-tikv=true
+br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" --send-credentials-to-tikv=true
```

In general, to prevent secrets such as `access-key` from being leaked through command-line records, it is recommended to associate an IAM role with the EC2 instances instead.
@@ -54,15 +54,13 @@ TiDB's Backup & Restore (BR) feature supports using Amazon S3 or S3-compatible
br backup full \
    --pd "${PDIP}:2379" \
    --storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" \
-    --s3.region "${region}" \
    --send-credentials-to-tikv=true \
    --ratelimit 128 \
    --log-file backuptable.log
```

In the preceding command:

-- `--s3.region`: the region where the S3 storage is located.
- `--send-credentials-to-tikv`: passes the S3 access credentials to the TiKV nodes.

## Restore cluster data from S3
@@ -73,7 +71,6 @@ br backup full \
br restore full \
    --pd "${PDIP}:2379" \
    --storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" \
-    --s3.region "${region}" \
    --ratelimit 128 \
    --send-credentials-to-tikv=true \
    --log-file restorefull.log
4 changes: 1 addition & 3 deletions dumpling-overview.md
@@ -176,12 +176,10 @@ export AWS_SECRET_ACCESS_KEY=${SecretKey}

Dumpling also supports reading credentials from `~/.aws/credentials`. For more Dumpling storage configuration, see [External storage](/br/backup-and-restore-storages.md).

-When backing up with Dumpling, explicitly specify the `--s3.region` parameter, which is the region of the Amazon S3 storage, for example, `ap-northeast-1`:

{{< copyable "shell-regular" >}}

```shell
-./dumpling -u root -P 4000 -h 127.0.0.1 -r 200000 -o "s3://${Bucket}/${Folder}" --s3.region "${region}"
+./dumpling -u root -P 4000 -h 127.0.0.1 -r 200000 -o "s3://${Bucket}/${Folder}"
```

### Filter the exported data
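The `~/.aws/credentials` file that Dumpling reads is a standard INI file; a minimal sketch of its layout and how such a file parses (the key values below are placeholders, not real credentials):

```python
import configparser

# Illustrative contents of ~/.aws/credentials (placeholder values).
sample = """\
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = SECRETEXAMPLE
"""

config = configparser.ConfigParser()
config.read_string(sample)
access_key = config["default"]["aws_access_key_id"]
secret_key = config["default"]["aws_secret_access_key"]
```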
6 changes: 3 additions & 3 deletions encryption-at-rest.md
@@ -175,11 +175,11 @@ region = "us-west-2"
To use a custom AWS KMS CMK created and owned by the user, additionally pass the `--s3.sse-kms-key-id` parameter. In this case, the BR process and all TiKV nodes in the cluster need access to that KMS CMK (for example, through AWS IAM), and the KMS CMK must be in the same AWS region as the S3 bucket that stores the backup. It is recommended to grant the BR process and the TiKV nodes access to the KMS CMK through AWS IAM. See [IAM](https://docs.aws.amazon.com/zh_cn/IAM/latest/UserGuide/introduction.html) in the AWS documentation. For example:

```
-./br backup full --pd <pd-address> --storage "s3://<bucket>/<prefix>" --s3.region <region> --s3.sse aws:kms --s3.sse-kms-key-id 0987dcba-09fe-87dc-65ba-ab0987654321
+./br backup full --pd <pd-address> --storage "s3://<bucket>/<prefix>" --s3.sse aws:kms --s3.sse-kms-key-id 0987dcba-09fe-87dc-65ba-ab0987654321
```

When restoring a backup, do not (and cannot) specify the `--s3.sse` or `--s3.sse-kms-key-id` parameter; S3 decrypts automatically. The BR process and the TiKV nodes in the cluster that restore the backup data also need access to the KMS CMK; otherwise, the restore fails. For example:

```
-./br restore full --pd <pd-address> --storage "s3://<bucket>/<prefix> --s3.region <region>"
+./br restore full --pd <pd-address> --storage "s3://<bucket>/<prefix>"
```
6 changes: 3 additions & 3 deletions migrate-aurora-to-tidb.md
@@ -56,7 +56,7 @@ aliases: ['/zh/tidb/dev/migrate-from-aurora-using-lightning/','/docs-cn/dev/migr
{{< copyable "shell-regular" >}}

```shell
-tiup dumpling --host ${host} --port 3306 --user root --password ${password} --filter 'my_db1.table[12]' --no-data --output 's3://my-bucket/schema-backup?region=us-west-2' --filter "mydb.*"
+tiup dumpling --host ${host} --port 3306 --user root --password ${password} --filter 'my_db1.table[12]' --no-data --output 's3://my-bucket/schema-backup' --filter "mydb.*"
```

The parameters used in the command are described below. For more information, see [Dumpling overview](/dumpling-overview.md).
@@ -109,7 +109,7 @@ sorted-kv-dir = "${path}"
[mydumper]
# The address of the snapshot files
-data-source-dir = "${s3_path}" # e.g. s3://my-bucket/sql-backup?region=us-west-2
+data-source-dir = "${s3_path}" # e.g. s3://my-bucket/sql-backup

[[mydumper.files]]
# The expression needed to parse parquet files
@@ -128,7 +128,7 @@ type = '$3'
{{< copyable "shell-regular" >}}

```shell
-tiup tidb-lightning -config tidb-lightning.toml -d 's3://my-bucket/schema-backup?region=us-west-2'
+tiup tidb-lightning -config tidb-lightning.toml -d 's3://my-bucket/schema-backup'
```

2. Run `tidb-lightning`. If you start the program directly in the command line, it might exit because of the `SIGHUP` signal. It is recommended to use tools such as `nohup` or `screen`, for example:
2 changes: 1 addition & 1 deletion migrate-from-csv-files-to-tidb.md
@@ -55,7 +55,7 @@ sorted-kv-dir = "/mnt/ssd/sorted-kv-dir"

[mydumper]
# The source data directory.
-data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup?region=us-west-2'
+data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup'

# Define the CSV format
[mydumper.csv]
4 changes: 2 additions & 2 deletions migrate-from-sql-files-to-tidb.md
@@ -15,7 +15,7 @@ aliases: ['/docs-cn/dev/migrate-from-mysql-mydumper-files/','/zh/tidb/dev/migrat

## Step 1: Prepare the SQL files

-Put all the SQL files in the same directory, such as `/data/my_datasource/` or `s3://my-bucket/sql-backup?region=us-west-2`. Lightning recursively finds all `.sql` files in this directory and its subdirectories.
+Put all the SQL files in the same directory, such as `/data/my_datasource/` or `s3://my-bucket/sql-backup`. Lightning recursively finds all `.sql` files in this directory and its subdirectories.

## Step 2: Define the target table schema

@@ -53,7 +53,7 @@ sorted-kv-dir = "${sorted-kv-dir}"

[mydumper]
# The source data directory
-data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup?region=us-west-2'
+data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup'

[tidb]
# Information about the target cluster
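Lightning's recursive lookup of `.sql` files under a local `data-source-dir` (Step 1 above) can be mimicked for a quick sanity check; the directory layout below is made up for illustration, and `find_sql_files` is our sketch, not a Lightning API:

```python
import tempfile
from pathlib import Path

def find_sql_files(root: str) -> list:
    """Recursively collect .sql files under root, similar to how
    Lightning scans a local data-source-dir (sketch only)."""
    return sorted(str(p) for p in Path(root).rglob("*.sql"))

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "sub").mkdir()
    (Path(d) / "my_db1.table1.sql").write_text("-- schema")
    (Path(d) / "sub" / "my_db1.table2.sql").write_text("-- data")
    (Path(d) / "notes.txt").write_text("not SQL")
    files = find_sql_files(d)  # picks up both .sql files, skips notes.txt
```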
4 changes: 2 additions & 2 deletions migrate-large-mysql-to-tidb.md
@@ -51,7 +51,7 @@ SELECT table_name,table_schema,SUM(data_length)/1024/1024 AS data_length,SUM(ind
1. Run the following command to export full data from MySQL:

```shell
-tiup dumpling -h ${ip} -P 3306 -u root -t 16 -r 200000 -F 256MiB -B my_db1 -f 'my_db1.table[12]' -o 's3://my-bucket/sql-backup?region=us-west-2'
+tiup dumpling -h ${ip} -P 3306 -u root -t 16 -r 200000 -F 256MiB -B my_db1 -f 'my_db1.table[12]' -o 's3://my-bucket/sql-backup'
```

Dumpling exports data as SQL files by default. You can also specify a different export file format by setting `--filetype`.
@@ -101,7 +101,7 @@ SELECT table_name,table_schema,SUM(data_length)/1024/1024 AS data_length,SUM(ind
[mydumper]
# The source data directory, that is, the path where Dumpling saved the data in step 1.
-data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup?region=us-west-2'
+data-source-dir = "${data-path}" # a local or S3 path, for example: 's3://my-bucket/sql-backup'

[tidb]
# Information about the target cluster
4 changes: 2 additions & 2 deletions sql-statements/sql-statement-backup.md
@@ -103,7 +103,7 @@ BR supports backing up data to Amazon S3 or Google Cloud Storage (GCS):
{{< copyable "sql" >}}

```sql
-BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/?region=us-west-2&access-key={YOUR_ACCESS_KEY}&secret-access-key={YOUR_SECRET_KEY}';
+BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/?access-key={YOUR_ACCESS_KEY}&secret-access-key={YOUR_SECRET_KEY}';
```

For detailed URL syntax, see [External storage](/br/backup-and-restore-storages.md).

@@ -113,7 +113,7 @@ BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/?region=us-west-2&
{{< copyable "sql" >}}

```sql
-BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/?region=us-west-2'
+BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/'
SEND_CREDENTIALS_TO_TIKV = FALSE;
```

4 changes: 2 additions & 2 deletions sql-statements/sql-statement-restore.md
@@ -98,7 +98,7 @@ BR supports restoring data from Amazon S3 or Google Cloud Storage (GCS):
{{< copyable "sql" >}}

```sql
-RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/?region=us-west-2';
+RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/';
```

For detailed URL syntax, see [External storage](/br/backup-and-restore-storages.md).

@@ -108,7 +108,7 @@ RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/?region=us-west-2';
{{< copyable "sql" >}}

```sql
-RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/?region=us-west-2'
+RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/'
SEND_CREDENTIALS_TO_TIKV = FALSE;
```

2 changes: 1 addition & 1 deletion sql-statements/sql-statement-show-backups.md
@@ -28,7 +28,7 @@ ShowLikeOrWhere ::=
{{< copyable "sql" >}}

```sql
-BACKUP DATABASE `test` TO 's3://example-bucket/backup-01/?region=us-west-1';
+BACKUP DATABASE `test` TO 's3://example-bucket/backup-01/';
```

Before the backup completes, execute `SHOW BACKUPS` in a new connection:
