Commit dd3c025

cleanup README.md
1 parent cca550b

1 file changed: +122 −120 lines


README.md

Lines changed: 122 additions & 120 deletions
```diff
@@ -1,120 +1,122 @@
 # ClickHouse Dump
 
-A powerful tool for dumping and restoring ClickHouse databases to/from various storage backends.
+A simple tool for dumping and restoring ClickHouse databases as SQL to/from various storage backends.
+The main motivation is to provide a simple way to export/import small (under 10 GB) datasets between different ClickHouse servers, without direct access to the servers' disks.
+Use https://github.com/Altinity/clickhouse-backup for production backups.
 
 ## Features
```

Only the introduction changed; the rest of the file is identical. The updated README.md in full:
# ClickHouse Dump

A simple tool for dumping and restoring ClickHouse databases as SQL to/from various storage backends.
The main motivation is to provide a simple way to export/import small (under 10 GB) datasets between different ClickHouse servers, without direct access to the servers' disks.
Use https://github.com/Altinity/clickhouse-backup for production backups.

## Features

- Dump and restore ClickHouse databases and tables
- Filter databases and tables using regular expressions
- Support for multiple storage backends:
  - Local file system
  - Amazon S3 (and compatible services like MinIO)
  - Google Cloud Storage
  - Azure Blob Storage
  - SFTP
  - FTP
- Compression support (gzip, zstd)
- Configurable batch sizes for optimal performance

## Installation

```bash
go install github.com/Slach/clickhouse-dump@latest
```

Or download the latest binary from the [releases page](https://github.com/Slach/clickhouse-dump/releases).

## Usage

### Basic Commands

```bash
# Dump a database
clickhouse-dump dump BACKUP_NAME

# Restore from a backup
clickhouse-dump restore BACKUP_NAME
```

### Connection Parameters

| Flag | Environment Variable | Default | Description |
|------|---------------------|---------|-------------|
| `--host`, `-H` | `CLICKHOUSE_HOST` | `localhost` | ClickHouse host |
| `--port`, `-p` | `CLICKHOUSE_PORT` | `8123` | ClickHouse HTTP port |
| `--user`, `-u` | `CLICKHOUSE_USER` | `default` | ClickHouse user |
| `--password`, `-P` | `CLICKHOUSE_PASSWORD` | | ClickHouse password |
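
Each connection flag can also be supplied through its environment variable from the table above; a minimal sketch (host and password values are hypothetical):

```bash
export CLICKHOUSE_HOST=ch1.example.com   # hypothetical host
export CLICKHOUSE_PORT=8123
export CLICKHOUSE_USER=default
export CLICKHOUSE_PASSWORD=secret        # hypothetical password
clickhouse-dump --storage-type file --storage-path /backups dump my_backup
```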

### Filtering Options

| Flag | Environment Variable | Default | Description |
|------|---------------------|---------|-------------|
| `--databases`, `-d` | `CLICKHOUSE_DATABASES` | `.*` | Regexp pattern for databases to include |
| `--exclude-databases` | `EXCLUDE_DATABASES` | `^system$\|^INFORMATION_SCHEMA$\|^information_schema$` | Regexp pattern for databases to exclude |
| `--tables`, `-t` | `TABLES` | `.*` | Regexp pattern for tables to include |
| `--exclude-tables` | `EXCLUDE_TABLES` | | Regexp pattern for tables to exclude |
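
Include and exclude patterns can be combined; in this sketch the database and table names are hypothetical:

```bash
# Dump everything except the "staging" database, skipping tables ending in "_tmp"
clickhouse-dump --exclude-databases "^staging$" --exclude-tables "_tmp$" \
  --storage-type file --storage-path /backups dump filtered_backup
```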

### Dump Options

| Flag | Environment Variable | Default | Description |
|------|---------------------|---------|-------------|
| `--batch-size` | `BATCH_SIZE` | `100000` | Batch size for SQL `INSERT` statements |
| `--compress-format` | `COMPRESS_FORMAT` | `gzip` | Compression format: gzip, zstd, or none |
| `--compress-level` | `COMPRESS_LEVEL` | `6` | Compression level (gzip: 1-9, zstd: 1-22) |
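
A smaller batch size keeps individual `INSERT` statements small at the cost of emitting more of them; the values below are illustrative:

```bash
# Emit INSERT statements of at most 10,000 rows each, with fast zstd compression
clickhouse-dump --batch-size 10000 --compress-format zstd --compress-level 3 \
  --storage-type file --storage-path /backups dump small_batches
```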

### Storage Options

| Flag | Environment Variable | Required For | Description |
|------|---------------------|--------------|-------------|
| `--storage-type` | `STORAGE_TYPE` | All | Storage backend type: file, s3, gcs, azblob, sftp, ftp |
| `--storage-path` | `STORAGE_PATH` | file | Base path in storage for dump/restore files |
| `--storage-bucket` | `STORAGE_BUCKET` | s3, gcs | S3/GCS bucket name |
| `--storage-region` | `STORAGE_REGION` | s3 | S3 region |
| `--storage-account` | `AWS_ACCESS_KEY_ID`, `STORAGE_ACCOUNT` | s3, azblob | Storage account name/access key |
| `--storage-key` | `AWS_SECRET_ACCESS_KEY`, `STORAGE_KEY` | s3, gcs, azblob | Storage secret key |
| `--storage-endpoint` | `STORAGE_ENDPOINT` | s3, gcs, azblob | Custom endpoint URL |
| `--storage-container` | `STORAGE_CONTAINER` | azblob | Azure Blob Storage container name |
| `--storage-host` | `STORAGE_HOST` | sftp, ftp | SFTP/FTP host (and optional port) |
| `--storage-user` | `STORAGE_USER` | sftp, ftp | SFTP/FTP user |
| `--storage-password` | `STORAGE_PASSWORD` | sftp, ftp | SFTP/FTP password |
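
As a sketch, an SFTP dump built from the flags above; the host, credentials, and remote path are placeholders, and `--storage-path` is assumed to set the remote base directory:

```bash
clickhouse-dump --storage-type sftp --storage-host sftp.example.com:22 \
  --storage-user backup --storage-password secret \
  --storage-path /srv/backups dump my_backup
```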

### Other Options

| Flag | Environment Variable | Default | Description |
|------|---------------------|---------|-------------|
| `--debug` | `DEBUG` | `false` | Enable debug logging |
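
For example, to troubleshoot a dump with verbose logging:

```bash
clickhouse-dump --debug --storage-type file --storage-path /backups dump my_backup
```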
## Examples

### Dump to Local File System

```bash
clickhouse-dump --storage-type file --storage-path /backups dump my_backup
```
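
Restoring from the same local path mirrors the dump invocation:

```bash
clickhouse-dump --storage-type file --storage-path /backups restore my_backup
```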

### Dump to S3

```bash
clickhouse-dump --storage-type s3 --storage-bucket my-bucket --storage-region us-east-1 \
  --storage-account AKIAIOSFODNN7EXAMPLE --storage-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
  dump my_backup
```
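
S3-compatible services such as MinIO use the same flags with a custom endpoint; the endpoint URL and credentials below are placeholders:

```bash
clickhouse-dump --storage-type s3 --storage-bucket my-bucket \
  --storage-endpoint http://minio.local:9000 \
  --storage-account minioadmin --storage-key minioadmin \
  dump my_backup
```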

### Restore from S3

```bash
clickhouse-dump --storage-type s3 --storage-bucket my-bucket --storage-region us-east-1 \
  --storage-account AKIAIOSFODNN7EXAMPLE --storage-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
  restore my_backup
```

### Dump Specific Databases with Compression

```bash
clickhouse-dump --databases "^(db1|db2)$" --compress-format zstd --compress-level 19 \
  --storage-type file --storage-path /backups dump my_backup
```

## License

MIT
