[Doc] Add doc for the incoming 1.2.10 (#390)
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
banmoy authored Oct 25, 2024
1 parent 353ab2d commit 831add1
Showing 2 changed files with 8 additions and 0 deletions.
3 changes: 3 additions & 0 deletions docs/content/connector-sink.md
@@ -12,6 +12,7 @@ The Flink connector supports DataStream API, Table API & SQL, and Python API. It

| Connector | Flink | StarRocks | Java | Scala |
|-----------|--------------------------|---------------| ---- |-----------|
| 1.2.10 | 1.15,1.16,1.17,1.18,1.19 | 2.1 and later| 8 | 2.11,2.12 |
| 1.2.9 | 1.15,1.16,1.17,1.18 | 2.1 and later| 8 | 2.11,2.12 |
| 1.2.8 | 1.13,1.14,1.15,1.16,1.17 | 2.1 and later| 8 | 2.11,2.12 |
| 1.2.7 | 1.11,1.12,1.13,1.14,1.15 | 2.1 and later| 8 | 2.11,2.12 |
@@ -102,6 +103,7 @@ In your Maven project's `pom.xml` file, add the Flink connector as a dependency
| sink.buffer-flush.interval-ms | No | 300000 | The interval at which data is flushed. This parameter is available only when `sink.semantic` is `at-least-once`. Valid values: 1000 to 3600000. Unit: ms. |
| sink.max-retries | No | 3 | The number of times that the system retries the Stream Load job. This parameter is available only when you set `sink.version` to `V1`. Valid values: 0 to 10. |
| sink.connect.timeout-ms | No | 30000 | The timeout for establishing an HTTP connection. Valid values: 100 to 60000. Unit: ms. In versions earlier than 1.2.9, the default value is 1000. |
| sink.socket.timeout-ms | No | -1 | Supported since 1.2.10. The duration for which the HTTP client waits for data. Unit: ms. The default value `-1` means there is no timeout. |
| sink.wait-for-continue.timeout-ms | No | 10000 | Supported since 1.2.7. The timeout for waiting for an HTTP 100-continue response from the FE. Valid values: `3000` to `600000`. Unit: ms. |
| sink.ignore.update-before | No | true | Supported since version 1.2.8. Whether to ignore `UPDATE_BEFORE` records from Flink when loading data to Primary Key tables. If this parameter is set to `false`, each such record is treated as a delete operation on the StarRocks table. |
| sink.parallelism | No | NONE | The parallelism of loading. Only available for Flink SQL. If this parameter is not specified, Flink planner decides the parallelism. **In the scenario of multi-parallelism, users need to guarantee data is written in the correct order.** |
@@ -111,6 +113,7 @@ In your Maven project's `pom.xml` file, add the Flink connector as a dependency
| sink.properties.row_delimiter | No | \n | The row delimiter for CSV-formatted data. |
| sink.properties.max_filter_ratio | No | 0 | The maximum error tolerance of the Stream Load. It's the maximum percentage of data records that can be filtered out due to inadequate data quality. Valid values: `0` to `1`. Default value: `0`. See [Stream Load](https://docs.starrocks.io/en-us/latest/sql-reference/sql-statements/data-manipulation/STREAM%20LOAD) for details. |
| sink.properties.strict_mode | No | false | Specifies whether to enable the strict mode for Stream Load. It affects the loading behavior when there are unqualified rows, such as inconsistent column values. Valid values: `true` and `false`. Default value: `false`. See [Stream Load](https://docs.starrocks.io/en-us/latest/sql-reference/sql-statements/data-manipulation/STREAM%20LOAD) for details. |
| sink.properties.compression | No | NONE | Supported since 1.2.10. The compression algorithm used for Stream Load. Valid values: `lz4_frame`. Compression is currently supported only for the JSON format and requires StarRocks v3.2.7 or later. |
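
The new 1.2.10 sink options can be set in the table DDL like any other connector property. A minimal sketch in Flink SQL — the FE hosts, database, table, credentials, and column names are placeholders, not values from this commit:

```sql
CREATE TABLE sink_example (
    id INT,
    name STRING
) WITH (
    'connector' = 'starrocks',
    'jdbc-url' = 'jdbc:mysql://fe_host:9030',
    'load-url' = 'fe_host:8030',
    'database-name' = 'example_db',
    'table-name' = 'example_tbl',
    'username' = 'root',
    'password' = '',
    -- new in 1.2.10: compress JSON-formatted Stream Load payloads
    'sink.properties.format' = 'json',
    'sink.properties.compression' = 'lz4_frame',
    -- new in 1.2.10: bound how long the HTTP client waits for data
    'sink.socket.timeout-ms' = '60000'
);
```

Note that `sink.properties.compression` takes effect only with the JSON format and a StarRocks cluster of v3.2.7 or later.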

## Data type mapping between Flink and StarRocks

5 changes: 5 additions & 0 deletions docs/content/connector-source.md
@@ -22,6 +22,7 @@ Unlike the JDBC connector provided by Flink, the Flink connector of StarRocks su

| Connector | Flink | StarRocks | Java | Scala |
|-----------|--------------------------|---------------| ---- |-----------|
| 1.2.10 | 1.15,1.16,1.17,1.18,1.19 | 2.1 and later| 8 | 2.11,2.12 |
| 1.2.9 | 1.15,1.16,1.17,1.18 | 2.1 and later| 8 | 2.11,2.12 |
| 1.2.8 | 1.13,1.14,1.15,1.16,1.17 | 2.1 and later| 8 | 2.11,2.12 |
| 1.2.7 | 1.11,1.12,1.13,1.14,1.15 | 2.1 and later| 8 | 2.11,2.12 |
@@ -141,6 +142,10 @@ The following data type mapping is valid only for Flink reading data from StarRo
| DECIMAL128 | DECIMAL |
| CHAR | CHAR |
| VARCHAR | STRING |
| JSON | STRING <br> **NOTE:** <br> **Supported since version 1.2.10** |
| ARRAY | ARRAY <br> **NOTE:** <br> **Supported since version 1.2.10, and StarRocks v3.1.12/v3.2.5 or later is required.** |
| STRUCT | ROW <br> **NOTE:** <br> **Supported since version 1.2.10, and StarRocks v3.1.12/v3.2.5 or later is required.** |
| MAP | MAP <br> **NOTE:** <br> **Supported since version 1.2.10, and StarRocks v3.1.12/v3.2.5 or later is required.** |
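
The complex-type mappings added in 1.2.10 (ARRAY→ARRAY, STRUCT→ROW, MAP→MAP) can be declared directly in the Flink source table schema. A minimal sketch — hosts, database, table, credentials, and column definitions are illustrative placeholders:

```sql
CREATE TABLE source_example (
    id INT,
    -- StarRocks ARRAY maps to Flink ARRAY
    tags ARRAY<STRING>,
    -- StarRocks MAP maps to Flink MAP
    attrs MAP<STRING, STRING>,
    -- StarRocks STRUCT maps to Flink ROW
    detail ROW<code INT, msg STRING>
) WITH (
    'connector' = 'starrocks',
    'scan-url' = 'fe_host:8030',
    'jdbc-url' = 'jdbc:mysql://fe_host:9030',
    'database-name' = 'example_db',
    'table-name' = 'example_tbl',
    'username' = 'root',
    'password' = ''
);
```

Reading these types requires connector 1.2.10 together with StarRocks v3.1.12/v3.2.5 or later, per the table above.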

## Examples

