[Doc] Add ports to enable for stream load related docs (backport #44215) (#44256)

Co-authored-by: amber-create <48005258@qq.com>
mergify[bot] and amber-create authored Apr 17, 2024
1 parent 7b4a448 commit 1087386
Showing 4 changed files with 30 additions and 6 deletions.
4 changes: 4 additions & 0 deletions docs/en/loading/StreamLoad.md
@@ -30,6 +30,10 @@ Stream Load and Broker Load both support data transformation at data loading and

<InsertPrivNote />

#### Check network configuration

Make sure that the machine on which the data you want to load resides can access the FE and BE nodes of the StarRocks cluster via the [`http_port`](../administration/management/FE_configuration.md#http_port) (default: `8030`) and [`be_http_port`](../administration/management/BE_configuration.md#be_http_port) (default: `8040`), respectively.
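
A quick way to verify this before loading is to probe both ports from the machine that holds the data. This is a minimal sketch; `<fe_host>` and `<be_host>` are placeholders for the actual addresses of your FE and BE nodes:

```bash
# Check that the FE http_port (default 8030) is reachable from this machine.
nc -zv <fe_host> 8030

# Check that the be_http_port (default 8040) on each BE node is reachable.
nc -zv <be_host> 8040
```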

## Loading from a local file system via Stream Load

Stream Load is an HTTP PUT-based synchronous loading method. After you submit a load job, StarRocks synchronously runs the job, and returns the result of the job after the job finishes. You can determine whether the job is successful based on the job result.
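
For reference, a minimal Stream Load request issued with curl might look like the sketch below. The credentials, database, table, and file names are placeholders, and the FE address assumes the default `http_port` of `8030`:

```bash
# Submit a synchronous Stream Load job over HTTP PUT; the HTTP response body
# returned by StarRocks reports whether the load job succeeded.
curl --location-trusted -u <username>:<password> \
    -H "label:example_stream_load_1" \
    -H "Expect:100-continue" \
    -H "column_separator:," \
    -T example_data.csv \
    -XPUT http://<fe_host>:8030/api/<database_name>/<table_name>/_stream_load
```
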
12 changes: 10 additions & 2 deletions docs/en/loading/Stream_Load_transaction_interface.md
@@ -11,8 +11,6 @@ From v2.4 onwards, StarRocks provides a Stream Load transaction interface to imp

This topic describes the Stream Load transaction interface and how to load data into StarRocks by using this interface.

<InsertPrivNote />

## Description

The Stream Load transaction interface supports using an HTTP protocol-compatible tool or language to call API operations. This topic uses curl as an example to explain how to use this interface. This interface provides various features, such as transaction management, data write, transaction pre-commit, transaction deduplication, and transaction timeout management.
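
As a rough illustration of how these operations chain together, the sketch below drives one transaction through begin, data write, pre-commit, and commit with curl. Only `/api/transaction/begin` and `/api/transaction/load` are named elsewhere in this topic; the `prepare` and `commit` paths, as well as the host, credentials, and object names, are assumptions used for illustration:

```bash
# Start a transaction identified by a label (all names here are hypothetical).
curl --location-trusted -u <username>:<password> \
    -H "label:txn_example_1" -H "db:<database_name>" -H "table:<table_name>" \
    -XPOST http://<fe_host>:8030/api/transaction/begin

# Write data into the open transaction; this call can be repeated before commit.
curl --location-trusted -u <username>:<password> \
    -H "label:txn_example_1" -H "db:<database_name>" -H "table:<table_name>" \
    -T example_data.csv \
    -XPUT http://<fe_host>:8030/api/transaction/load

# Pre-commit the transaction, then commit it to make the data visible.
curl --location-trusted -u <username>:<password> \
    -H "label:txn_example_1" -H "db:<database_name>" -H "table:<table_name>" \
    -XPOST http://<fe_host>:8030/api/transaction/prepare

curl --location-trusted -u <username>:<password> \
    -H "label:txn_example_1" -H "db:<database_name>" -H "table:<table_name>" \
    -XPOST http://<fe_host>:8030/api/transaction/commit
```
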
@@ -86,6 +84,16 @@ The Stream Load transaction interface has the following limits:
- If you use the label of a previous transaction to call the `/api/transaction/begin` operation to start a new transaction, the previous transaction will fail and be rolled back.
- The default column separator and row delimiter that StarRocks supports for CSV-formatted data are `\t` and `\n`. If your data file does not use the default column separator or row delimiter, you must use `"column_separator: <column_separator>"` or `"row_delimiter: <row_delimiter>"` to specify the column separator or row delimiter that is actually used in your data file when calling the `/api/transaction/load` operation.
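
For example, loading a CSV file that uses a comma as the column separator could pass the separator as a header on the `/api/transaction/load` call, roughly as sketched below (the host, credentials, and names are placeholders):

```bash
# Override the default column separator for this load call.
curl --location-trusted -u <username>:<password> \
    -H "label:txn_example_1" -H "db:<database_name>" -H "table:<table_name>" \
    -H "column_separator: ," \
    -T example_data.csv \
    -XPUT http://<fe_host>:8030/api/transaction/load
```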

## Before you begin

### Check privileges

<InsertPrivNote />

#### Check network configuration

Make sure that the machine on which the data you want to load resides can access the FE and BE nodes of the StarRocks cluster via the [`http_port`](../administration/management/FE_configuration.md#http_port) (default: `8030`) and [`be_http_port`](../administration/management/BE_configuration.md#be_http_port) (default: `8040`), respectively.

## Basic operations

### Prepare sample data
4 changes: 4 additions & 0 deletions docs/zh/loading/StreamLoad.md
@@ -30,6 +30,10 @@ Stream Load and Broker Load both support data transformation during loading, as well as

<InsertPrivNote />

### Check network configuration

Make sure that the machine on which the data you want to load resides can access the [`http_port`](../administration/management/FE_configuration.md#http_port) (default: `8030`) of the FE nodes and the [`be_http_port`](../administration/management/BE_configuration.md#be_http_port) (default: `8040`) of the BE nodes in the StarRocks cluster.

## Loading from a local file system via Stream Load

Stream Load is an HTTP PUT-based synchronous loading method. After you submit a load job, StarRocks synchronously runs the job and returns the job result after the job finishes. You can determine whether the job is successful based on the returned result.
16 changes: 12 additions & 4 deletions docs/zh/loading/Stream_Load_transaction_interface.md
@@ -5,14 +5,12 @@ keywords: ['Stream Load']

# Load data using the Stream Load transaction interface

import InsertPrivNote from '../assets/commonMarkdown/insertPrivNote.md'

To implement two-phase commit across systems such as Apache Flink® and Apache Kafka®, and to improve performance in highly concurrent Stream Load scenarios, StarRocks provides a Stream Load transaction interface from v2.4 onwards.

This topic describes the Stream Load transaction interface and how to load data into StarRocks by using this interface.

> **NOTE**
>
> Loading data requires the INSERT privilege on the target table. If your user account does not have the INSERT privilege, follow the instructions in [GRANT](../sql-reference/sql-statements/account-management/GRANT.md) to grant the privilege to your account.
## Description

The Stream Load transaction interface supports using an HTTP protocol-compatible tool or language to call API operations. This topic uses curl as an example to explain how to use this interface. The interface provides features such as transaction management, data write, transaction pre-commit, transaction deduplication, and transaction timeout management.
@@ -86,6 +84,16 @@ The Stream Load transaction interface has the following advantages:
- If you call the `/api/transaction/begin` operation again with the same label, the previous transaction started with that label will fail and be rolled back.
- The default column separator and row delimiter that StarRocks supports for CSV-formatted data are `\t` and `\n`. If the column separator or row delimiter in your data file is not `\t` or `\n`, you must use `"column_separator: <column_separator>"` or `"row_delimiter: <row_delimiter>"` to specify the column separator or row delimiter actually used in your data file when calling the `/api/transaction/load` operation.

## Before you begin

### Check privileges

<InsertPrivNote />

### Check network configuration

Make sure that the machine on which the data you want to load resides can access the [`http_port`](../administration/management/FE_configuration.md#http_port) (default: `8030`) of the FE nodes and the [`be_http_port`](../administration/management/BE_configuration.md#be_http_port) (default: `8040`) of the BE nodes in the StarRocks cluster.

## Basic operations

### Prepare sample data
