
Commit 8fe8bbc

Merge branch 'risingwavelabs:main' into main
2 parents 9522546 + 461f529 commit 8fe8bbc

File tree

16 files changed: +104 −31 lines changed

.github/ISSUE_TEMPLATE/bug_report.yml

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ name: Bug Report
 description: Noticed any typos, misused terms, inconsistencies, or ambiguities in the docs? Let us know!
 title: "Bug: "
 labels: ["bug"]
-assignees: [CharlieSYH, hengm3467]
+assignees: [WanYixian, ShanlanLi]
 body:
   - type: markdown
     attributes:

docs/guides/create-sink-kafka.md

Lines changed: 1 addition & 0 deletions
@@ -72,6 +72,7 @@ When creating a Kafka sink in RisingWave, you can specify the following Kafka-sp
 |queue.buffering.max.kbytes |properties.queue.buffering.max.kbytes| int|
 |queue.buffering.max.messages |properties.queue.buffering.max.messages |int|
 |queue.buffering.max.ms |properties.queue.buffering.max.ms |float|
+|request.required.acks| properties.request.required.acks| int |
 |retry.backoff.ms |properties.retry.backoff.ms| int|
 |receive.message.max.bytes | properties.receive.message.max.bytes | int |
 |ssl.endpoint.identification.algorithm | properties.ssl.endpoint.identification.algorithm | str |
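
A minimal sketch of how the newly documented `properties.request.required.acks` parameter might appear in a `CREATE SINK` statement. The sink name, source name `my_mv`, broker address, topic, and the trailing format clause are all hypothetical, and the exact sink syntax may vary by RisingWave version:

```sql
CREATE SINK my_kafka_sink FROM my_mv
WITH (
    connector = 'kafka',
    properties.bootstrap.server = 'broker1:9092',
    topic = 'my_topic',
    -- Hypothetical value: -1 asks the broker to wait for all
    -- in-sync replicas to acknowledge each write.
    properties.request.required.acks = '-1'
) FORMAT PLAIN ENCODE JSON;
```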

docs/guides/ingest-from-mysql-cdc.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ CREATE USER 'user'@'%' IDENTIFIED BY 'password';
 2. Grant the appropriate privileges to the user.

 ```sql
-GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user'@'%';
+GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user'@'%';
 ```

 3. Finalize the privileges.

docs/guides/sink-to-elasticsearch.md

Lines changed: 8 additions & 2 deletions
@@ -2,14 +2,20 @@
 id: sink-to-elasticsearch
 title: Sink data from RisingWave to Elasticsearch
 description: Sink data from RisingWave to Elasticsearch.
-slug: /sink-to-elasticsearch
+slug: /sink-to-elasticsearch
 ---
 You can deliver the data that has been ingested and transformed in RisingWave to Elasticsearch to serve searches or analytics.

 This guide describes how to sink data from RisingWave to Elasticsearch using the Elasticsearch sink connector in RisingWave.

 [Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. It centrally stores your data for lightning-fast search, fine‑tuned relevancy, and powerful analytics that scale with ease.

+The Elasticsearch sink connector in RisingWave performs index operations via the [bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#docs-bulk-api-request), flushing updates whenever one of these conditions is met:
+
+- 1,000 operations
+- 5 MB of updates
+- 5 seconds since the last flush (assuming new actions are queued)
+
 :::note Beta Feature
 The Elasticsearch sink connector in RisingWave is currently a Beta feature that supports only versions 7.x and 8.x of Elasticsearch. Please contact us if you encounter any issues or have feedback.
 :::
@@ -38,7 +44,7 @@ WITH (
 primary_key = '<primary key of the sink_from object>',
 { index = '<your Elasticsearch index>' | index_column = '<your index column>' },
 url = 'http://<ES hostname>:<ES port>',
-username = '<your ES username>',
+username = '<your ES username>',
 password = '<your password>',
 delimiter='<delimiter>'
 );
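
The template above might be filled in as follows; all names and credentials here are hypothetical placeholders:

```sql
CREATE SINK es_sink FROM my_mv
WITH (
    connector = 'elasticsearch',
    -- Primary key of the materialized view being sunk (hypothetical column).
    primary_key = 'id',
    index = 'my_index',
    url = 'http://localhost:9200',
    username = 'elastic',
    password = 'my_password'
);
```

With the bulk-API flushing behavior described above, writes to `my_index` would appear in Elasticsearch within at most a few seconds of being produced by `my_mv`.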

docs/guides/sink-to-nats.md

Lines changed: 5 additions & 3 deletions
@@ -37,8 +37,9 @@ WITH (
 connect_mode=<connect_mode>
 username='<your user name>',
 password='<your password>'
-jwt=`<your jwt>`,
-nkey=`<your nkey>`
+jwt='<your jwt>',
+nkey='<your nkey>',
+type='<sink data type>'
 );
 ```

@@ -52,7 +53,7 @@ The NATS sink connector in RisingWave provides at-least-once delivery semantics.

 :::note

-According to the [NATS documentation](https://docs.nats.io/running-a-nats-service/nats_admin/jetstream_admin/naming), stream names must adhere to subject naming rules as well as being friendly to the file system. Here are the recommended guidelines for stream names:
+According to the [NATS documentation](https://docs.nats.io/running-a-nats-service/nats_admin/jetstream_admin/naming), stream names must adhere to subject naming rules as well as be friendly to the file system. Here are the recommended guidelines for stream names:

 - Use alphanumeric values.
 - Avoid spaces, tabs, periods (`.`), greater than (`>`) or asterisks (`*`).
@@ -72,3 +73,4 @@ According to the [NATS documentation](https://docs.nats.io/running-a-nats-servic
 |`connect_mode`|Required. Authentication mode for the connection. Allowed values: `plain`: No authentication; `user_and_password`: Use user name and password for authentication. For this option, `username` and `password` must be specified; `credential`: Use JSON Web Token (JWT) and NKeys for authentication. For this option, `jwt` and `nkey` must be specified. |
 |`jwt` and `nkey`|JWT and NKEY for authentication. For details, see [JWT](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/jwt) and [NKeys](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/nkey_auth).|
 |`username` and `password`| Conditional. The client user name and password. Required when `connect_mode` is `user_and_password`.|
+|`type`|Required. Sink data type. Its value should be `append-only`.|
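
A sketch of a NATS sink using the newly required `type` parameter with `user_and_password` authentication; the server address, subject, sink name, source name, and credentials are hypothetical, and parameter names beyond those documented above are assumptions:

```sql
CREATE SINK nats_sink FROM my_mv
WITH (
    connector = 'nats',
    server_url = 'nats-server:4222',
    subject = 'my_subject',
    connect_mode = 'user_and_password',
    username = 'nats_user',
    password = 'nats_password',
    -- Per the parameter table above, type is required and
    -- must be 'append-only'.
    type = 'append-only'
);
```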

docs/manage/view-configure-runtime-parameters.md

Lines changed: 2 additions & 2 deletions
@@ -72,7 +72,7 @@ Below is the detailed information about the parameters you may see after using t
 | lock_timeout | 0 | See [here](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LOCK-TIMEOUT) for details. Unused in RisingWave, supported for compatibility. |
 | row_security | `true`/`false` | See [here](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-ROW-SECURITY) for details. Unused in RisingWave, supported for compatibility. |
 | standard_conforming_strings | on | See [here](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-STANDARD-CONFORMING-STRINGS) for details. |
-| streaming_rate_limit | 0 | Set the maximum number of records per second per source, for each parallelism. The source here refers to an upstream source or snapshot read in the backfilling process. |
+| streaming_rate_limit | `default` / a positive integer / `0` | Set the maximum number of records per second per source, for each parallelism. The source here refers to an upstream source or a snapshot read in the backfilling process.<br/><br/> `SET STREAMING_RATE_LIMIT TO 0` pauses the snapshot read stream for backfills and pauses source reads for sources (previously, this disabled the rate limit within the session). `SET STREAMING_RATE_LIMIT TO DEFAULT` disables the rate limit within the session, but it does not change the rate limits of existing DDLs. |
 | rw_streaming_over_window_cache_policy | full | Cache policy for partition cache in streaming over window. Can be "full", "recent", "recent_first_n" or "recent_last_n". |
 | background_ddl | `true`/`false` | Run DDL statements in background. |
 | server_encoding | UTF8 | Show the server-side character set encoding. At present, this parameter can be shown but not set, because the encoding is determined at database creation time. |
@@ -104,4 +104,4 @@ You can also use the [`ALTER SYSTEM SET`](/sql/commands/sql-alter-system.md) com

 ```sql title="Syntax"
 ALTER SYSTEM SET session_param_name TO session_param_value;
-```
+```
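
The revised `streaming_rate_limit` semantics can be illustrated with the session-level commands below; the limit value `200` is a hypothetical example:

```sql
-- Limit each parallelism of each source to 200 records per second.
SET STREAMING_RATE_LIMIT TO 200;

-- Pause the snapshot read stream for backfills and pause source reads.
SET STREAMING_RATE_LIMIT TO 0;

-- Disable the rate limit within this session; rate limits of
-- existing DDLs are unchanged.
SET STREAMING_RATE_LIMIT TO DEFAULT;
```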

docs/sql/commands/sql-recover.md

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
+---
+id: sql-recover
+title: RECOVER
+description: Trigger recovery manually.
+slug: /sql-recover
+---
+<head>
+  <link rel="canonical" href="https://docs.risingwave.com/docs/current/sql-recover/" />
+</head>
+
+Use the `RECOVER` command to manually trigger an ad-hoc recovery. This is helpful when barrier latency is high and you need to force a recovery so that commands like `CANCEL` or `DROP` can take effect immediately.
+
+```sql title="Syntax"
+RECOVER;
+```
+
+```sql title="Example"
+RECOVER;
+----RESULT
+RECOVER
+```

docs/sql/commands/sql-set-background-ddl.md

Lines changed: 2 additions & 2 deletions
@@ -20,9 +20,9 @@ Use the `SET BACKGROUND_DDL` command to run Data Definition Language (DDL) opera
 SET BACKGROUND_DDL = { true | false };
 ```

-- When `BACKGROUND_DDL` is set to true, any subsequent DDL operations will be executed in the background, allowing you to proceed with other tasks.
+- By default, `BACKGROUND_DDL` is set to `false`, meaning that DDL operations execute in the foreground. The commands `CREATE MATERIALIZED VIEW ON TABLE`, `CREATE MATERIALIZED VIEW ON MATERIALIZED VIEW`, `CREATE MATERIALIZED VIEW ON SOURCE` and `CREATE SINK` will only complete once the backfilling process is finished.

-- When `BACKGROUND_DDL` is set to false (or not set at all), the DDL operations will execute in the foreground.
+- When `BACKGROUND_DDL` is set to `true`, any subsequent DDL operations will be executed in the background, allowing you to proceed with other tasks.

 ## Supported DDL operations
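
The reordered bullets above can be exercised as follows; `my_mv` and `my_table` are hypothetical names:

```sql
-- Enable background DDL for this session.
SET BACKGROUND_DDL = true;

-- This statement returns immediately; backfilling continues
-- in the background while you run other commands.
CREATE MATERIALIZED VIEW my_mv AS SELECT * FROM my_table;

-- Revert to the default foreground behavior.
SET BACKGROUND_DDL = false;
```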

docs/sql/query-syntax/query-syntax-join-clause.md

Lines changed: 42 additions & 6 deletions
@@ -112,7 +112,7 @@ ON s1.id = s2.id and s1.window_start = s2.window_start;

 ## Interval joins

-Window joins require that the two sources have the same window type and window size. This requirement can be too strict in some scenarios. If you want to join two sources that have some time offset, you can create an interval join by specifying an accepted internval range based on watermarks.
+Window joins require that the two sources have the same window type and window size. This requirement can be too strict in some scenarios. If you want to join two sources that have some time offset, you can create an interval join by specifying an accepted interval range based on watermarks.

 The syntax of an interval join is:

@@ -136,23 +136,27 @@ ON s1.id = s2.id and s1.ts between s2.ts and s2.ts + INTERVAL '1' MINUTE;

 ## Process-time temporal joins

-A temporal join is often used to widen a fact table. Its advantage is that it does not require RisingWave to maintain the join state, making it suitable for scenarios where the dimension table is not updated, or where updates to the dimension table do not affect the previously joined results. To further improve performance, you can use the index of a dimension table to form a join with the fact table.
+Process-time temporal joins are divided into two categories: append-only process-time temporal joins and non-append-only process-time temporal joins. See the following instructions for their differences.

-### Syntax
+### Append-only process-time temporal join
+
+An append-only temporal join is often used to widen a fact table. Its advantage is that it does not require RisingWave to maintain the join state, making it suitable for scenarios where the dimension table is not updated, or where updates to the dimension table do not affect the previously joined results. To further improve performance, you can use the index of a dimension table to form a join with the fact table.
+
+#### Syntax

 ```sql
 <table_expression> [ LEFT | INNER ] JOIN <table_expression> FOR SYSTEM_TIME AS OF PROCTIME() ON <join_conditions>;
 ```

-### Notes
+#### Notes

 - The left table expression is an append-only table or source.
 - The right table expression is a table, index or materialized view.
 - The process-time syntax `FOR SYSTEM_TIME AS OF PROCTIME()` is included in the right table expression.
 - The join type is INNER JOIN or LEFT JOIN.
 - The join condition includes the primary key of the right table expression.

-### Example
+#### Example

 If you have an append-only stream that includes messages like below:

@@ -179,11 +183,43 @@ You can use a temporal join to fetch the latest product name and price from the
 SELECT transaction_id, product_id, quantity, sale_date, product_name, price
 FROM sales
 JOIN products FOR SYSTEM_TIME AS OF PROCTIME()
-ON product_id = id
+ON product_id = id WHERE process_time BETWEEN valid_from AND valid_to;
 ```

 | transaction_id | product_id | quantity | sale_date | product_name | price |
 |----------------|------------|----------|------------|--------------|-------|
 | 1 | 101 | 3 | 2023-06-18 | Product A | 25 |
 | 2 | 102 | 2 | 2023-06-19 | Product B | 15 |
 | 3 | 101 | 1 | 2023-06-20 | Product A | 22 |
+
+### Non-append-only process-time temporal join
+
+Compared to the append-only temporal join, the non-append-only temporal join can accommodate non-append-only input for the left table. However, it introduces an internal state to materialize the lookup result for each left-hand side (LHS) insertion. This allows the temporal join operator to retract the join result it sends downstream when update or delete messages arrive.
+
+#### Syntax
+
+The non-append-only temporal join shares the same syntax as the append-only temporal join.
+
+```sql
+<table_expression> [ LEFT | INNER ] JOIN <table_expression> FOR SYSTEM_TIME AS OF PROCTIME() ON <join_conditions>;
+```
+
+#### Example
+
+Now if you update the table `sales`:
+
+```sql
+UPDATE sales SET quantity = quantity + 1;
+```
+
+You will get these results:
+
+| transaction_id | product_id | quantity | sale_date | product_name | price |
+| --- | --- | --- | --- | --- | --- |
+| 1 | 101 | 4 | 2023-06-18 | Product A | 25 |
+| 2 | 102 | 3 | 2023-06-19 | Product B | 15 |
+| 3 | 101 | 2 | 2023-06-20 | Product A | 22 |
+
+:::note
+Every time you update the left-hand side table, it will look up the latest data from the right-hand side table.
+:::
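
The temporal-join notes above mention that an index on the dimension table can further improve performance. A minimal sketch, reusing the `sales`/`products` example from the diff (the index name is hypothetical, and whether the optimizer uses it may depend on the RisingWave version):

```sql
-- Index the dimension table on its primary join key so temporal
-- lookups from the fact table hit the index instead of the base table.
CREATE INDEX idx_products_id ON products (id);

SELECT transaction_id, product_id, quantity, sale_date, product_name, price
FROM sales
JOIN products FOR SYSTEM_TIME AS OF PROCTIME()
ON product_id = id;
```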

docs/sql/syntax/sql-pattern-topn.md

Lines changed: 2 additions & 2 deletions
@@ -69,12 +69,12 @@ INSERT INTO t (x, y, z) VALUES
 ```

 ```sql title="Run a top-N query"
-SELECT r1
+SELECT r
 FROM (
     SELECT
         *,
         row_number() OVER (PARTITION BY x ORDER BY y) r
     FROM t
 ) Q
-WHERE Q.r1 < 10;
+WHERE Q.r < 10;
 ```
