Release 2.3.1 #824

Merged: 16 commits, Sep 19, 2024
23 changes: 23 additions & 0 deletions release-notes/2.3.1.md
@@ -0,0 +1,23 @@
## What's Changed
* Update state_storage.md by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/771
* Update README.md to include state storage link by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/772
* Updated offset state storage documentation. by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/774
* Update state_storage.md to include schema storage. by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/775
* Update state_storage.md to include postgresql offsets by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/776
* Update quickstart.md to start sink connector service by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/785
* Added logic to get latest release from github by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/786
* Update quickstart.md with script to set environment variable. by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/787
* Update quickstart_postgres.md by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/788
* Added updates to script by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/789
* Added single.threaded flag to Mariadb test to validate replication in… by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/781
* Update production_setup.md by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/796
* Changed logging level to info for STRUCT EMPTY not a valid CDC record by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/795
* Update production_setup.md to include max_paritions_per_insert by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/804
* Update production_setup.md , fixed broken link by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/809
* Update config.ym by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/813
* Update README.md to include initial data dump/load by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/821
* 794 change logging level of struct empty not a valid cdc record + record to info by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/825
* 801 records are not acknowledged or the offsets are not updated in singlethreaded mode by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/827


**Full Changelog**: https://github.com/Altinity/clickhouse-sink-connector/compare/2.3.0...2.3.1
11 changes: 9 additions & 2 deletions sink-connector-lightweight/docker/log4j2.xml
@@ -11,12 +11,19 @@
additivity="false">
<AppenderRef ref="console"/>
</Logger>-->
<Logger name="io.debezium" level="ERROR"
additivity="false">
<AppenderRef ref="console"/>
</Logger>
<Logger name="com.clickhouse" level="ERROR"
additivity="false">
<AppenderRef ref="console"/>
</Logger>

<Root level="info" additivity="false">
<Logger name="io.debezium" level="ERROR"
additivity="false">
<AppenderRef ref="console"/>
</Logger>
<Root level="warn" additivity="false">
<AppenderRef ref="console" />
</Root>
</Loggers>
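Taken together, the log4j2 changes silence Debezium and ClickHouse client logging below ERROR and lower the root level from info to warn. A sketch of the resulting Loggers section (one reading of the diff, assuming the added ERROR-level loggers replace the old info-level Root; not the exact merged file):

```xml
<!-- Sketch only: reconstructed from the diff above, not copied from the repo. -->
<Loggers>
    <!-- Suppress Debezium and ClickHouse client chatter below ERROR. -->
    <Logger name="io.debezium" level="ERROR" additivity="false">
        <AppenderRef ref="console"/>
    </Logger>
    <Logger name="com.clickhouse" level="ERROR" additivity="false">
        <AppenderRef ref="console"/>
    </Logger>
    <!-- Root raised from info to warn. -->
    <Root level="warn" additivity="false">
        <AppenderRef ref="console"/>
    </Root>
</Loggers>
```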
@@ -193,7 +193,7 @@ private ClickHouseStruct processEveryChangeRecord(Properties props, ChangeEvent<
Struct struct = (Struct) sr.value();

if (struct == null) {
log.warn(String.format("STRUCT EMPTY - not a valid CDC record + Record(%s)", record.toString()));
log.debug(String.format("STRUCT EMPTY - not a valid CDC record + Record(%s)", record.toString()));
return null;
}
if (struct.schema() == null) {
@@ -31,7 +31,7 @@
* Integration test to validate support for replication of multiple databases.
*/
@Testcontainers
@DisplayName("Integration Test that validates basic replication of MariaDB databases")
@DisplayName("Integration Test that validates basic replication of MariaDB databases in single threaded mode")
public class MariaDBIT
{

@@ -66,7 +66,7 @@ public void startContainers() throws InterruptedException {
clickHouseContainer.start();
}

@DisplayName("Integration Test that validates handle of JSON data type from MySQL")
@DisplayName("Integration Test that validates replication of MariaDB databases in single.threaded mode")
@Test
public void testMultipleDatabases() throws Exception {

@@ -77,6 +77,7 @@ public void testMultipleDatabases() throws Exception {
// Set the list of databases captured.
props.put("database.whitelist", "employees,test_db,test_db2");
props.put("database.include.list", "employees,test_db,test_db2");
props.put("single.threaded", true);

ExecutorService executorService = Executors.newFixedThreadPool(1);
executorService.execute(() -> {
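The key addition in the MariaDB test is the `single.threaded` flag. A minimal, self-contained sketch of how the test's properties are assembled (property keys copied from the diff; the surrounding class name is illustrative, not from the repo):

```java
import java.util.Properties;

public class SingleThreadedProps {
    // Builds Debezium-style properties as in the MariaDB integration test.
    public static Properties build() {
        Properties props = new Properties();
        // Both keys are set in the test, matching older and newer
        // Debezium property names for the database include list.
        props.put("database.whitelist", "employees,test_db,test_db2");
        props.put("database.include.list", "employees,test_db,test_db2");
        // New in 2.3.1: process and acknowledge records on a single thread
        // so offsets are committed in order (see PR #827).
        props.put("single.threaded", true);
        return props;
    }
}
```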
@@ -161,6 +161,14 @@ public void persistRecords(List<ClickHouseStruct> records) {
//throw new RuntimeException(e);
log.error("Error marking records as processed"+ e);
}

if(record.isLastRecordInBatch()) {
try {
record.getCommitter().markBatchFinished();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
});
}
} catch(Exception e) {
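The `persistRecords` change acknowledges each record and, when the last record of a batch is seen, calls `markBatchFinished()` so the engine can advance its offsets. A simplified, self-contained model of that pattern (the `Committer` class here is a stand-in for Debezium's `RecordCommitter`, not the connector's actual types):

```java
import java.util.List;

public class BatchCommitSketch {
    // Stand-in for Debezium's RecordCommitter; illustrative only.
    public static class Committer {
        public int processed = 0;
        public boolean batchFinished = false;
        public void markProcessed(String record) { processed++; }
        public void markBatchFinished() { batchFinished = true; }
    }

    // Mirrors the fix: ack every record, then close out the batch exactly
    // once, on the last record, so offsets are updated.
    public static Committer persist(List<String> records) {
        Committer committer = new Committer();
        for (int i = 0; i < records.size(); i++) {
            committer.markProcessed(records.get(i));
            boolean isLastRecordInBatch = (i == records.size() - 1);
            if (isLastRecordInBatch) {
                committer.markBatchFinished();
            }
        }
        return committer;
    }
}
```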