Description
As the title says: I want to rewrite Kafka records (key + payload) from one topic to another, but regardless of whether I use the Sink or the Source variant of this connector, the outbound topic receives the message without a key (the key ends up being null), which also means the partition-ordering guarantee is lost.
My setup:
curl --location --request PUT 'http://localhost:8083/connectors/camel-sink-connector/config' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "connector.class": "org.apache.camel.kafkaconnector.kafka.CamelKafkaSinkConnector",
    "tasks.max": "1",
    "camel.sink.contentLogLevel": "DEBUG",
    "camel.sink.path.topic": "example.topic.2",
    "camel.sink.endpoint.brokers": "kafka-1:9092,kafka-2:9092,kafka-3:9092",
    "topics": "example.topic",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "consumer.override.isolation.level": "read_committed"
  }'
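In case it helps: I sanity-checked that the configuration was actually applied by reading it back through the standard Kafka Connect REST API (assuming the worker listens on localhost:8083 as above):

# read back the applied connector config
curl --location --request GET 'http://localhost:8083/connectors/camel-sink-connector/config'
# check that the connector and its task are RUNNING
curl --location --request GET 'http://localhost:8083/connectors/camel-sink-connector/status'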
I am using https://camel.apache.org/camel-kafka-connector/latest/connectors/camel-kafka-kafka-sink-connector.html, version 0.7.1.
When I send a message (key, payload) to the inbound topic - example.topic - I receive (null, payload) on the outbound topic - example.topic.2. Is there a way to adjust this configuration to get the desired behavior, i.e. the key is also passed to the endpoint?
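For reference, a minimal way to reproduce what I'm seeing with the stock Kafka console tools (broker address and topic names assumed as in the config above):

# produce a keyed record to the inbound topic
kafka-console-producer.sh --bootstrap-server kafka-1:9092 --topic example.topic \
  --property parse.key=true --property key.separator=:
# then type e.g.:  my-key:my-payload

# consume from the outbound topic with keys printed; the key comes back as null
kafka-console-consumer.sh --bootstrap-server kafka-1:9092 --topic example.topic.2 \
  --from-beginning --property print.key=true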