
Provide ability to disable/limit retries in batch listener when using DefaultAfterRollbackProcessor #2588

Closed
@RuslanHryn

Description


Expected Behavior
A configuration option to disable or limit retries in a batch listener when using DB transactions and the DefaultAfterRollbackProcessor.
Example listener: @KafkaListener(topics = BATCH_TOPIC, batch = "true")
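For context, a complete batch listener of that shape might look like the following sketch (the topic name, class name, and payload types here are assumptions, not taken from the issue):

```java
// Hypothetical batch listener; any exception thrown here rolls back the DB
// transaction and hands the whole batch to the AfterRollbackProcessor.
@Component
public class OrderBatchListener {

    @KafkaListener(topics = "batch-topic", batch = "true")
    public void listen(List<ConsumerRecord<String, String>> records) {
        for (ConsumerRecord<String, String> record : records) {
            // DB work inside the transaction; a failure here is what the
            // issue describes: the batch is retried indefinitely.
        }
    }
}
```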

Current Behavior
Currently, the DefaultAfterRollbackProcessor retries the failing message indefinitely when a batch listener is used with DB transactions.

Context
The main problem is that if we forget to handle a failing message inside the listener, the failed message will be stuck until we take some manual action to skip it. That is why we would like a configuration option to disable or limit retries at the configuration level.
I tried to find a way to disable retries for batch listeners, but without success:
BackOff backOff = new FixedBackOff(0L, 0L);
new DefaultAfterRollbackProcessor<>(deadLetterPublishingRecoverer, backOff);
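To make the role of the BackOff concrete, below is a small standalone sketch of the FixedBackOff(interval, maxAttempts) contract that the processor consults between retries. It mirrors the behavior of Spring's org.springframework.util.backoff.FixedBackOff for illustration only; it is not the real class:

```java
// Standalone illustration of the FixedBackOff contract: nextBackOff() returns
// the fixed interval until maxAttempts is exhausted, then STOP (-1).
public class FixedBackOffSketch {

    static final long STOP = -1L;

    private final long interval;
    private final long maxAttempts;
    private long attempts = 0;

    FixedBackOffSketch(long interval, long maxAttempts) {
        this.interval = interval;
        this.maxAttempts = maxAttempts;
    }

    long nextBackOff() {
        return (attempts++ < maxAttempts) ? interval : STOP;
    }

    public static void main(String[] args) {
        // FixedBackOff(0, 0): zero delay, zero retries -- the very first call
        // already signals STOP, so the record should be recovered (e.g.
        // dead-lettered) instead of retried.
        FixedBackOffSketch noRetries = new FixedBackOffSketch(0L, 0L);
        System.out.println(noRetries.nextBackOff()); // -1 (STOP)

        // FixedBackOff(1000, 2): two retries, 1s apart, then STOP.
        FixedBackOffSketch twoRetries = new FixedBackOffSketch(1000L, 2L);
        System.out.println(twoRetries.nextBackOff()); // 1000
        System.out.println(twoRetries.nextBackOff()); // 1000
        System.out.println(twoRetries.nextBackOff()); // -1 (STOP)
    }
}
```

The report is that for batch listeners this limit is ignored: the processor keeps seeking and retrying regardless of the configured maxAttempts.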

A workaround is to force recoverable = true on every call in order to limit retries:

return new DefaultAfterRollbackProcessor<>(deadLetterPublishingRecoverer, backOff) {
    @Override
    public void process(
            @NotNull List<ConsumerRecord<Object, Object>> consumerRecords,
            @NotNull org.apache.kafka.clients.consumer.Consumer<Object, Object> consumer,
            MessageListenerContainer container,
            @NotNull Exception exception,
            boolean recoverable,
            @NotNull ContainerProperties.EOSMode eosMode) {
        // Force all records to be recoverable so the configured back-off
        // limits the retries.
        super.process(consumerRecords, consumer, container, exception, true, eosMode);
    }
};
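To take effect, a processor built this way still has to be registered on the listener container factory; a minimal sketch, assuming the bean names and Object/Object generic types (none of which appear in the issue):

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        AfterRollbackProcessor<Object, Object> afterRollbackProcessor) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    // With recoverable forced to true, the back-off limit applies and
    // exhausted records are handed to the recoverer (e.g. a dead-letter topic).
    factory.setAfterRollbackProcessor(afterRollbackProcessor);
    return factory;
}
```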
