
Fix GroupIdToPartitionKeyRule to prefer outgoing GroupId over originator's consumer group#2338

Merged
jeremydmiller merged 3 commits intoJasperFx:mainfrom
lyall-sc:fix/group-id-to-kafka-partition-key
Mar 24, 2026
Conversation

@lyall-sc
Contributor

This fix addresses two cases in which the outgoing Kafka partition key was observed to be either:

  • the Kafka consumer group name (e.g. the application name)
  • the Wolverine message id

The first case is caused by the Kafka listener setting envelope.GroupId = config.GroupId on every incoming message. If the handler for the Kafka topic cascades an outgoing message, GroupIdToPartitionKeyRule runs and copies this GroupId value into the outgoing partition key.

The second case happens when the handler is processing a message from a local queue, or when publishing fresh from application code. At routing time, DetermineGroupId correctly resolves outgoing.GroupId = stream id (via ByPropertyNamed), but GroupIdToPartitionKeyRule only looks at originator.Envelope?.GroupId, which is null or unrelated, so it sets nothing and outgoing.PartitionKey stays null. CreateMessage in the Kafka producer then reaches its fallback: envelope.Id.ToString().
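Concretely, the key selection that produces the message-id behavior looks roughly like this (a sketch of the described fallback only; the Envelope stand-in and the KeySelection/SelectKey names are illustrative, not the actual producer code):

```csharp
using System;

// Simplified stand-in for Wolverine's envelope with only the
// fields relevant to key selection.
public class Envelope
{
    public Guid Id { get; } = Guid.NewGuid();
    public string? PartitionKey { get; set; }
}

public static class KeySelection
{
    // With PartitionKey left null, the Kafka message key degrades
    // to the Wolverine message id — the observed bug.
    public static string SelectKey(Envelope envelope) =>
        envelope.PartitionKey ?? envelope.Id.ToString();
}
```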

Fix the propagation priority in ApplyCorrelation:

  1. Outgoing message's own GroupId (set by ByPropertyNamed) - the business partition key
  2. Originator's PartitionKey (the actual Kafka message key of the incoming message)
  3. Originator's GroupId as a last resort
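The corrected precedence can be sketched as follows (a minimal sketch with a simplified Envelope stand-in; member names mirror the PR description, not the exact Wolverine API):

```csharp
// Simplified stand-in for Wolverine's envelope; only the fields
// this rule touches.
public class Envelope
{
    public string? GroupId { get; set; }
    public string? PartitionKey { get; set; }
}

public class GroupIdToPartitionKeyRule
{
    public void ApplyCorrelation(Envelope? originator, Envelope outgoing)
    {
        // Respect an explicitly set partition key.
        if (outgoing.PartitionKey != null) return;

        // 1. The outgoing message's own GroupId (the business key
        //    resolved by ByPropertyNamed / UseInferredMessageGrouping),
        // 2. then the incoming Kafka message key,
        // 3. then the originator's GroupId as a last resort.
        outgoing.PartitionKey = outgoing.GroupId
            ?? originator?.PartitionKey
            ?? originator?.GroupId;
    }
}
```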

Also implement Modify so that messages published outside a handler context (e.g. background services) have their GroupId promoted to PartitionKey when no explicit key is set.
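A minimal sketch of that promotion, assuming an envelope-rule shape like the one described (the Envelope stand-in and class name are illustrative, not the exact Wolverine signatures):

```csharp
// Simplified stand-in for Wolverine's envelope.
public class Envelope
{
    public string? GroupId { get; set; }
    public string? PartitionKey { get; set; }
}

public static class GroupIdPromotion
{
    // Sketch of the Modify hook: for messages published outside a
    // handler context (no originator), promote GroupId to PartitionKey
    // when no explicit key has been set.
    public static void Modify(Envelope envelope)
    {
        envelope.PartitionKey ??= envelope.GroupId;
    }
}
```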

Stop stamping envelope.GroupId = config.GroupId in KafkaListener and KafkaTopicGroupListener - the consumer group name is not meaningful metadata on the incoming envelope.

…tor's consumer group

Previously, the Kafka listeners stamped the consumer group name (config.GroupId) on every
received envelope, and GroupIdToPartitionKeyRule propagated it as the partition key on outgoing messages.
This caused cascaded messages to use the application's consumer group name (e.g.
"my-application-name") as the partition key instead of the actual business key derived
from the message property via ByPropertyNamed / UseInferredMessageGrouping.

Fix the propagation priority in ApplyCorrelation:
1. Outgoing message's own GroupId (set by ByPropertyNamed) — the business partition key
2. Originator's PartitionKey (the actual Kafka message key of the incoming message)
3. Originator's GroupId as a last resort

Also implement Modify so that messages published outside a handler context (e.g. background
services) have their GroupId promoted to PartitionKey when no explicit key is set.

Stop stamping envelope.GroupId = config.GroupId in KafkaListener and KafkaTopicGroupListener
— the consumer group name is not meaningful metadata on the incoming envelope and was the
root cause of the incorrect propagation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
var envelope = mapper!.CreateEnvelope(result.Topic, message);
envelope.Offset = result.Offset.Value;
envelope.MessageType ??= _messageTypeName;
envelope.GroupId = config.GroupId;
Member


This change will have to be "opt in" somehow

Contributor Author


I added a bool on the KafkaTopic, a method on KafkaListenerConfiguration to change its value, and a doc update explaining the change (well, I told Claude what to do). Reckon this is okay?
