This project implements an event-driven outbox pattern using Kafka, Debezium, Kafka Streams, and Schema Registry to propagate state changes from a PostgreSQL-based microservice into a shared, normalized event stream that can be consumed by downstream services (e.g., shipment service, S3 sinks, etc.).
- Decouple producers from consumers — expose a stable, versioned public contract.
- Normalize change events — transform raw CDC JSON into Avro records.
- Guarantee compatibility — enforce schemas via Confluent Schema Registry.
- Polyglot friendly — consumers only need the Avro schema, not internal DB details.
- Debezium monitors changes in the outbox table (`outboxevent`) via logical decoding.
- Kafka Streams normalizes and transforms the private JSON-based stream into a public Avro-based stream.
- Schema Registry ensures schema evolution and validation.
- Consumers (e.g., shipment-service) deserialize strongly typed Avro messages.
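The normalization step can be illustrated with a small, dependency-free sketch. The envelope field names (`op`, `after`) and the `__debezium_unavailable_value` placeholder follow Debezium's change-event format, but the class name and filtering rules here are illustrative assumptions, not the project's actual Kafka Streams topology:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of the normalization step: unwrap a Debezium change-event
// envelope and drop fields that would break deserialization downstream.
// (The real project does this inside a Kafka Streams topology; this is a
// plain-Java illustration of the same idea.)
public class CdcNormalizer {

    static final String UNAVAILABLE = "__debezium_unavailable_value";

    // Extract the "after" state from a CDC envelope and filter out null or
    // unavailable (TOAST placeholder) fields.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> normalize(Map<String, Object> envelope) {
        Map<String, Object> after = (Map<String, Object>) envelope.get("after");
        Map<String, Object> clean = new HashMap<>();
        if (after == null) return clean; // delete event: nothing to emit
        for (Map.Entry<String, Object> e : after.entrySet()) {
            Object v = e.getValue();
            if (v == null || UNAVAILABLE.equals(v)) continue;
            clean.put(e.getKey(), v);
        }
        return clean;
    }

    public static void main(String[] args) {
        Map<String, Object> after = new HashMap<>();
        after.put("id", "42");
        after.put("payload", UNAVAILABLE); // unchanged TOAST column placeholder
        after.put("note", null);

        Map<String, Object> envelope = new HashMap<>();
        envelope.put("op", "u");
        envelope.put("after", after);

        System.out.println(normalize(envelope)); // only "id" survives
    }
}
```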
| Module | Description |
|---|---|
| `order-service` | Produces domain events into the `outboxevent` table (CDC source). |
| `debezium` | PostgreSQL connector with logical decoding. |
| `kafka-streams` | Reads from the private JSON topic, normalizes into typed Avro records. |
| `shipment-service` | Sample consumer of the public outbox stream using Specific Avro. |
- Ensures reliable propagation of domain events by writing to an outbox table as part of the local transaction.
- Eliminates dual-write problems between DB and Kafka.
- Versioned topics (`orders.public.outbox.v1`) separate schema contracts.
- Avro ensures serialization compatibility with enforced schema types.
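The dual-write guarantee can be made concrete with a small in-memory sketch: the order row and its outbox event are staged together and committed atomically, so either both become visible to Debezium or neither does. All names here are hypothetical; the real project persists both rows to PostgreSQL inside the service's local transaction.

```java
import java.util.ArrayList;
import java.util.List;

// In-memory illustration of the transactional outbox idea: a failure between
// the domain write and the event write cannot leave the DB and Kafka out of
// sync, because both changes commit (or roll back) together.
public class OutboxSketch {

    final List<String> orders = new ArrayList<>();       // stands in for the orders table
    final List<String> outboxEvents = new ArrayList<>(); // stands in for the outboxevent table

    // Both writes succeed, or neither is applied (simulating a DB transaction).
    public void placeOrder(String order, String event, boolean failMidway) {
        List<String> stagedOrders = new ArrayList<>(orders);
        List<String> stagedEvents = new ArrayList<>(outboxEvents);
        stagedOrders.add(order);
        if (failMidway) {
            // Crash between the two writes: staged changes are discarded,
            // which is what a rolled-back transaction guarantees.
            return;
        }
        stagedEvents.add(event);
        // "Commit": publish both staged changes at once.
        orders.clear(); orders.addAll(stagedOrders);
        outboxEvents.clear(); outboxEvents.addAll(stagedEvents);
    }

    public static void main(String[] args) {
        OutboxSketch db = new OutboxSketch();
        db.placeOrder("order-1", "OrderCreated:order-1", false);
        db.placeOrder("order-2", "OrderCreated:order-2", true); // simulated crash
        // order-2 is absent from BOTH tables, so Debezium never sees a
        // half-written state.
        System.out.println(db.orders);       // [order-1]
        System.out.println(db.outboxEvents); // [OrderCreated:order-1]
    }
}
```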
- Debezium reads logical changes using PostgreSQL's logical replication.
- Handles special PostgreSQL cases such as TOAST columns.
- Debezium connector transforms the CDC stream:
  - `ExtractNewRecordState` to unwrap payloads
  - `ByLogicalTableRouter` to rename topics
- Kafka Streams performs final enrichment and emits versioned Avro events.
- Null or `__debezium_unavailable_value` fields are filtered to prevent deserialization failures.
- Consumers can independently subscribe to public topics without breaking changes via Avro + topic versioning.
- Tombstones can be handled if `tombstones.on.delete=true`, though dropped by default.
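The connector options above are typically wired together in the Debezium connector configuration, along these lines (the connector name, table list, and routing values here are illustrative; check the project's actual connector JSON for the real settings):

```json
{
  "name": "outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "table.include.list": "public.outboxevent",
    "tombstones.on.delete": "false",
    "transforms": "unwrap,route",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.route.type": "io.debezium.transforms.ByLogicalTableRouter",
    "transforms.route.topic.regex": "(.*)",
    "transforms.route.topic.replacement": "orders.private.outbox"
  }
}
```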
Run `make up`, then interact via:

- `localhost:8080` → Order service API
- `localhost:8091` → Shipment service consumer logs
To generate Java POJOs from your Avro schema files, run:
`mvn generate-sources`

- The Schema Registry API is Not How You Use Schema with Kafka
- Is Kafka a Database (with ksqlDB)?
- Database Inside Out – Martin Kleppmann
- Handling Unchanged Postgres TOAST Values
- Debezium PostgreSQL Connector Docs
- Add schema validation and metrics.
- Implement event reprocessing and dead-letter handling.
- Enable S3 archival via Kafka Connect + LocalStack.
- Integrate observability with Prometheus and Micrometer.
