This project demonstrates a credit card transaction flow with event lineage tracing using OpenTelemetry (OTel) and Kafka. It simulates a real-time financial system with multiple producers, stream processors, and consumers, instrumented end-to-end to support traceable and observable pipelines. This version runs without Kafka Connect. For the variant that includes Kafka Connect, refer to the demo-with-connect branch.
📽️ Demo Slides · 📚 Project Documentation
This demo tracks distributed events across services using OpenTelemetry's Java agent and custom extensions.
Core Components:
- OpenTelemetry Java Agent for auto-instrumentation
- Custom Header Extensions for tracking lineage
- OpenTelemetry Collector for telemetry routing
- Kafka Streams for stateful processing
- Jaeger, Prometheus, and Splunk for observability
Infrastructure:
- Single-node Kafka cluster with Schema Registry
- Services written in Java, built with Maven
- Containerised via Docker Compose
The simulation begins with the data injector, which pre-loads account and merchant datasets and continuously emits synthetic events for 60 seconds:
Type | Frequency | Payloads | Route |
---|---|---|---|
Account Open | every 10s | New account creation | → account-event-producer → Kafka topic: account |
Account Close | every 30s (starts after 30s) | Close existing accounts | → account-event-producer |
Transactions | every 100ms | Deposits, Payments, Failures | → transaction-producer → Kafka topic: transaction |
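Taken together, these frequencies determine how many events a single 60-second run emits. As a rough illustration (assuming each schedule first fires after one full period and the run cuts off at exactly 60 s — the real injector's timing may differ):

```java
public class InjectorSchedule {
    // Events emitted during a run of durationMs by a schedule that fires every
    // periodMs, first firing at initialDelayMs (illustrative model of the injector).
    public static long emissions(long durationMs, long periodMs, long initialDelayMs) {
        if (durationMs < initialDelayMs) return 0;
        return (durationMs - initialDelayMs) / periodMs + 1;
    }

    public static void main(String[] args) {
        System.out.println(emissions(60_000, 100, 100));       // transactions: 600
        System.out.println(emissions(60_000, 10_000, 10_000)); // account opens: 6
        System.out.println(emissions(60_000, 30_000, 30_000)); // account closes: 2
    }
}
```

So one run produces on the order of 600 transactions but only a handful of account lifecycle events.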
- Ingests account events into a KTable
- Handles state transitions (open → active, close → inactive)
- Outputs updates to Kafka topic account-update
- Joins transactions with the account KTable
- Validates:
  - Unknown or inactive accounts → reject
  - Active accounts → check balance
- Updates balances and emits results to transaction-update and balance-update
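The join-and-validate step above can be sketched as a plain Java decision function. The class, the Outcome values, and the cents-based balance model are illustrative, not the actual kstream-app types:

```java
// Sketch of the per-record checks applied after the transaction/account join.
public class TransactionValidator {

    public enum Outcome { REJECT_UNKNOWN, REJECT_INACTIVE, REJECT_INSUFFICIENT, APPROVE }

    // Simplified stand-in for the account state held in the KTable.
    public static final class Account {
        final boolean active;
        final long balanceCents;
        public Account(boolean active, long balanceCents) {
            this.active = active;
            this.balanceCents = balanceCents;
        }
    }

    // Mirrors the rules: unknown or inactive -> reject, active -> check balance.
    public static Outcome validate(Account account, long amountCents) {
        if (account == null) return Outcome.REJECT_UNKNOWN;   // no entry in the KTable
        if (!account.active) return Outcome.REJECT_INACTIVE;  // closed account
        if (amountCents > account.balanceCents) return Outcome.REJECT_INSUFFICIENT;
        return Outcome.APPROVE; // emit to transaction-update and balance-update
    }

    public static void main(String[] args) {
        Account acc = new Account(true, 10_000);
        System.out.println(validate(acc, 2_500));  // APPROVE
        System.out.println(validate(null, 100));   // REJECT_UNKNOWN
    }
}
```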
Sink Application | Kafka Topic | Description |
---|---|---|
account-updates-sink | account-update | Processes account state changes |
transaction-sink | transaction-update | Handles transaction outcomes |
balance-updates-sink | balance-update | Updates running account balances |
All services are instrumented with the OpenTelemetry Java agent (v1.13.0) and a custom extension for lineage tracking:
- Propagates headers like account_nr_header and system_id
- Automatically generates and correlates spans
- Enables traceability across producer, stream, and sink layers
- Visualisable via Jaeger and searchable in Splunk
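Kafka record headers are name/byte-array pairs, so the lineage headers above can be sketched as a small builder. The helper class is illustrative, not the actual extension API:

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class LineageHeaders {
    // Builds the two lineage headers as they would be attached to an outgoing
    // Kafka record (header values are raw UTF-8 bytes).
    public static Map<String, byte[]> build(String accountNr, String systemId) {
        Map<String, byte[]> headers = new LinkedHashMap<>();
        headers.put("account_nr_header", accountNr.getBytes(StandardCharsets.UTF_8));
        headers.put("system_id", systemId.getBytes(StandardCharsets.UTF_8));
        return headers;
    }

    public static void main(String[] args) {
        Map<String, byte[]> h = build("ACC-42", "transaction-producer");
        h.forEach((k, v) -> System.out.println(k + "=" + new String(v, StandardCharsets.UTF_8)));
    }
}
```

Downstream consumers read these headers back to correlate spans across the producer, stream, and sink layers.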
- Docker
- Java 11+
- Maven
```
./run_demo.sh
```
This builds all services, starts the containers, and injects data automatically. Wait ~1–2 minutes for full initialisation.
Tool | URL |
---|---|
Jaeger (Tracing) | http://localhost:16686 |
Confluent Control Center | http://localhost:9021 |
Prometheus (Metrics) | http://localhost:9090 |
Splunk (Logs) | http://localhost:8000 (admin/abcd1234) |
OTel Collector Metrics | http://localhost:8888 |
```
docker-compose down -v   # Remove all containers and volumes
docker-compose up -d     # Restart without rebuild
```
Component | Description |
---|---|
demo-data-injector | Emits account and transaction events over HTTP to simulate activity |
account-event-producer | REST service posting account events to Kafka |
transaction-producer | REST service posting transactions to Kafka |
kstream-app | Kafka Streams processor for stateful validation and transformation |
account-updates-sink | Kafka consumer writing processed account states |
balance-updates-sink | Kafka consumer maintaining account balances |
transaction-sink | Kafka consumer processing transaction results |
- Auto-instrumentation: Kafka, HTTP, JVM
- Custom Propagation: Application-specific headers for correlation
- Span Creation & Linking: Full trace graph from injector to sinks
- Prometheus Metrics: JVM, Kafka, and custom app metrics
- Splunk Logs: Searchable trace events and errors
- This version does not use Kafka Connect. For that version, see the demo-with-connect branch.
- All services are built to simulate a realistic, multi-service topology for testing observability, trace correlation, and lineage propagation.