This project sets up Apache Kafka using Docker containers and demonstrates a Producer-Consumer workflow in Python. The producer (`producer.py`) generates transaction data, while the consumer (`app.py`) consumes the data from a Kafka topic.
Before you start, ensure you have the following installed:
- Docker
- Docker Compose
- Python 3.x
- Python Kafka client library: `pip install kafka-python`
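Before starting anything, you can confirm from Python that the client library installed correctly; a minimal check using only the standard library:

```python
import importlib.util

def kafka_client_available() -> bool:
    """Return True if the kafka-python package can be imported."""
    return importlib.util.find_spec("kafka") is not None

# Fail fast with a helpful message if the library is missing.
if not kafka_client_available():
    print("kafka-python is not installed; run: pip install kafka-python")
```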
```text
.
├── docker-compose.yml   # Docker Compose file for Kafka + ZooKeeper
├── producer.py          # Kafka producer that generates transactions
├── app.py               # Kafka consumer that reads transactions
└── README.md            # Setup guide
```
Make sure you are in the folder that contains the `docker-compose.yml` file, then start the containers:

```bash
cd /path/to/your/project
docker-compose up -d
```
- The `-d` flag runs the containers in the background.
- This starts:
  - ZooKeeper (required by Kafka)
  - the Kafka broker
Verify that both containers are running:

```bash
docker ps
```

You should see containers for `kafka` and `zookeeper`.
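Besides `docker ps`, you can check from Python that the broker's port is reachable; a small sketch (host and port are the defaults used in this setup, not read from your configuration):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Once the containers are up, the Kafka broker should answer on 9092.
print("broker reachable:", port_open("localhost", 9092))
```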
Open a shell inside the Kafka container and create the topic:

```bash
docker exec -it <kafka-container-name> bash
bin/kafka-topics.sh --create \
  --topic transactions \
  --bootstrap-server localhost:9092 \
  --partitions 1 \
  --replication-factor 1
```

Confirm the topic exists:

```bash
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```
You should see `transactions` in the list.
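The same check can be done from Python with kafka-python. A sketch, assuming the default broker address; `list_topics` needs a running broker, while `topic_exists` is a pure helper:

```python
def list_topics(bootstrap: str = "localhost:9092"):
    """Return the set of topic names known to the broker.

    Requires kafka-python and a broker running at `bootstrap`.
    """
    from kafka import KafkaConsumer  # imported here so the helper below works offline
    consumer = KafkaConsumer(bootstrap_servers=bootstrap)
    try:
        return consumer.topics()
    finally:
        consumer.close()

def topic_exists(name: str, topics) -> bool:
    """Check a topic name against an already-fetched set of topics."""
    return name in set(topics)

# With the broker up: topic_exists("transactions", list_topics()) should be True.
```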
This script generates transaction data and sends it to the Kafka topic `transactions`:

```bash
python producer.py
```
You should see messages being sent to Kafka.
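The exact contents of `producer.py` are project-specific; a minimal sketch of what such a producer might look like (the field names, the JSON encoding, and the one-second send interval are assumptions):

```python
import json
import random
import time
import uuid

def make_transaction() -> dict:
    """Generate one fake transaction record (field names are illustrative)."""
    return {
        "transaction_id": str(uuid.uuid4()),
        "amount": round(random.uniform(1.0, 500.0), 2),
        "timestamp": time.time(),
    }

def serialize(record: dict) -> bytes:
    """Encode a record as UTF-8 JSON so a consumer can json.loads() it."""
    return json.dumps(record).encode("utf-8")

def run_producer(bootstrap: str = "localhost:9092",
                 topic: str = "transactions",
                 count: int = 10) -> None:
    """Stream `count` transactions to Kafka; needs kafka-python and a running broker."""
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers=bootstrap)
    for _ in range(count):
        record = make_transaction()
        producer.send(topic, serialize(record))
        print("sent:", record)
        time.sleep(1)
    producer.flush()
```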
This script consumes messages from the `transactions` topic and displays/uses them:

```bash
python app.py
```
You should see messages being received in real time.
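Likewise, a hedged sketch of what the consumer side (`app.py`) might look like; the JSON message format matches the producer sketch above and is an assumption, not the project's actual wire format:

```python
import json

def decode(raw: bytes) -> dict:
    """Decode a UTF-8 JSON message body into a dict."""
    return json.loads(raw.decode("utf-8"))

def run_consumer(bootstrap: str = "localhost:9092",
                 topic: str = "transactions") -> None:
    """Print transactions as they arrive; needs kafka-python and a running broker."""
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=bootstrap,
        auto_offset_reset="earliest",  # read the topic from the beginning
        value_deserializer=decode,
    )
    for message in consumer:
        print("received:", message.value)
```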
Stop Kafka and ZooKeeper:

```bash
docker-compose down
```

Restart later with:

```bash
docker-compose up -d
```
- Kafka default ports:
  - 9092 → Kafka broker
  - 2181 → ZooKeeper
- Both `producer.py` and `app.py` are configured to use the topic `transactions`. If you change the topic name or Kafka connection settings, update the scripts accordingly.
- Logs can be checked using `docker logs <kafka-container-name>`.
✅ With this setup, you can:

- Run Kafka & ZooKeeper inside Docker
- Produce transaction data via `producer.py`
- Consume transaction data via `app.py`