⚠️ IMPORTANT: This project is for learning purposes only. No security measures have been implemented. DO NOT use in production environments.
A comprehensive Kubernetes learning environment featuring ELK Stack (Elasticsearch, Logstash, Kibana, Filebeat), Grafana, Loki, Promtail, RabbitMQ, PostgreSQL, Cerebro, and C# applications with MassTransit for event-driven workflows. Everything is auth-free, password-free, and fully open for educational purposes.
```
                     Kubernetes Cluster (kind)

┌────────────────┐   ┌────────────────┐   ┌────────────────┐
│    API App     │   │     Worker     │   │    CronJob     │
│  (Deployment)  │   │  (Deployment)  │   │   (CronJob)    │
│ + HPA          │   │                │   │                │
│ + MassTransit  │   │ + MassTransit  │   │ + MassTransit  │
│   Publisher    │   │   Consumer     │   │   Producer     │
└───────┬────────┘   └───────┬────────┘   └───────┬────────┘
        │                    │                    │
        └────────────────────┼────────────────────┘
                             │
   ┌─────────────────────────┴─────────────────────────┐
   │               RabbitMQ (Deployment)               │
   │           (Message Broker - Auth-free)            │
   └───────────────────────────────────────────────────┘

   ┌───────────────────────────────────────────────────┐
   │               Filebeat (DaemonSet)                │
   │               (Collects pod logs)                 │
   └─────────────────────────┬─────────────────────────┘
                             │
   ┌─────────────────────────┴─────────────────────────┐
   │               Logstash (Deployment)               │
   │        (Log processing and transformation)        │
   └─────────────────────────┬─────────────────────────┘
                             │
   ┌─────────────────────────┴─────────────────────────┐
   │            Elasticsearch (Deployment)             │
   │            (Log storage and indexing)             │
   └─────────────────────────┬─────────────────────────┘
                             │
   ┌─────────────────────────┴─────────────────────────┐
   │                Kibana (Deployment)                │
   │                (Log visualization)                │
   └───────────────────────────────────────────────────┘

   ┌──────────────────────┐     ┌──────────────────────┐
   │       Grafana        │     │         Loki         │
   │   (Anonymous mode)   │     │  (Log aggregation)   │
   │      + Promtail      │     │      + Promtail      │
   └──────────────────────┘     └──────────────────────┘

   ┌──────────────────────┐     ┌──────────────────────┐
   │       Cerebro        │     │       Postgres       │
   │  (ES Management UI)  │     │  (Cerebro Database)  │
   │    + Postgres DB     │     │                      │
   └──────────────────────┘     └──────────────────────┘

   ┌───────────────────────────────────────────────────┐
   │             Ingress Nginx Controller              │
   │     (Direct browser access - no port-forward)     │
   └───────────────────────────────────────────────────┘
```
ELK Stack Pipeline:

```
Pod Logs → Filebeat (DaemonSet) → Logstash → Elasticsearch → Kibana
                                                   ↓
                                                Grafana
```

Loki Stack Pipeline:

```
Pod Logs → Promtail (DaemonSet) → Loki → Grafana
```
- Filebeat: DaemonSet running on every node, collects pod logs and sends to Logstash
- Logstash: Processes logs and sends them to Elasticsearch
- Elasticsearch: Indexes and stores logs
- Kibana: Visualizes logs and enables search
- Promtail: DaemonSet running on every node, collects pod logs and sends to Loki
- Loki: Aggregates logs from Promtail
- Grafana: Creates dashboards from Elasticsearch and Loki data
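The Filebeat leg of this pipeline is driven by the config in `filebeat-configmap.yaml`. The sketch below shows the general shape of such a config — the input path and Logstash port are typical defaults, not necessarily this repo's exact values:

```yaml
# Illustrative Filebeat config (the real one lives in k8s/elk/filebeat/filebeat-configmap.yaml)
filebeat.inputs:
  - type: container              # read container logs from the node
    paths:
      - /var/log/containers/*.log
output.logstash:
  hosts: ["logstash:5044"]       # ship events to the Logstash Service
```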
```
API (Publisher)    → RabbitMQ → Worker (Consumer)
CronJob (Producer) → RabbitMQ → Worker (Consumer)
```
- API: Publishes messages to RabbitMQ via MassTransit
- CronJob: Produces scheduled messages to RabbitMQ via MassTransit
- RabbitMQ: Message broker (auth-free, guest/guest)
- Worker: Consumes messages from RabbitMQ via MassTransit and logs them
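The MassTransit wiring on the consumer side looks roughly like the following sketch. The message type and consumer name are illustrative assumptions, not the project's actual contracts; only the broker host and guest/guest credentials come from this setup:

```csharp
using MassTransit;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Register MassTransit against the auth-free RabbitMQ broker in this cluster
builder.Services.AddMassTransit(x =>
{
    x.AddConsumer<DemoMessageConsumer>();
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("rabbitmq", "/", h =>
        {
            h.Username("guest");
            h.Password("guest");
        });
        cfg.ConfigureEndpoints(context); // auto-creates a queue per consumer
    });
});

await builder.Build().RunAsync();

// Hypothetical contract and consumer; the real ones live in the apps/ folder
public record DemoMessage(string Text);

public class DemoMessageConsumer : IConsumer<DemoMessage>
{
    public Task Consume(ConsumeContext<DemoMessage> context)
    {
        Console.WriteLine($"Consumed: {context.Message.Text}");
        return Task.CompletedTask;
    }
}
```

On the publish side, the API would inject `IPublishEndpoint` and call `await publishEndpoint.Publish(new DemoMessage("hello"));`.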
Just 3 steps:

1. Install prerequisites:

   ```bash
   brew install kind kubectl make
   ```

2. Start Docker (Docker Desktop)

3. Run everything with a single command:

   ```bash
   make
   ```

That's it! 🎉

💡 Note: The first run may take 5-10 minutes while Docker images are downloaded.
- Docker Desktop (or Docker Engine)
- kind (Kubernetes in Docker)
- kubectl
- Make (optional, for convenience)
```bash
# Install via Homebrew (macOS)
brew install kind kubectl make
```

```bash
# Install kind (Linux)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Install kubectl (Linux)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```

Just run this command; everything is automated:

```bash
make
```

or

```bash
make all
```

This command automatically:
- ✅ Creates the Kubernetes cluster (kind)
- ✅ Deploys metrics-server
- ✅ Builds Docker images (api, worker, cronjob)
- ✅ Loads the images into the kind cluster
- ✅ Deploys all components (ELK, Grafana, Loki, RabbitMQ, Postgres, Cerebro, C# apps)
- ✅ Deploys Ingress for direct browser access
- ✅ Runs self-validation
Total time: ~5-10 minutes (first run downloads images)
If you want to run step by step:

```bash
make setup
make build
make load-images
make deploy
make validate
```

Services are accessible directly via Ingress:
- 📊 Kibana: http://kibana.localhost
- 📈 Grafana: http://grafana.localhost
- 🔍 Elasticsearch: http://elasticsearch.localhost
- 🔧 Cerebro: http://cerebro.localhost (Elasticsearch management UI) - Login: admin / admin
- 🐰 RabbitMQ: http://rabbitmq.localhost (guest/guest)
- 🚀 API: http://api.localhost
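Each of these hostnames is routed by a host-based Ingress rule. A sketch in the spirit of `apps/api/ingress.yaml` — the exact manifest in the repo may differ:

```yaml
# Illustrative host-based Ingress for the API (compare apps/api/ingress.yaml)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx        # served by the ingress-nginx controller
  rules:
    - host: api.localhost        # browser hostname, resolved via /etc/hosts
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api        # the API's ClusterIP Service
                port:
                  number: 80
```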
Note: If the addresses don't work, add them to /etc/hosts:

```bash
sudo sh -c 'echo "127.0.0.1 kibana.localhost grafana.localhost elasticsearch.localhost cerebro.localhost rabbitmq.localhost api.localhost" >> /etc/hosts'
```

After deployment, verify the system is working correctly:

```bash
# Comprehensive validation (all checks)
make validate

# Quick status check
make validate-quick
```

The validation script checks:
- ✅ Kubernetes Cluster - nodes, metrics-server
- ✅ ELK Stack - Elasticsearch, Kibana, Logstash, Filebeat
- ✅ Observability - Grafana (anonymous mode), Cerebro (Elasticsearch UI with Postgres), Loki, datasources
- ✅ Database - Postgres (for Cerebro REST history)
- ✅ C# Applications - API, Worker, CronJob
- ✅ RabbitMQ + MassTransit - RabbitMQ pod/service, message flow
- ✅ HPA & Metrics - HPA configuration, metrics-server
- ✅ Service Discovery - DNS, headless services
- ✅ Ports & Connectivity - all service ports
- ✅ Ingress - Ingress Controller and rules
- ✅ Log Verification - log pipeline verification
Validation ends with a detailed report:
- ✅ Successful checks
- ⚠️ Warnings (normal; pods may not be ready yet)
- ❌ Errors (issues that need fixing)
```bash
# Test API
make test-api

# Or manually:
curl http://api.localhost/
```

```bash
# Test the service discovery endpoint
make test-service-discovery

# The API accesses the worker service via DNS
curl http://api.localhost/service-discovery
```

```bash
# Trigger HPA with a CPU stress test
make test-hpa

# Or manually:
for i in {1..20}; do curl http://api.localhost/stress; done
```

Check HPA status:

```bash
kubectl get hpa api-hpa
kubectl describe hpa api-hpa
```

Important: Make sure /etc/hosts is configured (see the "Direct Browser Access" section above).
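The HPA exercised by `make test-hpa` is defined in `apps/api/hpa.yaml`. The sketch below shows what such a manifest typically looks like — the replica counts and CPU target are assumptions, not necessarily the repo's actual values:

```yaml
# Illustrative HPA for the api Deployment (compare apps/api/hpa.yaml)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 1                   # assumed bound
  maxReplicas: 5                   # assumed bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # assumed CPU target; metrics-server provides the data
```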
```bash
# 1. Publish a message from the API
curl http://api.localhost/publish

# If api.localhost doesn't work, use port-forward:
# Terminal 1: kubectl port-forward svc/api 8080:80
# Terminal 2: curl http://localhost:8080/publish

# 2. Check the Worker logs (should show the consumed message)
make logs-worker

# 3. Check the RabbitMQ Management UI
# Open http://rabbitmq.localhost (guest/guest)
# Navigate to the Queues tab to see message flow
```

1. Access Kibana: http://kibana.localhost
2. Go to Discover in the left menu
3. If this is your first time, create an index pattern:
   - Index pattern: `logs-*`
   - Time field: `@timestamp`
4. View logs!
1. Access Grafana: http://grafana.localhost (anonymous mode, direct access)
2. Go to Explore in the left menu
3. Select a data source:
   - Elasticsearch - for ELK Stack logs
   - Loki - for Loki Stack logs
4. Run log queries and create visualizations
1. Access Cerebro: http://cerebro.localhost
2. Log in with credentials: admin / admin
3. View Elasticsearch cluster status and indices, and perform management tasks
   - Cerebro uses PostgreSQL for REST history storage
The API application accesses the worker service via DNS:

```csharp
// Inside the API
var response = await httpClient.GetAsync("http://worker.default.svc.cluster.local");
```

Kubernetes DNS format: `http://<service-name>.<namespace>.svc.cluster.local`

Short format: `http://worker` (within the same namespace)

Test it:

```bash
# Call the API's service-discovery endpoint
curl http://api.localhost/service-discovery
```
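The DNS pattern can be composed mechanically. This small shell snippet just assembles the FQDN from its parts (no cluster required; the service and namespace names match this project's defaults):

```shell
# Build the in-cluster DNS name for a Service from its parts
service=worker
namespace=default
fqdn="${service}.${namespace}.svc.cluster.local"
echo "http://${fqdn}"
# → http://worker.default.svc.cluster.local
```

Inside the cluster, the resolver's search domains also make the short forms `worker` and `worker.default` resolve to the same Service.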
```
.
├── k8s/
│   ├── cluster/
│   │   ├── kind-config.yaml            # kind cluster config
│   │   ├── metric-server.yaml          # metrics-server deployment
│   │   └── metrics-server-apiservice.yaml
│   ├── elk/
│   │   ├── elasticsearch/
│   │   │   ├── elasticsearch-deployment.yaml
│   │   │   ├── elasticsearch-service.yaml
│   │   │   └── ingress.yaml
│   │   ├── kibana/
│   │   │   ├── kibana-deployment.yaml
│   │   │   ├── kibana-service.yaml
│   │   │   └── ingress.yaml
│   │   ├── logstash/
│   │   │   ├── logstash-deployment.yaml
│   │   │   ├── logstash-service.yaml
│   │   │   └── logstash-configmap.yaml
│   │   └── filebeat/
│   │       ├── filebeat-daemonset.yaml
│   │       ├── filebeat-configmap.yaml
│   │       └── filebeat-rbac.yaml
│   ├── observability/
│   │   ├── grafana/
│   │   │   ├── grafana-deployment.yaml
│   │   │   ├── grafana-service.yaml
│   │   │   ├── datasources-configmap.yaml
│   │   │   └── ingress.yaml
│   │   ├── cerebro/
│   │   │   ├── cerebro-deployment.yaml
│   │   │   ├── cerebro-service.yaml
│   │   │   ├── cerebro-configmap.yaml
│   │   │   └── ingress.yaml
│   │   └── loki/
│   │       ├── loki-deployment.yaml
│   │       ├── loki-service.yaml
│   │       ├── promtail-daemonset.yaml
│   │       └── promtail-configmap.yaml
│   ├── rabbitmq/
│   │   ├── rabbitmq-deployment.yaml
│   │   ├── rabbitmq-service.yaml
│   │   └── ingress.yaml
│   └── database/
│       └── postgres/
│           ├── postgres-deployment.yaml
│           └── postgres-service.yaml
├── apps/
│   ├── api/
│   │   ├── Program.cs
│   │   ├── Dockerfile
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   ├── hpa.yaml
│   │   └── ingress.yaml
│   ├── worker/
│   │   ├── Program.cs
│   │   ├── Dockerfile
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── service-discovery-example.yaml
│   └── cronjob/
│       ├── Program.cs
│       ├── Dockerfile
│       └── cronjob.yaml
├── scripts/
│   ├── setup.sh                # Cluster setup script
│   ├── deploy.sh               # Deployment script
│   ├── validate.sh             # Self-validation script
│   └── create-dashboards.sh    # Dashboard creation script
├── k8s/
│   └── dashboards/
│       ├── grafana/
│       │   └── dashboard.json
│       └── kibana/
│           └── dashboard.json
├── Makefile                    # Make commands
└── README.md                   # This file
```
With this project, you can learn:

- Kubernetes Fundamentals
  - Deployment and ReplicaSet
  - Service (ClusterIP, Headless)
  - ConfigMap and Volume
  - DaemonSet
  - CronJob and Job
  - HorizontalPodAutoscaler (HPA)
  - Ingress and Ingress Controller
- Service Discovery
  - Kubernetes DNS
  - Service name resolution
  - Cross-service communication
- Log Management
  - ELK Stack architecture
  - Log aggregation pipeline
  - Filebeat log collection
  - Logstash log processing
  - Elasticsearch indexing
  - Kibana visualization
- Observability
  - Grafana monitoring (anonymous mode, dashboards)
  - Cerebro (Elasticsearch management UI with PostgreSQL database)
  - Loki log aggregation (Grafana data source)
  - Promtail log collection (DaemonSet)
- Event-Driven Architecture
  - RabbitMQ message broker (auth-free, guest/guest)
  - MassTransit integration (.NET message bus)
  - Publisher/Consumer pattern
  - Scheduled message production (CronJob)
- Database
  - PostgreSQL (for Cerebro REST history)
  - JDBC driver integration
  - Init container for driver download
- Container Orchestration
  - Docker image building
  - kind local cluster
  - Resource limits and requests
  - Pod scheduling
  - Horizontal Pod Autoscaler (HPA)
  - Metrics Server
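Several of these topics map directly to small manifests. As one example, a headless Service (mentioned under Service Discovery) is just a Service with `clusterIP: None`, which makes DNS return the individual pod IPs instead of a single virtual IP. The name and port below are illustrative, not taken from this repo:

```yaml
# Illustrative headless Service: DNS for it resolves to pod IPs, not a ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: worker-headless   # hypothetical name
spec:
  clusterIP: None         # this is what makes the Service "headless"
  selector:
    app: worker
  ports:
    - port: 80
```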
```bash
# Delete the cluster
make clean

# Or reset the cluster (no confirmation)
make reset

# Or manually:
kind delete cluster --name k8s-learning
```

This project is for learning purposes only; the following security measures have been intentionally omitted:
- ❌ No authentication
- ❌ No authorization (RBAC)
- ❌ No TLS/SSL encryption
- ❌ No secret management
- ❌ No network policies
- ❌ No pod security policies
- ❌ No resource quotas
DO NOT use in production!
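For contrast, here is the kind of manifest this project deliberately leaves out — a hypothetical default-deny NetworkPolicy. Applying it would block the open pod-to-pod wiring above, which is exactly why it is omitted here:

```yaml
# Hypothetical default-deny policy (NOT part of this project)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress            # with no ingress rules listed, all inbound traffic is denied
```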
```bash
# Check pod status
kubectl get pods

# View pod details
kubectl describe pod <pod-name>

# Check logs
kubectl logs <pod-name>
```

```bash
# Check Elasticsearch status
kubectl get pods -l app=elasticsearch
kubectl logs -l app=elasticsearch

# Elasticsearch health check
curl http://elasticsearch.localhost/_cluster/health
```

```bash
# Check metrics-server status
kubectl get deployment metrics-server -n kube-system
kubectl logs -l k8s-app=metrics-server -n kube-system

# Check HPA status
kubectl get hpa
kubectl describe hpa api-hpa
```

```bash
# DNS test
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup worker

# Check services
kubectl get svc
kubectl get endpoints
```

```bash
# Check the RabbitMQ pod
kubectl get pods -l app=rabbitmq
kubectl logs -l app=rabbitmq

# Check the RabbitMQ service
kubectl get svc rabbitmq

# Test the connection from the API pod
kubectl exec -it deployment/api -- curl http://rabbitmq:5672
```

Cerebro requires a PostgreSQL database for its REST history functionality. The deployment includes:
1. Postgres Deployment - deployed automatically before Cerebro

   ```bash
   kubectl get pods -l app=postgres
   kubectl get svc postgres
   ```

2. Cerebro Configuration - configured to connect to Postgres:
   - Database: `cerebro`
   - User: `cerebro`
   - Password: `cerebro-password`
   - Connection: `jdbc:postgresql://postgres:5432/cerebro`

3. Postgres JDBC Driver - downloaded automatically via an init container
Check Cerebro status:

```bash
kubectl get pods -l app=cerebro
kubectl logs -l app=cerebro
```

Access Cerebro: http://cerebro.localhost

Login credentials:
- Username: `admin`
- Password: `admin`

If Cerebro fails to start, check that Postgres is running:

```bash
kubectl get pods -l app=postgres
kubectl logs -l app=postgres
```

```bash
# Check the Ingress Controller
kubectl get pods -n ingress-nginx
kubectl get ingress

# Check /etc/hosts
grep localhost /etc/hosts
```

- Kubernetes Documentation
- ELK Stack Documentation
- Grafana Documentation
- Loki Documentation
- Promtail Documentation
- RabbitMQ Documentation
- MassTransit Documentation
- PostgreSQL Documentation
- Cerebro GitHub
- kind Documentation
- .NET Documentation
This project is licensed under the MIT License - see the LICENSE file for details.
Disclaimer: This project is for educational purposes only. No security measures have been implemented. DO NOT use in production environments.
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
Happy Learning! 🎉

Made with ❤️ for Kubernetes learners