Complete Kubernetes tutorial with ELK Stack, Grafana, Loki, Promtail, RabbitMQ, PostgreSQL, Cerebro, and C# applications using MassTransit. Learn Kubernetes, observability, event-driven architectures, and microservices.

hgulbicim/kubernetes-elk-grafana-cerebro-rabbitmq-postgres-tutorial

Kubernetes Learning Environment - ELK Stack + Grafana + RabbitMQ + MassTransit

⚠️ IMPORTANT: This project is for learning purposes only. No security measures have been implemented. DO NOT use in production environments.

License: MIT | Kubernetes | .NET

A comprehensive Kubernetes learning environment featuring ELK Stack (Elasticsearch, Logstash, Kibana, Filebeat), Grafana, Loki, Promtail, RabbitMQ, PostgreSQL, Cerebro, and C# applications with MassTransit for event-driven workflows. Everything is auth-free, password-free, and fully open for educational purposes.

🎯 Purpose

This project demonstrates a minimal Kubernetes environment with ELK Stack (Elasticsearch, Logstash, Kibana, Filebeat), Grafana, Loki, Promtail, RabbitMQ, PostgreSQL, Cerebro, and C# applications using MassTransit for an event-driven workflow. Everything is auth-free, password-free, and fully open. This is NOT production-ready.

πŸ—οΈ Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                    Kubernetes Cluster (kind)                        │
│                                                                     │
│  ┌────────────────┐  ┌───────────────┐  ┌───────────────┐           │
│  │   API App      │  │   Worker      │  │   CronJob     │           │
│  │  (Deployment)  │  │  (Deployment) │  │  (CronJob)    │           │
│  │  + HPA         │  │               │  │               │           │
│  │  + MassTransit │  │  +MassTransit │  │  +MassTransit │           │
│  │    Publisher   │  │   Consumer    │  │   Producer    │           │
│  └───────┬────────┘  └──────┬────────┘  └──────┬────────┘           │
│          │                  │                  │                    │
│          └──────────────────┼──────────────────┘                    │
│                             │                                       │
│  ┌──────────────────────────┴─────────────────────────────┐         │
│  │                 RabbitMQ (Deployment)                  │         │
│  │              (Message Broker - Auth-free)              │         │
│  └────────────────────────────────────────────────────────┘         │
│                                                                     │
│  ┌────────────────────────────────────────────────────────┐         │
│  │                  Filebeat (DaemonSet)                  │         │
│  │                  (Collects pod logs)                   │         │
│  └──────────────────────────┬─────────────────────────────┘         │
│                             │                                       │
│  ┌──────────────────────────┴─────────────────────────────┐         │
│  │                 Logstash (Deployment)                  │         │
│  │          (Log processing and transformation)           │         │
│  └──────────────────────────┬─────────────────────────────┘         │
│                             │                                       │
│  ┌──────────────────────────┴─────────────────────────────┐         │
│  │               Elasticsearch (Deployment)               │         │
│  │               (Log storage and indexing)               │         │
│  └──────────────────────────┬─────────────────────────────┘         │
│                             │                                       │
│  ┌──────────────────────────┴─────────────────────────────┐         │
│  │                  Kibana (Deployment)                   │         │
│  │                  (Log visualization)                   │         │
│  └────────────────────────────────────────────────────────┘         │
│                                                                     │
│  ┌────────────────────────┐  ┌────────────────────────┐             │
│  │        Grafana         │  │          Loki          │             │
│  │    (Anonymous mode)    │  │    (Log aggregation)   │             │
│  │    + Promtail          │  │    + Promtail          │             │
│  └────────────────────────┘  └────────────────────────┘             │
│                                                                     │
│  ┌────────────────────────┐  ┌────────────────────────┐             │
│  │        Cerebro         │  │        Postgres        │             │
│  │   (ES Management UI)   │  │   (Cerebro Database)   │             │
│  │    + Postgres DB       │  │                        │             │
│  └────────────────────────┘  └────────────────────────┘             │
│                                                                     │
│  ┌───────────────────────────────────────────────────────────────┐  │
│  │         Ingress Nginx Controller                              │  │
│  │         (Direct browser access - no port-forward)             │  │
│  └───────────────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────────────┘

📦 Log Pipeline Flow

ELK Stack Pipeline:
Pod Logs → Filebeat (DaemonSet) → Logstash → Elasticsearch → Kibana
                                                   ↓
                                                Grafana

Loki Stack Pipeline:
Pod Logs → Promtail (DaemonSet) → Loki → Grafana
  1. Filebeat: DaemonSet running on every node; collects pod logs and sends them to Logstash
  2. Logstash: Processes logs and forwards them to Elasticsearch
  3. Elasticsearch: Indexes and stores logs
  4. Kibana: Visualizes logs and enables search
  5. Promtail: DaemonSet running on every node; collects pod logs and sends them to Loki
  6. Loki: Aggregates logs from Promtail
  7. Grafana: Builds dashboards from Elasticsearch and Loki data
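To make the collection step concrete, a Filebeat configuration along the following lines tails container logs on each node, enriches them with Kubernetes metadata, and ships them to Logstash. This is a hedged sketch, not the repo's actual `filebeat-configmap.yaml`; the paths, the `NODE_NAME` variable, and the `logstash:5044` endpoint are assumptions based on common Filebeat-on-Kubernetes setups:

```yaml
# Hypothetical filebeat.yml sketch (the repo's filebeat-configmap.yaml may differ).
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # Attach pod/namespace/label metadata to each log event.
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

# Ship to Logstash over the beats protocol; "logstash" resolves via
# Kubernetes DNS to the Logstash Service in the same namespace.
output.logstash:
  hosts: ["logstash:5044"]
```

Promtail plays the equivalent role in the Loki pipeline, with its own scrape config instead of `filebeat.inputs`.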

📨 Event-Driven Message Flow

API (Publisher) → RabbitMQ → Worker (Consumer)
CronJob (Producer) → RabbitMQ → Worker (Consumer)
  1. API: Publishes messages to RabbitMQ via MassTransit
  2. CronJob: Produces scheduled messages to RabbitMQ via MassTransit
  3. RabbitMQ: Message broker (no auth configured; default guest/guest credentials)
  4. Worker: Consumes messages from RabbitMQ via MassTransit and logs them
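The broker in the middle of this flow is an ordinary Deployment plus Service. The sketch below is illustrative, not the repo's actual `rabbitmq-deployment.yaml` / `rabbitmq-service.yaml`; the image tag and ports are assumptions based on RabbitMQ defaults (5672 for AMQP, 15672 for the management UI):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management   # tag includes the management UI
          ports:
            - containerPort: 5672        # AMQP (MassTransit clients)
            - containerPort: 15672       # management UI
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq          # MassTransit apps connect to "rabbitmq" via cluster DNS
spec:
  selector:
    app: rabbitmq
  ports:
    - name: amqp
      port: 5672
    - name: management
      port: 15672
```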

⚡ Quick Start

Just 3 steps:

  1. Install prerequisites:

    brew install kind kubectl make
  2. Start Docker (Docker Desktop)

  3. Run everything with a single command:

    make

That's it! 🎉

💡 Note: The first run may take 5-10 minutes as Docker images are downloaded.


🚀 Installation

Prerequisites

  • Docker Desktop (or Docker Engine)
  • kind (Kubernetes in Docker)
  • kubectl
  • Make (optional, for convenience)

macOS Installation

# Install via Homebrew
brew install kind kubectl make

Linux Installation

# Install kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl

πŸ“ Usage

🚀 Single Command Deployment (RECOMMENDED)

Just run this command, everything will be automated:

make

or

make all

This command automatically:

  1. ✅ Creates Kubernetes cluster (kind)
  2. ✅ Deploys metrics-server
  3. ✅ Builds Docker images (api, worker, cronjob)
  4. ✅ Loads images into kind cluster
  5. ✅ Deploys all components (ELK, Grafana, Loki, RabbitMQ, Postgres, Cerebro, C# apps)
  6. ✅ Deploys Ingress for direct browser access
  7. ✅ Runs self-validation

Total time: ~5-10 minutes (first run downloads images)


Manual Steps (If Desired)

If you want to run step by step:

1. Cluster Setup

make setup
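`make setup` creates the kind cluster. A kind config along the lines below is what makes "direct browser access" possible later: it maps host ports 80/443 into the node where the ingress controller listens. This is a hedged sketch, not necessarily the repo's `k8s/cluster/kind-config.yaml`; the cluster name comes from the cleanup section, everything else is an assumption:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: k8s-learning
nodes:
  - role: control-plane
    # Expose 80/443 on the host so Ingress works without port-forward.
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
```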

2. Build Docker Images

make build

3. Load Images into Kind Cluster

make load-images

4. Deploy All Components

make deploy

5. System Validation

make validate

Direct Browser Access (No Port-Forward Needed!)

Services are accessible directly via Ingress:

📊 Kibana:        http://kibana.localhost
📈 Grafana:       http://grafana.localhost
🔍 Elasticsearch: http://elasticsearch.localhost
🧠 Cerebro:       http://cerebro.localhost (Elasticsearch management UI)
   Login: admin / admin
🐰 RabbitMQ:      http://rabbitmq.localhost (guest/guest)
🚀 API:           http://api.localhost

Note: If addresses don't work, add to /etc/hosts:

sudo sh -c 'echo "127.0.0.1 kibana.localhost grafana.localhost elasticsearch.localhost cerebro.localhost rabbitmq.localhost api.localhost" >> /etc/hosts'
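Each hostname above is routed by a host-based Ingress rule. A sketch for Kibana might look like the following; the repo's `k8s/elk/kibana/ingress.yaml` may differ, and the service name and port 5601 are assumptions based on Kibana's default:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
spec:
  ingressClassName: nginx        # handled by the ingress-nginx controller
  rules:
    - host: kibana.localhost     # host-based routing, one rule per service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana
                port:
                  number: 5601   # Kibana's default HTTP port
```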

πŸ” Self-Validation

After deployment, verify the system is working correctly:

# Comprehensive validation (all checks)
make validate

# Quick status check
make validate-quick

The validation script checks:

  1. ✅ Kubernetes Cluster - Nodes, metrics-server
  2. ✅ ELK Stack - Elasticsearch, Kibana, Logstash, Filebeat
  3. ✅ Observability - Grafana (anonymous mode), Cerebro (Elasticsearch UI with Postgres), Loki, datasources
  4. ✅ Database - Postgres (for Cerebro REST history)
  5. ✅ C# Applications - API, Worker, CronJob
  6. ✅ RabbitMQ + MassTransit - RabbitMQ pod/service, message flow
  7. ✅ HPA & Metrics - HPA configuration, metrics-server
  8. ✅ Service Discovery - DNS, headless services
  9. ✅ Ports & Connectivity - All service ports
  10. ✅ Ingress - Ingress Controller and rules
  11. ✅ Log Verification - Log pipeline verification

Validation ends with a detailed report:

  • ✅ Successful checks
  • ⚠️ Warnings (normal, pods may not be ready yet)
  • ❌ Errors (issues that need fixing)

🧪 Testing

API Test

# Test API
make test-api

# Or manually:
curl http://api.localhost/

Service Discovery Test

# Test service discovery endpoint
make test-service-discovery

# API accesses worker service via DNS
curl http://api.localhost/service-discovery

HPA Test

# Trigger HPA with CPU stress test
make test-hpa

# Or manually:
for i in {1..20}; do curl http://api.localhost/stress; done

Check HPA status:

kubectl get hpa api-hpa
kubectl describe hpa api-hpa
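The `api-hpa` object these commands inspect is a standard `autoscaling/v2` manifest. The sketch below is illustrative; the replica bounds and CPU target are assumptions, and the repo's `apps/api/hpa.yaml` may differ:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out above 50% average CPU
```

Note that HPA only works when metrics-server is healthy, which is why the troubleshooting section checks it first.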

MassTransit Message Flow Test

Important: Make sure /etc/hosts is configured (see "Direct Browser Access" section above).

# 1. Publish message from API
curl http://api.localhost/publish

# If api.localhost doesn't work, use port-forward:
# Terminal 1: kubectl port-forward svc/api 8080:80
# Terminal 2: curl http://localhost:8080/publish

# 2. Check Worker logs (should show consumed message)
make logs-worker

# 3. Check RabbitMQ Management UI
# Open http://rabbitmq.localhost (guest/guest)
# Navigate to Queues tab to see message flow

📊 Viewing Logs in Kibana

  1. Access Kibana: http://kibana.localhost
  2. Go to Discover from left menu
  3. If first time, create index pattern:
    • Index pattern: logs-*
    • Time field: @timestamp
  4. View logs!

📈 Creating Dashboards in Grafana

  1. Access Grafana: http://grafana.localhost (anonymous mode, direct access)
  2. Go to Explore from left menu
  3. Select data source:
    • Elasticsearch - For ELK Stack logs
    • Loki - For Loki Stack logs
  4. Run log queries and create visualizations
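The two data sources appear automatically because Grafana supports file-based provisioning. A sketch of such a provisioning file is below; this is hypothetical and the repo's `datasources-configmap.yaml` may differ (URLs use default ports, and the Elasticsearch index settings mirror the Kibana index pattern above):

```yaml
# Hypothetical Grafana datasource provisioning sketch.
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100            # Loki Service, default port
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200   # Elasticsearch Service, default port
    jsonData:
      index: "logs-*"
      timeField: "@timestamp"
```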

🧠 Managing Elasticsearch with Cerebro

  1. Access Cerebro: http://cerebro.localhost
  2. Login with credentials: admin / admin
  3. View Elasticsearch cluster status, indices, and perform management tasks
  4. Cerebro uses PostgreSQL for REST history storage

πŸ” Service Discovery Example

The API application accesses the worker service via DNS:

// Inside API
var response = await httpClient.GetAsync("http://worker.default.svc.cluster.local");

Kubernetes DNS format:

  • http://<service-name>.<namespace>.svc.cluster.local
  • Short format: http://worker (within same namespace)

Test it:

# Call API's service-discovery endpoint
curl http://api.localhost/service-discovery
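The worker side of this lookup is an ordinary Service selecting the worker pods; a headless variant (`clusterIP: None`) makes DNS return the pod IPs directly, which is what the validation's "headless services" check refers to. A sketch (the ports are assumptions, and the repo's `apps/worker/service.yaml` may differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: worker           # resolvable as worker.default.svc.cluster.local
spec:
  clusterIP: None        # headless: DNS A records point at pod IPs directly
  selector:
    app: worker          # must match the worker Deployment's pod labels
  ports:
    - port: 80           # port clients use
      targetPort: 8080   # port the worker container listens on
```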

πŸ“ Project Structure

.
├── k8s/
│   ├── cluster/
│   │   ├── kind-config.yaml          # kind cluster config
│   │   ├── metric-server.yaml        # metrics-server deployment
│   │   └── metrics-server-apiservice.yaml
│   ├── elk/
│   │   ├── elasticsearch/
│   │   │   ├── elasticsearch-deployment.yaml
│   │   │   ├── elasticsearch-service.yaml
│   │   │   └── ingress.yaml
│   │   ├── kibana/
│   │   │   ├── kibana-deployment.yaml
│   │   │   ├── kibana-service.yaml
│   │   │   └── ingress.yaml
│   │   ├── logstash/
│   │   │   ├── logstash-deployment.yaml
│   │   │   ├── logstash-service.yaml
│   │   │   └── logstash-configmap.yaml
│   │   └── filebeat/
│   │       ├── filebeat-daemonset.yaml
│   │       ├── filebeat-configmap.yaml
│   │       └── filebeat-rbac.yaml
│   ├── observability/
│   │   ├── grafana/
│   │   │   ├── grafana-deployment.yaml
│   │   │   ├── grafana-service.yaml
│   │   │   ├── datasources-configmap.yaml
│   │   │   └── ingress.yaml
│   │   ├── cerebro/
│   │   │   ├── cerebro-deployment.yaml
│   │   │   ├── cerebro-service.yaml
│   │   │   ├── cerebro-configmap.yaml
│   │   │   └── ingress.yaml
│   │   └── loki/
│   │       ├── loki-deployment.yaml
│   │       ├── loki-service.yaml
│   │       ├── promtail-daemonset.yaml
│   │       └── promtail-configmap.yaml
│   ├── rabbitmq/
│   │   ├── rabbitmq-deployment.yaml
│   │   ├── rabbitmq-service.yaml
│   │   └── ingress.yaml
│   ├── database/
│   │   └── postgres/
│   │       ├── postgres-deployment.yaml
│   │       └── postgres-service.yaml
│   └── dashboards/
│       ├── grafana/
│       │   └── dashboard.json
│       └── kibana/
│           └── dashboard.json
├── apps/
│   ├── api/
│   │   ├── Program.cs
│   │   ├── Dockerfile
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   ├── hpa.yaml
│   │   └── ingress.yaml
│   ├── worker/
│   │   ├── Program.cs
│   │   ├── Dockerfile
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── service-discovery-example.yaml
│   └── cronjob/
│       ├── Program.cs
│       ├── Dockerfile
│       └── cronjob.yaml
├── scripts/
│   ├── setup.sh                      # Cluster setup script
│   ├── deploy.sh                     # Deployment script
│   ├── validate.sh                   # Self-validation script
│   └── create-dashboards.sh          # Dashboard creation script
├── Makefile                          # Make commands
└── README.md                         # This file

🎯 Learning Objectives

With this project, you can learn:

  1. Kubernetes Fundamentals

    • Deployment and ReplicaSet
    • Service (ClusterIP, Headless)
    • ConfigMap and Volume
    • DaemonSet
    • CronJob and Job
    • HorizontalPodAutoscaler (HPA)
    • Ingress and Ingress Controller
  2. Service Discovery

    • Kubernetes DNS
    • Service name resolution
    • Cross-service communication
  3. Log Management

    • ELK Stack architecture
    • Log aggregation pipeline
    • Filebeat log collection
    • Logstash log processing
    • Elasticsearch indexing
    • Kibana visualization
  4. Observability

    • Grafana monitoring (anonymous mode, dashboards)
    • Cerebro (Elasticsearch management UI with PostgreSQL database)
    • Loki log aggregation (Grafana data source)
    • Promtail log collection (DaemonSet)
  5. Event-Driven Architecture

    • RabbitMQ message broker (auth-free, guest/guest)
    • MassTransit integration (.NET message bus)
    • Publisher/Consumer pattern
    • Scheduled message production (CronJob)
  6. Database

    • PostgreSQL (for Cerebro REST history)
    • JDBC driver integration
    • Init container for driver download
  7. Container Orchestration

    • Docker image building
    • kind local cluster
    • Resource limits and requests
    • Pod scheduling
    • Horizontal Pod Autoscaler (HPA)
    • Metrics Server

🧹 Cleanup

# Delete cluster
make clean

# Or reset cluster (no confirmation)
make reset

# Or manually:
kind delete cluster --name k8s-learning

⚠️ Security Warnings

This project is for learning purposes only and the following security measures have been intentionally omitted:

  • ❌ No authentication
  • ❌ No authorization (RBAC)
  • ❌ No TLS/SSL encryption
  • ❌ No secret management
  • ❌ No network policies
  • ❌ No pod security policies
  • ❌ No resource quotas

DO NOT use in production!

πŸ› Troubleshooting

Pods not starting

# Check pod status
kubectl get pods

# View pod details
kubectl describe pod <pod-name>

# Check logs
kubectl logs <pod-name>

Elasticsearch not ready

# Check Elasticsearch status
kubectl get pods -l app=elasticsearch
kubectl logs -l app=elasticsearch

# Elasticsearch health check
curl http://elasticsearch.localhost/_cluster/health

HPA not working

# Check metrics-server status
kubectl get deployment metrics-server -n kube-system
kubectl logs -l k8s-app=metrics-server -n kube-system

# Check HPA status
kubectl get hpa
kubectl describe hpa api-hpa

Service discovery not working

# DNS test
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup worker

# Check services
kubectl get svc
kubectl get endpoints

RabbitMQ connection issues

# Check RabbitMQ pod
kubectl get pods -l app=rabbitmq
kubectl logs -l app=rabbitmq

# Check RabbitMQ service
kubectl get svc rabbitmq

# Test connectivity from the API pod (5672 speaks AMQP, not HTTP, so curl
# reports a protocol error even when the TCP connection succeeds)
kubectl exec -it deployment/api -- curl http://rabbitmq:5672

Cerebro database dependency

Cerebro requires a PostgreSQL database for REST history functionality. The deployment includes:

  1. Postgres Deployment - Automatically deployed before Cerebro

    kubectl get pods -l app=postgres
    kubectl get svc postgres
  2. Cerebro Configuration - Configured to connect to Postgres:

    • Database: cerebro
    • User: cerebro
    • Password: cerebro-password
    • Connection: jdbc:postgresql://postgres:5432/cerebro
  3. Postgres JDBC Driver - Automatically downloaded via init container
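The init-container step can be sketched as below. This is hypothetical: the container name, image, driver version, download URL, and mount path are all assumptions, and the repo's `cerebro-deployment.yaml` may differ:

```yaml
# Fragment of a pod spec: fetch the JDBC driver into a shared volume
# before the main Cerebro container starts.
initContainers:
  - name: fetch-jdbc-driver
    image: curlimages/curl
    command: ["sh", "-c"]
    args:
      - >
        curl -fsSL -o /drivers/postgresql.jar
        https://jdbc.postgresql.org/download/postgresql-42.7.3.jar
    volumeMounts:
      - name: drivers          # emptyDir shared with the Cerebro container
        mountPath: /drivers
```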

Check Cerebro status:

kubectl get pods -l app=cerebro
kubectl logs -l app=cerebro

Access Cerebro:

http://cerebro.localhost

Login Credentials:

  • Username: admin
  • Password: admin

If Cerebro fails to start, check Postgres is running:

kubectl get pods -l app=postgres
kubectl logs -l app=postgres

Ingress not working

# Check Ingress Controller
kubectl get pods -n ingress-nginx
kubectl get ingress

# Check /etc/hosts
cat /etc/hosts | grep localhost

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer: This project is for educational purposes only. No security measures have been implemented. DO NOT use in production environments.


🤝 Contributing

Contributions, issues, and feature requests are welcome! Feel free to check the issues page.


Happy Learning! 🚀

Made with ❤️ for Kubernetes learners
