From 7eb8a92e702450a1d74091fa84daa3a4d7b76124 Mon Sep 17 00:00:00 2001
From: Dan Rusei
Date: Wed, 28 Aug 2024 10:53:41 +0300
Subject: [PATCH] document the installation of Danube on the local k8s cluster

---
 .../Danube_local_machine_k8s.md            | 187 ++++++++++++++++++
 docs/getting_started/Danube_on_k8s_helm.md |  89 +++++----
 docs/getting_started/Danube_on_vms.md      |  14 +-
 mkdocs.yml                                 |   1 +
 4 files changed, 239 insertions(+), 52 deletions(-)
 create mode 100644 docs/getting_started/Danube_local_machine_k8s.md

diff --git a/docs/getting_started/Danube_local_machine_k8s.md b/docs/getting_started/Danube_local_machine_k8s.md
new file mode 100644
index 0000000..e3784c8
--- /dev/null
+++ b/docs/getting_started/Danube_local_machine_k8s.md
@@ -0,0 +1,187 @@

# Set up Danube on a local Kubernetes cluster

This page covers the installation of a Danube cluster on Kubernetes running on your local machine.

## Create the cluster with [kind](https://kind.sigs.k8s.io/)

[Kind](https://github.com/kubernetes-sigs/kind) is a tool for running local Kubernetes clusters using Docker container "nodes".

```bash
kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.30.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind
```

## Install the NGINX Ingress Controller

You can install the NGINX Ingress Controller using the official Helm chart:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```

Expose the NGINX Ingress Controller through a NodePort service so that traffic from the local machine (outside the cluster) can reach the Ingress controller:

```bash
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.service.type=NodePort
```

You can find out which ports were assigned by running:

```bash
kubectl get svc

NAME                                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes                                         ClusterIP   10.96.0.1       <none>        443/TCP                      4m17s
nginx-ingress-ingress-nginx-controller             NodePort    10.96.245.118   <none>        80:30115/TCP,443:30294/TCP   2m58s
nginx-ingress-ingress-nginx-controller-admission   ClusterIP   10.96.169.82    <none>        443/TCP                      2m58s
```

Since NGINX is exposed as a NodePort service (usually done for testing), you need the local HTTP port, in this case **30115**, to pass to the danube_helm installation.

## Install Danube Pub/Sub

First, add the repository to your Helm client:

```sh
helm repo add danube https://danrusei.github.io/danube_helm
helm repo update
```

You can install the chart with the release name `my-danube-cluster` using the following command:

```sh
helm install my-danube-cluster danube/danube-helm-chart --set broker.service.advertisedPort=30115
```

The `advertisedPort` allows clients to reach the brokers through the Ingress NodePort.

You can further customize the installation; check the chart's README file. This guide uses the default configuration, which deploys 3 Danube brokers.

## Resource considerations

Pay attention to resource allocation: the default configuration is only suitable for testing.

For a production environment you may want to increase the requests and limits, as outlined below.
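For example, a minimal sketch of raising the broker resources at install time. The `broker.resources.*` keys are taken from the chart's earlier documented values and may differ in your chart version, so verify them against the chart's `values.yaml` and pick figures from the sizing guidance below.

```bash
# Illustrative only: assumes the chart exposes broker.resources.* overrides;
# check the chart's values.yaml for the exact parameter names.
helm install my-danube-cluster danube/danube-helm-chart \
  --set broker.service.advertisedPort=30115 \
  --set broker.resources.requests.cpu=500m \
  --set broker.resources.requests.memory=512Mi \
  --set broker.resources.limits.cpu=1 \
  --set broker.resources.limits.memory=1Gi
```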
### Sizing for Production

**Small to Medium Load:**

- CPU Requests: 500m to 1 CPU
- CPU Limits: 1 CPU to 2 CPUs
- Memory Requests: 512Mi to 1Gi
- Memory Limits: 1Gi to 2Gi

**Heavy Load:**

- CPU Requests: 1 CPU to 2 CPUs
- CPU Limits: 2 CPUs to 4 CPUs
- Memory Requests: 1Gi to 2Gi
- Memory Limits: 2Gi to 4Gi

## Check the installation

Make sure that the brokers, etcd, and the NGINX Ingress Controller are running properly in the cluster.

```bash
kubectl get all

NAME                                                          READY   STATUS    RESTARTS   AGE
pod/my-danube-cluster-danube-broker1-766665d6f4-qdbf6         1/1     Running   0          12s
pod/my-danube-cluster-danube-broker2-5774ff4dd6-dvx66         1/1     Running   0          12s
pod/my-danube-cluster-danube-broker3-6db6b5fccd-dkr2k         1/1     Running   0          12s
pod/my-danube-cluster-etcd-867f5b85f8-g4m9m                   1/1     Running   0          12s
pod/nginx-ingress-ingress-nginx-controller-7bc7c7776d-wqc5g   1/1     Running   0          47m

NAME                                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
service/kubernetes                                         ClusterIP   10.96.0.1       <none>        443/TCP                       48m
service/my-danube-cluster-danube-broker1                   ClusterIP   10.96.40.244    <none>        6650/TCP,50051/TCP,9040/TCP   12s
service/my-danube-cluster-danube-broker2                   ClusterIP   10.96.204.21    <none>        6650/TCP,50051/TCP,9040/TCP   12s
service/my-danube-cluster-danube-broker3                   ClusterIP   10.96.46.5      <none>        6650/TCP,50051/TCP,9040/TCP   12s
service/my-danube-cluster-etcd                             ClusterIP   10.96.232.70    <none>        2379/TCP                      12s
service/nginx-ingress-ingress-nginx-controller             NodePort    10.96.245.118   <none>        80:30115/TCP,443:30294/TCP    47m
service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP   10.96.169.82    <none>        443/TCP                       47m

NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-danube-cluster-danube-broker1          1/1     1            1           12s
deployment.apps/my-danube-cluster-danube-broker2          1/1     1            1           12s
deployment.apps/my-danube-cluster-danube-broker3          1/1     1            1           12s
deployment.apps/my-danube-cluster-etcd                    1/1     1            1           12s
deployment.apps/nginx-ingress-ingress-nginx-controller    1/1     1            1           47m

NAME                                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-danube-cluster-danube-broker1-766665d6f4         1         1         1       12s
replicaset.apps/my-danube-cluster-danube-broker2-5774ff4dd6         1         1         1       12s
replicaset.apps/my-danube-cluster-danube-broker3-6db6b5fccd         1         1         1       12s
replicaset.apps/my-danube-cluster-etcd-867f5b85f8                   1         1         1       12s
replicaset.apps/nginx-ingress-ingress-nginx-controller-7bc7c7776d   1         1         1       47m
```

Validate that the brokers have started correctly:

```bash
kubectl logs pod/my-danube-cluster-danube-broker1-766665d6f4-qdbf6

initializing metrics exporter
2024-08-28T04:30:22.969462Z INFO danube_broker: Use ETCD storage as metadata persistent store
2024-08-28T04:30:22.969598Z INFO danube_broker: Start the Danube Service
2024-08-28T04:30:22.969612Z INFO danube_broker::danube_service: Setting up the cluster MY_CLUSTER
2024-08-28T04:30:22.971978Z INFO danube_broker::danube_service::local_cache: Initial cache populated
2024-08-28T04:30:22.972013Z INFO danube_broker::danube_service: Started the Local Cache service.
2024-08-28T04:30:22.990763Z INFO danube_broker::danube_service::broker_register: Broker 14150019297734190044 registered in the cluster
2024-08-28T04:30:22.991620Z INFO danube_broker::danube_service: Namespace default already exists.
2024-08-28T04:30:22.991926Z INFO danube_broker::danube_service: Namespace system already exists.
2024-08-28T04:30:22.992480Z INFO danube_broker::danube_service: Namespace default already exists.
2024-08-28T04:30:22.992490Z INFO danube_broker::danube_service: cluster metadata setup completed
2024-08-28T04:30:22.992551Z INFO danube_broker::danube_service: Started the Broker GRPC server
2024-08-28T04:30:22.992563Z INFO danube_broker::broker_server: Server is listening on address: 0.0.0.0:6650
2024-08-28T04:30:22.992605Z INFO danube_broker::danube_service: Started the Leader Election service
2024-08-28T04:30:22.993050Z INFO danube_broker::danube_service: Started the Load Manager service.
2024-08-28T04:30:22.993143Z INFO danube_broker::danube_service: Started the Danube Admin GRPC server
2024-08-28T04:30:22.993274Z INFO danube_broker::admin: Admin is listening on address: 0.0.0.0:50051
```

## Set up communication with the cluster's Pub/Sub brokers

```bash
kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   53m   v1.30.0   172.20.0.2    <none>        Debian GNU/Linux 12 (bookworm)   5.15.0-118-generic   containerd://1.7.15
```

Use the **INTERNAL-IP** to route traffic to the broker hosts. Add the following entry to your hosts file, making sure the number and the names of the brokers match those in the Helm chart's values.yaml file.

```bash
cat /etc/hosts
172.20.0.2 broker1.example.com broker2.example.com broker3.example.com

```

## Inspect the etcd instance (optional)

If you want to connect to etcd from your local machine, use kubectl port-forward to forward the etcd port.

Port-forward the etcd service:

```bash
kubectl port-forward service/my-danube-cluster-etcd 2379:2379
```

Once port forwarding is set up, you can run etcdctl commands from your local machine:

```bash
etcdctl --endpoints=http://localhost:2379 watch --prefix /
```
diff --git a/docs/getting_started/Danube_on_k8s_helm.md b/docs/getting_started/Danube_on_k8s_helm.md
index 4f5af27..944f4f6 100644
--- a/docs/getting_started/Danube_on_k8s_helm.md
+++ b/docs/getting_started/Danube_on_k8s_helm.md
@@ -2,6 +2,8 @@
 
 The Helm chart deploys the Danube Cluster with ETCD as metadata storage in the same namespace.
 
+If you would like to set it up for testing purposes, see [Run Danube on Kubernetes on Local Machine](https://dev-state.com/danube_docs/getting_started/Danube_local_machine_k8s/).
+
 ## Prerequisites
 
 - Kubernetes 1.19+
@@ -9,7 +11,29 @@ The Helm chart deploys the Danube Cluster with ETCD as metadata storage in the s
 
 ## Installation
 
-### Add Helm Repository
+### Install the NGINX Ingress Controller
+
+This is required in order to route traffic to each broker service in the cluster. The controller configuration is already provided in the danube_helm chart; you just need to tweak the values.yaml to your needs.
+
+You can install the NGINX Ingress Controller using Helm:
+
+```bash
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
+```
+
+You can expose the NGINX Ingress Controller so that traffic from outside the cluster can reach it. In a cloud environment, enable `publishService`:
+
+```bash
+helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true
+```
+
+- The `publishService` feature enables the Ingress controller to publish information about itself (such as its external IP or hostname) in a Kubernetes Service resource.
+- This is particularly useful when you are running the Ingress controller in a cloud environment (such as AWS, GCP, or Azure) and need it to publish its external IP address to handle incoming traffic.
+
+Danube does not depend on NGINX; it can work with any Ingress controller of your choice.
+
+### Add Danube Helm Repository
 
 First, add the repository to your Helm client:
 
@@ -18,7 +42,7 @@ helm repo add danube https://danrusei.github.io/danube_helm
 helm repo update
 ```
 
-### Install the Helm Chart
+### Install the Danube Helm Chart
 
 You can install the chart with the release name `my-danube-cluster` using the following command:
 
@@ -30,40 +54,7 @@ This will deploy the Danube Broker and an ETCD instance with the default configu
 
 ## Configuration
 
-### ETCD Configuration
-
-The following table lists the configurable parameters of the ETCD chart and their default values.
-
-| Parameter                   | Description                        | Default               |
-|-----------------------------|------------------------------------|-----------------------|
-| `etcd.enabled`              | Enable or disable ETCD deployment  | `true`                |
-| `etcd.replicaCount`         | Number of ETCD instances           | `1`                   |
-| `etcd.image.repository`     | ETCD image repository              | `bitnami/etcd`        |
-| `etcd.image.tag`            | ETCD image tag                     | `latest`              |
-| `etcd.image.pullPolicy`     | ETCD image pull policy             | `IfNotPresent`        |
-| `etcd.service.type`         | ETCD service type                  | `ClusterIP`           |
-| `etcd.service.port`         | ETCD service port                  | `2379`                |
-
-### Broker Configuration
-
-The following table lists the configurable parameters of the Danube Broker chart and their default values.
-
-| Parameter                           | Description                          | Default                                |
-|-------------------------------------|--------------------------------------|----------------------------------------|
-| `broker.replicaCount`               | Number of broker instances           | `1`                                    |
-| `broker.image.repository`           | Broker image repository              | `ghcr.io/your-username/danube-broker`  |
-| `broker.image.tag`                  | Broker image tag                     | `latest`                               |
-| `broker.image.pullPolicy`           | Broker image pull policy             | `IfNotPresent`                         |
-| `broker.service.type`               | Broker service type                  | `ClusterIP`                            |
-| `broker.service.port`               | Broker service port                  | `6650`                                 |
-| `broker.resources.limits.cpu`       | CPU limit for broker container       | `500m`                                 |
-| `broker.resources.limits.memory`    | Memory limit for broker container    | `512Mi`                                |
-| `broker.resources.requests.cpu`     | CPU request for broker container     | `200m`                                 |
-| `broker.resources.requests.memory`  | Memory request for broker container  | `256Mi`                                |
-| `broker.env.RUST_LOG`               | Rust log level for broker            | `danube_broker=trace`                  |
-| `broker.brokerAddr`                 | Broker address                       | `0.0.0.0:6650`                         |
-| `broker.clusterName`                | Cluster name                         | `MY_CLUSTER`                           |
-| `broker.metaStoreAddr`              | Metadata store address               | `etcd:2379`                            |
+The Danube cluster configuration in the values.yaml file has to be adjusted to your needs.
 
 You can override the default values by providing a custom `values.yaml` file:
 
@@ -74,18 +65,27 @@ helm install my-danube-cluster danube/danube-helm-chart -f custom-values.yaml
 ```
 
 Alternatively, you can specify individual values using the `--set` flag:
 
 ```sh
-helm install my-danube-cluster danube/danube-helm-chart --set broker.replicaCount=2 --set broker.brokerAddr="0.0.0.0:6651"
+helm install my-danube-cluster danube/danube-helm-chart --set broker.service.type="ClusterIP"
 ```
 
-## Accessing the Brokers
+## Resource considerations
 
-To access the broker service, you can use port forwarding:
+The default configuration is only suitable for testing; reconsider the values for a production environment.
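+
+A quick way to sanity-check the numbers is to look at actual usage under a representative load; a minimal check, assuming the metrics-server addon is installed in the cluster:
+
+```bash
+# Requires the metrics-server addon; reports current CPU and memory usage per pod
+kubectl top pods
+```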
 
-```sh
-kubectl port-forward svc/my-danube-cluster-broker 6650:6650
-```
+### Sizing for Production
+
+**Small to Medium Load:**
+
+- CPU Requests: 500m to 1 CPU
+- CPU Limits: 1 CPU to 2 CPUs
+- Memory Requests: 512Mi to 1Gi
+- Memory Limits: 1Gi to 2Gi
 
-Then you can connect to the broker at `localhost:6650`.
+**Heavy Load:**
+
+- CPU Requests: 1 CPU to 2 CPUs
+- CPU Limits: 2 CPUs to 4 CPUs
+- Memory Requests: 1Gi to 2Gi
+- Memory Limits: 2Gi to 4Gi
 
 ## Uninstallation
@@ -102,8 +102,7 @@ This command removes all the Kubernetes components associated with the chart and
 
 To get the status of the ETCD and Broker pods:
 
 ```sh
-kubectl get pods -l app=etcd
-kubectl get pods -l app=broker
+kubectl get all
 ```
 
 To view the logs of a specific broker pod:
diff --git a/docs/getting_started/Danube_on_vms.md b/docs/getting_started/Danube_on_vms.md
index ccfaa09..5cd9d0d 100644
--- a/docs/getting_started/Danube_on_vms.md
+++ b/docs/getting_started/Danube_on_vms.md
@@ -27,11 +27,16 @@ Danube is an open-source distributed Pub/Sub messaging platform written in Rust.
 
 1. **Upload the `danube-broker` binary** to each of the 3 VMs designated for brokers.
 
-2. **Run the Broker**: Start each broker with the appropriate configuration.
+2. **Customize the Danube cluster config file**
+
+   A sample configuration file can be found [HERE](https://github.com/danrusei/danube/tree/main/config).
+
+3. **Run the Broker**: Start each broker with the appropriate configuration.
+
    Example command to start a broker:
 
    ```bash
-   RUST_LOG=danube_broker=info ./danube-broker --cluster-name "MY_CLUSTER" --meta-store-addr "ETCD_SERVER_IP:2379"
+   RUST_LOG=danube_broker=info ./danube-broker --config-file config/danube_broker.yml
   ```
 
   Replace `ETCD_SERVER_IP` with the IP address of your ETCD server.
@@ -54,11 +59,6 @@ Danube is an open-source distributed Pub/Sub messaging platform written in Rust.
 
 **Log Files**: For debugging, check the logs of each Danube broker instance.
 
-## Troubleshooting
-
-* **ETCD Connection Issues**: Ensure that the `--meta-store-addr` is correctly set and that ETCD is accessible from each broker VM.
-* **Broker Not Starting**: Verify that the correct ports are available and that the Danube binary has execute permissions.
-
 ## Additional Information
 
 For more details, visit the [Danube GitHub repository](https://github.com/danrusei/danube) or contact the project maintainers for support.
diff --git a/mkdocs.yml b/mkdocs.yml
index d41c172..e013dc3 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -4,6 +4,7 @@ nav:
   - Getting Started:
     - Run Danube on VMs or Bare Metal: getting_started/Danube_on_vms.md
     - Run Danube on Kubernetes Cluster: getting_started/Danube_on_k8s_helm.md
+    - Run Danube on Kubernetes on Local Machine: getting_started/Danube_local_machine_k8s.md
   - Architecture:
     - Danube architecture: architecture/architecture.md
     - Messages: architecture/messages.md
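
To preview how the new page fits into the site navigation, MkDocs can serve the documentation locally; this assumes MkDocs and any theme referenced in mkdocs.yml are installed.

```bash
# Build and serve the docs locally at http://127.0.0.1:8000
mkdocs serve
```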