This is a setup that provides information about the temperature, humidity, dust and noise in a specific room (it is meant for indoor use).
For this setup I need two hosts, the Raspberry Pi and another PC/server; running Kafka, Spark and Postgres on the Raspberry Pi alone is a bit too much ...
- Raspberry Pi: I deployed the `publisher` and `kafka`
- Server: I deployed `spark`, `backend` (APIs) and `frontend`. And because ... well why not, everything runs in a Minikube cluster.

I might eventually migrate my API backend to serverless functions running under Kubeless.
- Download Apache Kafka (Scala version 2.11), and extract the archive anywhere you want (e.g. under your home dir)
- Set the kafka home environment variable as follows:
export KAFKA_HOME=<full_path>/kafka_2.11-2.3.1
- Allow Kafka to expose its IP address to other devices by setting `advertised.host.name` in `server.properties` and `metadata.broker.list` in `producer.properties` to the public IP address, and `host.name` to `0.0.0.0`
- Go to the `kafka` folder and run the following command:
sh kafka_start.sh
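Before moving on to the publisher, it is worth verifying that the broker is actually reachable from the other host. A minimal sketch of such a check (the helper name `broker_reachable` is mine, not part of the repo):

```python
import os
import socket

def broker_reachable(addr: str, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to 'host:port' succeeds."""
    host, _, port = addr.partition(":")
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # CX_KAFKA_URL is set later in this guide; default to the local broker.
    addr = os.environ.get("CX_KAFKA_URL", "localhost:9092")
    print(addr, "reachable:", broker_reachable(addr))
```

If this prints `False` from the server, revisit the `advertised.host.name` setting above.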
- Go to the `publisher` folder, and install the dependencies
pip3 install -r requirements.txt
- Set the environment variable `CX_KAFKA_URL` as follows:
export CX_KAFKA_URL="<kafka_server_ip>:9092"
- Start the publisher
nohup python3 sensors_init.py > sensors.log &
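`sensors_init.py` is not reproduced here, but its job can be sketched roughly as follows. This is a sketch under assumptions: it uses the `kafka-python` client (the repo may use a different one), and the hard-coded readings stand in for the real sensor polling on the Pi.

```python
import json
import os
import time

def encode_reading(sensor: str, value: float, ts: float) -> bytes:
    """Serialize one sensor reading as a JSON payload for Kafka."""
    return json.dumps({"sensor": sensor, "value": value, "ts": ts}).encode("utf-8")

def run_publisher() -> None:
    # kafka-python is an assumption; CX_KAFKA_URL is the broker set above.
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers=os.environ["CX_KAFKA_URL"])
    while True:
        # These fixed values stand in for reading the actual sensors.
        for sensor, value in {"temperature": 21.5, "humidity": 48.0}.items():
            producer.send(sensor, encode_reading(sensor, value, time.time()))
        producer.flush()
        time.sleep(60)
```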
- Download Apache Spark (Scala version 2.11), and extract the archive anywhere you want (e.g. under your home dir)
- Set the spark home environment variable as follows:
export SPARK_HOME=<full_path>/spark-2.4.4-bin-hadoop2.7
- Go to the `spark` folder, and install the dependencies
pip3 install -r requirements.txt
- Set the environment variables as follows (replace the ip and user/pwd with yours):
export CX_KAFKA_URL="localhost:9092"
export CX_DB_URL="jdbc:postgresql://localhost:5432/pi"
export CX_DB_DRIVER="org.postgresql.Driver"
export CX_DB_USER="sa"
export CX_DB_PWD="sa"
- Start the Spark bootstrapper
nohup python3 spark_init.py >/dev/null 2>&1 &
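The `CX_DB_*` variables above map directly onto Spark's JDBC writer options. A sketch of how `spark_init.py` might consume them, assuming a Structured Streaming job over the Kafka topics (the table name `readings` is hypothetical):

```python
import os

def jdbc_options() -> dict:
    """Map the CX_DB_* environment variables onto Spark's JDBC options."""
    return {
        "url": os.environ["CX_DB_URL"],
        "driver": os.environ["CX_DB_DRIVER"],
        "user": os.environ["CX_DB_USER"],
        "password": os.environ["CX_DB_PWD"],
        "dbtable": "readings",  # hypothetical table name
    }

def run_job() -> None:
    # Requires pyspark plus the spark-sql-kafka package on the classpath.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("smart-home").getOrCreate()
    stream = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", os.environ["CX_KAFKA_URL"])
              .option("subscribe", "temperature,humidity,dust,noise")
              .load())
    # Write each micro-batch to Postgres over JDBC.
    (stream.selectExpr("CAST(value AS STRING) AS value")
           .writeStream
           .foreachBatch(lambda df, _: df.write.format("jdbc")
                         .options(**jdbc_options()).mode("append").save())
           .start()
           .awaitTermination())
```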
minikube start --vm-driver="kvm2" --insecure-registry="ek:5000" --memory="4000mb"
eval $(minikube docker-env)
docker run -d -p 5000:5000 --restart=always --name ek registry:2
kubectl create ns kubeless
kubectl label namespace kubeless istio-injection=enabled
kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.5/kubeless-v1.0.5.yaml
kubectl create -f https://raw.githubusercontent.com/kubeless/kubeless-ui/master/k8s.yaml
kubectl create ns smart-home
kubectl label namespace smart-home istio-injection=enabled
kubectl expose deployment hello-minikube --type=NodePort --port=8080
minikube service ... --url
eval $(minikube docker-env)
docker build backend -t smarthome-api:0.8
docker build db -t smarthome-db:0.8
docker build frontend -t smarthome-webui:0.8
docker tag smarthome-api:0.8 ek:5000/smarthome-api:0.8
docker tag smarthome-db:0.8 ek:5000/smarthome-db:0.8
docker tag smarthome-webui:0.8 ek:5000/smarthome-webui:0.8
docker push ek:5000/smarthome-api:0.8
docker push ek:5000/smarthome-db:0.8
docker push ek:5000/smarthome-webui:0.8
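The build/tag/push sequence above is the same three commands per image, so it can be scripted. A sketch assuming the same folder and image names (`push_all` and its `dry_run` flag are mine):

```python
import subprocess

REGISTRY = "ek:5000"
VERSION = "0.8"
IMAGES = {"backend": "smarthome-api",
          "db": "smarthome-db",
          "frontend": "smarthome-webui"}

def push_all(dry_run: bool = False) -> list:
    """Build, tag and push every image; return the commands it would run."""
    cmds = []
    for folder, name in IMAGES.items():
        local = f"{name}:{VERSION}"
        remote = f"{REGISTRY}/{local}"
        cmds += [["docker", "build", folder, "-t", local],
                 ["docker", "tag", local, remote],
                 ["docker", "push", remote]]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```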
Expose the web UI using an Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: smart-home-ui
spec:
  rules:
  - host: webui.192.168.99.100.nip.io
    http:
      paths:
      - backend:
          serviceName: smart-home-ui
          servicePort: 80
curl --location --request POST 'http://smart-home.info/temperature/range' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data-raw '{
"from_date":"2020-01-02",
"to_date":"2020-01-04"
}'
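The same call can be made from Python with the standard library; a sketch of a small client for that endpoint (the function names and the date validation are mine, and the base URL mirrors the curl example):

```python
import json
import urllib.request
from datetime import date

def range_payload(from_date: str, to_date: str) -> bytes:
    """Build the JSON body expected by the /temperature/range endpoint."""
    if date.fromisoformat(from_date) > date.fromisoformat(to_date):
        raise ValueError("from_date must not be after to_date")
    return json.dumps({"from_date": from_date, "to_date": to_date}).encode("utf-8")

def query_range(base_url: str, from_date: str, to_date: str) -> dict:
    """POST a date range and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{base_url}/temperature/range",
        data=range_payload(from_date, to_date),
        headers={"Accept": "application/json",
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `query_range("http://smart-home.info", "2020-01-02", "2020-01-04")` reproduces the curl call above.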
kubectl label namespace default istio-injection=enabled