Note: To follow this tutorial we are going to use the Katacoda Single-Node-Cluster, a cloud-hosted Minikube environment. If you want to try these exercises locally, here is the Minikube setup link.
minikube start
Start the cluster with the start command; you now have a running Kubernetes cluster in your terminal. Minikube just started a virtual machine, and a Kubernetes cluster is now running inside that VM.
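You can verify that the VM and the cluster components are up with:
minikube status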
kubectl cluster-info
kubectl is the Kubernetes command line interface. The cluster-info command shows that we have a running master and a dashboard. The dashboard allows you to view your applications in a UI.
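If you are running Minikube locally, you can open that dashboard in a browser with:
minikube dashboard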
kubectl get nodes
This command shows all nodes that can be used to host our applications. A status of Ready means the node is ready to accept applications for deployment.
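For more detail about a node (its capacity, conditions, and the pods scheduled on it), you can describe it, replacing <node-name> with a name from the previous command:
kubectl describe node <node-name>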
kubectl get all
List all resources in the current namespace: deployments, replica sets, pods, and services.
kubectl delete pod <pod-id>
Note: Every pod is created from its deployment definition. Hence, every time you delete a pod, a new one comes up again, because the deployment keeps the number of replicas you defined with 'replicas: X' in the deployment file.
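A minimal sketch of what such a deployment file could look like, using the twogg image from later in this tutorial (the names and values here are illustrative, not the tutorial's actual manifest):
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twogg
spec:
  replicas: 3        # Kubernetes keeps 3 pods running; deleted pods are recreated
  selector:
    matchLabels:
      app: twogg
  template:
    metadata:
      labels:
        app: twogg
    spec:
      containers:
      - name: twogg
        image: twogghub/k8s-intro:1.4-k8s
        ports:
        - containerPort: 8080
EOF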
kubectl delete deployment <deploy-name>
To delete pods permanently, you first have to delete the deployment that manages them and then delete the pods. This removes the pods for good, and of course the deployment itself is deleted permanently as well.
Alternatively, you can delete the deployment from the Kubernetes dashboard UI, or remove it using the deployment file it was created from:
kubectl delete -f deployment_file_name.yml
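For example, with the twogg deployment used in this tutorial, removing it permanently would look like this (the pods it manages are terminated along with it):
kubectl delete deployment twogg
kubectl get pods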
kubectl run twogg --image=twogghub/k8s-intro:1.4-k8s
The run command creates a new deployment (on recent kubectl versions, run creates a bare pod instead; kubectl create deployment twogg --image=... has the same effect). Here we provide the deployment name and the app image location (include the full repository URL for images hosted outside Docker Hub).
This command performed a few things for you:
- Searched for a suitable node where an instance of the application could be run (we have only 1 available node)
- Scheduled the application to run on that Node
- Configured the cluster to reschedule the instance on a new Node when needed
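To inspect in detail what the run command just created (desired replicas, pod template, rollout events), you can describe the new deployment:
kubectl describe deployment twogg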
kubectl expose deployment twogg --port=8080 --external-ip=$(minikube ip) --type=LoadBalancer
kubectl get deployments
In this case, there is 1 deployment running a single instance of twogg. (The instance is running inside a Docker container on that node.) Pods running inside Kubernetes are on a private, isolated network. By default they are visible to other pods and services within the same Kubernetes cluster, but not outside that network. When we use kubectl, we're interacting through an API endpoint to communicate with our application.
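Because we exposed the deployment above on port 8080 and the Minikube VM's IP, you should also be able to reach the application from outside the cluster, for example with curl (assuming the app answers HTTP on its root path):
curl http://$(minikube ip):8080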
kubectl get services
kubectl get pods
kubectl get pods,services --output wide
kubectl describe service twogg
kubectl set image deployment twogg twogg=twogghub/k8s-intro:1.5-k8s
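After changing the image you can watch the rolling update progress, and roll back to the previous image if something goes wrong:
kubectl rollout status deployment twogg
kubectl rollout undo deployment twogg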
kubectl scale --replicas=3 deployment twogg
kubectl get pods --output wide --watch
kubectl get pods --output wide
kubectl delete pod <pod-id>
kubectl get deployment twogg --output wide
kubectl logs <pod-id>
kubectl get pod <pod-id> --output=yaml
kubectl exec -ti <pod-id> -- /bin/bash
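You can also run a single command inside the pod without opening an interactive shell, for example to list the container's environment variables:
kubectl exec <pod-id> -- env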
Note: Please try these commands locally; the Katacoda Kubernetes playground does not support the Kubernetes proxy.
kubectl proxy
This kubectl command creates a proxy that forwards communications into the cluster-wide, private network. The proxy can be terminated by pressing Ctrl-C and won't show any output while it's running.
With the proxy running, we now have a connection between our host and the Kubernetes cluster. The proxy enables direct access to the API from the terminal. The Kubernetes hosted APIs are now available at: http://localhost:8001.
To get the version: http://localhost:8001/version
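For example, with the proxy running, you can query the version endpoint from another terminal with curl:
curl http://localhost:8001/version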
The API server will automatically create an endpoint for each pod, based on the pod name, that is also accessible through the proxy. First we need to get the pod name, and we'll store it in the environment variable POD_NAME:
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo $POD_NAME
Now we can make an HTTP request to the application running in that pod: http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/ (on newer Kubernetes versions, the equivalent path is http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/).
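For example, with the proxy still running in another terminal:
curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/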