Merge branch 'hetzneronline:master' into master
optaris authored Oct 27, 2021
2 parents 4ef9f3b + 1405e15 commit 7070082
Showing 6 changed files with 735 additions and 12 deletions.
126 changes: 126 additions & 0 deletions tutorials/install-caprover/01.en.md
@@ -0,0 +1,126 @@
---
SPDX-License-Identifier: MIT
path: "/tutorials/install-caprover"
slug: "install-caprover"
date: "2021-10-22"
title: "Installing CapRover on Hetzner Cloud"
short_description: "Install your own CapRover instance on a Hetzner Cloud server to easily deploy various apps"
tags: ["Docker", "Lang:YAML"]
author: "Kedas"
author_link: "https://github.com/K3das"
author_img: "https://avatars.githubusercontent.com/u/20052500?v=4"
author_description: ""
language: "en"
available_languages: ["en"]
header_img: "header-1"
cta: "cloud"
---

## Introduction

CapRover is a sleek way to deploy your own apps, or one of the many apps from its large selection, to a cloud server. CapRover has a nice web panel to manage your installation, view metrics, and launch new apps. I wouldn't recommend it if multiple people need to manage one server, since there is only a single login.

**Prerequisites**

- A fresh Hetzner Cloud server running Ubuntu **18.04** *(Ubuntu 20.04 could also work, but isn't fully compatible with CapRover - so use at your own risk)* that you have shell access to
- Any domain

## Step 1 - Install Docker

Since CapRover runs on Docker, we'll need to install it.

First, update your server and install the required packages by running these commands (this tutorial assumes you are logged in as `root`):

```bash
apt-get update
apt-get upgrade -y
apt-get install curl -y
```

Now we can proceed to installing Docker:

```bash
bash <(curl -s https://get.docker.com/)
```

This command installs the latest version of Docker. Once it finishes, you can move on to connecting your domain.
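Before moving on, a quick sanity check can confirm the script worked (a sketch; the exact version string will differ on your server):

```bash
# helper: returns success if the given command exists on PATH
have_cmd() { command -v "$1" >/dev/null 2>&1; }

if have_cmd docker; then
  docker --version
else
  echo "docker not found - check the install script's output"
fi
```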

## Step 2 - Connect your domain

CapRover can run on both a root domain (e.g. `*.example.com`) and a subdomain (e.g. `*.foo.example.com`).

Go to your domain's DNS management panel and create a new `A` record pointing to your server's IP address:

- **TYPE:** `A` record
- **HOST:** `*` (If you're using a subdomain set this to `*.<subdomain_name>`)
- **POINTS TO:** (IP address of your server)
- **TTL:** (Use the default TTL, or set it to `3600`)

To verify that you correctly configured your domain, visit <https://mxtoolbox.com/DNSLookup.aspx>, input `foobar.<your_domain.com>` (`foobar.<subdomain_name>.<your_domain.com>` if you used a subdomain) and check if the domain resolves to the IP address you configured in your DNS settings. **DNS propagation can take a few minutes, so if it doesn't work, just wait some more.** It should take under 30 minutes for DNS to fully propagate.
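If you prefer checking from a shell, the lookup can be scripted (a sketch - `dig` from the `dnsutils` package is assumed, and `foobar.example.com` / `203.0.113.10` are placeholders for your own domain and server IP):

```bash
expected="203.0.113.10"   # your server's IP address (placeholder)
# resolve the test name; empty if dig is missing or DNS hasn't propagated yet
resolved=$(dig +short "foobar.example.com" A 2>/dev/null | head -n1)

# compare a resolved answer against the expected IP
dns_matches() { [ -n "$1" ] && [ "$1" = "$2" ]; }

if dns_matches "$resolved" "$expected"; then
  echo "wildcard record resolves correctly"
else
  echo "no match yet - DNS may still be propagating"
fi
```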

## Step 3 - Run CapRover

CapRover simply runs on top of Docker, making it very modular, and can be installed with one command:

```bash
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
```

Do not change the port mappings, as CapRover will not be able to run on different ports.
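If the container fails to start, a common cause is another service already bound to one of these ports. A small pre-flight check can rule that out (a sketch assuming `ss` from iproute2; it only reports and changes nothing):

```bash
# print any listening socket on the given port (empty output = port is free)
port_line() { ss -Htln 2>/dev/null | awk -v p=":$1" '$4 ~ (p "$")'; }
# returns success if nothing is listening on the port
port_free() { [ -z "$(port_line "$1")" ]; }

for p in 80 443 3000; do
  port_free "$p" || echo "port $p is already in use"
done
```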

## Step 4 - Setup CapRover

Once the previous command has completed and you have waited about 60 seconds for CapRover to fully start, you can log in to the dashboard. You can find it at `http://captain.<your_domain.com>:3000/` (`http://captain.<subdomain_name>.<your_domain.com>:3000/` if you used a subdomain). **Note:** you should only use this URL for the initial setup.

![CapRover Login Screen](images/caprover_login_screen.png)

The default password is `captain42` - **you should change it later.**

When you log in, you'll be greeted by the dashboard:

![CapRover Initial Dashboard](images/caprover_initial_dashboard.png)

Enter your domain (or subdomain) and press "Update Domain". You'll be redirected to `http://captain.<your_domain.com>` - from now on, this is where you access your dashboard. Press "Enable HTTPS" and enter your email address in the popup (this is required for the Let's Encrypt certificate). Once it's done, press "Force HTTPS" to make sure every request is redirected to `https://`.
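To confirm the redirect from a shell, you can inspect the HTTP status code with `curl` (a sketch - the domain is a placeholder, and a 3xx status is what you'd expect once Force HTTPS is on):

```bash
# classify an HTTP status code as a redirect (301, 302, 303, 307, 308)
is_redirect() { case "$1" in 30[12378]) return 0 ;; *) return 1 ;; esac; }

# fetch only the status code; "000" means the request failed entirely
code=$(curl -s -o /dev/null -w '%{http_code}' "http://captain.example.com" 2>/dev/null)
if is_redirect "$code"; then
  echo "HTTP requests are redirected - Force HTTPS is working"
else
  echo "got status '$code' - check the Force HTTPS setting"
fi
```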

Lastly, **don't forget to set a new password by going to "Settings" and filling out the "Change Password" form!**

## Conclusion

You now have a working CapRover instance 🎉! You can use the official docs to learn how to deploy existing One-Click Apps, or your own:

**Enabling NetData Monitoring:** <https://caprover.com/docs/resource-monitoring.html>

**Deploying a OneClick App:** <https://caprover.com/docs/one-click-apps.html>

**Deployment Methods:** <https://caprover.com/docs/deployment-methods.html>

##### License: MIT

<!--
Contributor's Certificate of Origin
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I have
the right to submit it under the license indicated in the file; or
(b) The contribution is based upon previous work that, to the best of my
knowledge, is covered under an appropriate license and I have the
right under that license to submit that work with modifications,
whether created in whole or in part by me, under the same license
(unless I am permitted to submit under a different license), as
indicated in the file; or
(c) The contribution was provided directly to me by some other person
who certified (a), (b) or (c) and I have not modified it.
(d) I understand and agree that this project and the contribution are
public and that a record of the contribution (including all personal
information I submit with it, including my sign-off) is maintained
indefinitely and may be redistributed consistent with this project
or the license(s) involved.
Signed-off-by: Michael Pigal <kedas@uncrftd.xyz>
-->
34 changes: 22 additions & 12 deletions tutorials/install-kubernetes-cluster/01.en.md
@@ -291,9 +291,10 @@ The servers are now prepared to finally install the Kubernetes cluster. Log on t
```bash
master$ kubeadm config images pull
master$ kubeadm init \
    --pod-network-cidr=10.244.0.0/16 \
    --kubernetes-version=v1.21.2 \
    --ignore-preflight-errors=NumCPU \
    --upload-certs \
    --apiserver-cert-extra-sans 10.0.0.1
```

The `kubeadm init` process will print a `kubeadm join` command partway through. You should copy that command for later use (not strictly required, as you can always create a new token when needed). The `--apiserver-cert-extra-sans` flag ensures your internal IP is recognized as a valid IP for the apiserver.
@@ -337,21 +338,21 @@ Both services can use the same token, but if you want to be able to revoke them 
Now deploy the Hetzner Cloud controller manager into the cluster

```bash
master$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/ccm-networks.yaml
```

And set up the cluster networking

```bash
master$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

> This tutorial uses flannel, as the CNI has very low maintenance requirements. For other options and comparisons check the [official documentation](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network)

As Kubernetes with the external cloud provider flag activated will add a taint to uninitialized nodes, the cluster-critical pods need to be patched to tolerate it

```bash
master$ kubectl -n kube-system patch ds kube-flannel-ds --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
master$ kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
```

@@ -364,7 +365,7 @@ Last but not least deploy the Hetzner Cloud Container Storage Interface to the c
```bash
master$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csidriver.yaml
master$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csinodeinfo.yaml
master$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/master/deploy/kubernetes/hcloud-csi-master.yml
```

Your control plane is now ready to use. Fetch the kubeconfig from the master server to be able to use `kubectl` locally
@@ -375,7 +376,16 @@ local$ scp root@<116.203.0.1>:/etc/kubernetes/admin.conf ${HOME}/.kube/config

Or merge your existing kubeconfig with the `admin.conf` accordingly.

### Step 3.4 - Secure nodes

Using the Hetzner Cloud Firewall, you can secure your nodes. Replace `<116.203.0.x>` with the public IPs of your node servers.

```bash
local$ hcloud firewall add-rule k8s-nodes --protocol=tcp --direction=in --source-ips <116.203.0.1>/32 --source-ips <116.203.0.2>/32 --source-ips <116.203.0.3>/32 --port any
local$ hcloud firewall add-rule k8s-nodes --protocol=udp --direction=in --source-ips <116.203.0.1>/32 --source-ips <116.203.0.2>/32 --source-ips <116.203.0.3>/32 --port any
```
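With more nodes this gets repetitive, so the same rules can be built in a small loop (a sketch - the IPs are placeholders, and the `hcloud` calls are guarded so the snippet prints the commands when the CLI isn't available):

```bash
NODE_IPS="203.0.113.1 203.0.113.2 203.0.113.3"   # placeholders for your node IPs

# build the repeated --source-ips arguments from a space-separated IP list
build_sources() {
  for ip in $1; do printf -- '--source-ips %s/32 ' "$ip"; done
}

SOURCES=$(build_sources "$NODE_IPS")
for proto in tcp udp; do
  if command -v hcloud >/dev/null 2>&1; then
    hcloud firewall add-rule k8s-nodes --protocol=$proto --direction=in $SOURCES --port any
  else
    echo "hcloud firewall add-rule k8s-nodes --protocol=$proto --direction=in $SOURCES --port any"
  fi
done
```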

### Step 3.5 - Join worker nodes

In the `kubeadm init` process a join command for the worker nodes was printed. If you don't have that command noted anymore, a new one can be generated by running the following command on the master node

@@ -394,12 +404,12 @@ When the join was successful list all nodes
```bash
local$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   11m   v1.21.2
worker-1   Ready    <none>   5m    v1.21.2
worker-2   Ready    <none>   5m    v1.21.2
```

### Step 3.6 - Setup LoadBalancing (Optional)

Hetzner Cloud does not support LoadBalancer as a Service (yet). Thus [MetalLB](https://metallb.universe.tf/) will be installed to make the LoadBalancer service type available in the cluster.

Expand Down Expand Up @@ -459,7 +469,7 @@ EOF

This will configure MetalLB to use the IPv4 floating IP as LoadBalancer IP. MetalLB can reuse IPs for multiple LoadBalancer services if some [conditions](https://metallb.universe.tf/usage/#ip-address-sharing) are met. This can be enabled by adding an annotation `metallb.universe.tf/allow-shared-ip` to the service.
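As a sketch of that annotation (the service name, label, annotation value, and IP below are all placeholders - use your floating IP for `loadBalancerIP`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend                                   # placeholder name
  annotations:
    metallb.universe.tf/allow-shared-ip: "fip-group"  # same value on every sharing service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.50                     # your floating IP
  ports:
    - port: 80
  selector:
    app: frontend
```

Any other service carrying the same annotation value can then share the IP, provided the conditions linked above are met.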

### Step 3.7 - Setup floating IP failover (Optional)

As the floating IP is bound to one server only I wrote a little controller, which will run in the cluster and reassign the floating IP to another server, if the currently assigned node becomes NotReady.
