
Building the cheapest K8S Cluster: Creating a HA Kubernetes cluster with k3s for only R90

In this post I will show you how to create your very own highly available K8s cluster for just R90 a month…the cheapest I could find.

For this we will be using k3s, a lightweight Kubernetes distribution created by Rancher Labs.
It is so lightweight that the requirement for a server node is only 512MB of RAM and 1 CPU.

That works well for our chosen infrastructure provider…CloudAfrica. They let you get VMs (on KVM) for only R30 a month (excluding VAT) that meet our requirements.

Another good thing about this HA setup is that the minimum number of server nodes needed is 2, instead of the 3 that a standard k8s control plane requires (due to etcd and its quorum requirement). The downside is that an external database is needed, so you might need to get another VM, in which case it would cost R120 a month.

So in total we are getting 4 VMs (nodes):

  • 2 server nodes
  • 1 agent node
  • 1 external db

CloudAfrica also provides OS images of the distributions we need (or that k3s is tested on)…namely Ubuntu 16.04 and Ubuntu 18.04.

Initial Setup

Create 3 Ubuntu 18.04 servers, with 512MB of RAM and 1 CPU each.
Ensure they have unique hostnames, something like: master01.mydomain.co.za, master02.mydomain.co.za and worker01.mydomain.co.za.

Ensure the following ports are open:

  • 6443 tcp (api-server)
  • 8472 udp (flannel)
  • 10250 tcp (kubelet metrics)
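If your VMs run ufw, a minimal sketch for opening these ports on each node could look like this (assuming ufw is already installed and enabled):

sudo ufw allow 6443/tcp    # Kubernetes API server
sudo ufw allow 8472/udp    # flannel VXLAN overlay traffic between nodes
sudo ufw allow 10250/tcp   # kubelet metrics (used by metrics-server)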

External DB Setup

Create a username and password for k3s on your external database (MySQL in this example):

mysql> CREATE USER 'k3s'@'%' IDENTIFIED BY 's3cretp@ss';
mysql> CREATE SCHEMA k3s;
mysql> GRANT ALL ON k3s.* TO 'k3s'@'%';

So the datastore-endpoint will be:

mysql://k3s:s3cretp@ss@tcp(mydb.site.co.za:3306)/k3s

which follows the general format:

K3S_DATASTORE_ENDPOINT='mysql://username:password@tcp(hostname:3306)/k3s'
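Before installing k3s it is worth checking that each server node can actually reach the database with those credentials (a quick sketch, assuming the mysql client is installed on the node):

mysql -h mydb.site.co.za -P 3306 -u k3s -p -e 'SHOW DATABASES;'
# the k3s schema created above should be listed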

Launch Server Nodes

Run this on each server node:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644 \
--datastore-endpoint mysql://k3s:s3cretp@ss@tcp(mydb.site.co.za:3306)/k3s \
-t agent-secret --tls-san k3s.site.co.za \
--node-taint k3s-controlplane=true:NoExecute" sh -

You can then check that the nodes are up and running with:

k3s kubectl get nodes
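The install script also creates a k3s systemd service and writes a kubeconfig to /etc/rancher/k3s/k3s.yaml (world-readable thanks to --write-kubeconfig-mode 644). A rough sketch for using plain kubectl from another machine, pointing it at the fixed registration address configured below:

sudo systemctl status k3s
# copy the kubeconfig off the node, then swap the loopback address for the registration address
sed 's/127.0.0.1/k3s.site.co.za/' /etc/rancher/k3s/k3s.yaml > ~/k3s.yaml
export KUBECONFIG=~/k3s.yaml
kubectl get nodes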

Configure the Fixed Registration Address

This can be the IP or hostname of any of the server nodes, but in many cases those may change over time, so for simplicity I will just point a DNS record at one of them.

If you are scaling your node group up and down then make use of one of these:

  • A layer-4 (TCP) load balancer
  • Round-robin DNS
  • Virtual or elastic IP addresses

So I am going to use a DNS record to point to a node, e.g.:

k3s.site.co.za    <public IP of master01>
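To confirm the registration address works before joining agents, a quick check (assuming dig and curl are available) is to resolve the name and hit the API server port; an HTTP 401 from an anonymous curl is expected and simply proves the port is reachable:

dig +short k3s.site.co.za                 # should return the server node's IP
curl -k https://k3s.site.co.za:6443/      # expect a 401 Unauthorized JSON response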

Join Worker (agent) Node

Run this on the worker node:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent -t agent-secret --server https://k3s.site.co.za:6443" sh -

Ensure the agent is running:

sudo systemctl status k3s-agent
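If the agent fails to join, the service logs usually show why (typically a wrong token or an unreachable registration address):

sudo journalctl -u k3s-agent -f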

The Working Cluster

So now we should have all the k8s system pods starting up and all 3 nodes showing:

ubuntu@master02:~$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
worker01   Ready    <none>   16m   v1.17.0+k3s.1
master01   Ready    master   89m   v1.17.0+k3s.1
master02   Ready    master   87m   v1.17.0+k3s.1
ubuntu@master02:~$ kubectl get po -A -o wide
NAMESPACE     NAME                                      READY   STATUS             RESTARTS   AGE   IP          NODE       NOMINATED NODE   READINESS GATES
kube-system   metrics-server-6d684c7b5-mhvfj            1/1     Running            1          89m   10.42.2.2   worker01   <none>           <none>
kube-system   coredns-d798c9dd-x2v46                    1/1     Running            0          89m   10.42.2.5   worker01   <none>           <none>
kube-system   helm-install-traefik-bgcrk                0/1     Completed          2          89m   10.42.2.3   worker01   <none>           <none>
kube-system   svclb-traefik-rlj4x                       2/2     Running            0          15m   10.42.2.6   worker01   <none>           <none>
kube-system   traefik-6787cddb4b-5bwj6                  1/1     Running            0          15m   10.42.2.7   worker01   <none>           <none>
kube-system   local-path-provisioner-58fb86bdfd-49dgr   0/1     CrashLoopBackOff   6          89m   10.42.2.4   worker01   <none>           <none>

So looking all good.
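As a quick smoke test that scheduling works (and that the NoExecute taint keeps workloads off the server nodes), you can run a throwaway deployment; the names here are just placeholders:

kubectl create deployment hello --image=nginx
kubectl get pods -o wide        # the pod should land on worker01
kubectl delete deployment hello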

The next steps are setting up networking and storage…which is beyond the scope of this article.

The Hard Part

Now comes the hard part…converting your legacy applications to use containers, testing your images and pushing them to a remote image registry.
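As a very rough sketch of that last step (the registry and image names below are placeholders, not part of this setup):

docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0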
