
Kubernetes Cluster Setup

kubeadm init

Now we can bootstrap the cluster using kubeadm init --config kubeadm.yaml. Once this command finishes, it will provide you with:

  • Some commands to copy configuration and keys to your user's home directory. kubectl will automatically use these to establish an elevated connection to your cluster.
  • A join command, which you can run on other nodes that have kubeadm installed. Using this command you can add WORKER nodes. We will cover adding manager / control plane nodes later.

Run the commands that copy files to your “.kube” directory.
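
On the first control plane node these commands typically look like the following (taken from the standard kubeadm output; your paths should be identical):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config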

untainting nodes

By default kubernetes doesn't allow master nodes to run normal pods, the reasoning being that core kubernetes functions should be snappy and available at all times. Since our small home lab will (hopefully) never max out its resource usage, we can lift this restriction.

kubectl taint node NODENAME node-role.kubernetes.io/master:NoSchedule-
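
To confirm the taint is gone, you can check the node's taints; after untainting, the line should read <none>:

kubectl describe node NODENAME | grep Taints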

joining the other nodes

Using sudo kubeadm init phase upload-certs --upload-certs we can upload the control plane certificates to a secret AND print the certificate key to our console. The created secret expires after two hours, so there is no need to delete it manually. We are only after the console output.
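
The relevant part of the console output looks roughly like this (the key below is just a placeholder; yours will differ):

[upload-certs] Using certificate key:
4ab4e7123123277bd513123123123a1808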

Next we take the join command from the output of the kubeadm init --config kubeadm.yaml run from earlier and combine it with the control plane certificate key.

Example output of kubeadm init --config kubeadm.yaml (you can reprint the join command at any time using kubeadm token create --print-join-command):

kubeadm join 10.0.0.9:6443 --token 74fica.rglepqf1l8fonh9y --discovery-token-ca-cert-hash sha256:b603399ca6fef7c852155c0d9c5d392b3089b6511a674ce77e279dde24a31d9a

Append --control-plane and --certificate-key 4ab4e7123123277bd513123123123a1808 to create the control plane join command:

kubeadm join 10.0.0.9:6443 --token 74fica.rgle123123onh9y --discovery-token-ca-cert-hash sha256:b603399ca4565436345b6511a674ce77e279dde24a31d9a --control-plane --certificate-key 4ab4e7123123277bd513123123123a1808

Network plugin

This is where most guides just stop and leave you on your own, not even able to run pods that communicate with each other, let alone store any kind of persistent data.

We will now address the first issue that comes with running pods independently of any particular node and that isn't supported by kubernetes out of the box: network communication.

There are several network plugins that enable pod-to-pod communication; the most polished one seems to me to be Calico. It also supports dual stack, meaning we can use IPv6 addresses within our cluster.

Calico will do the following relevant things for us:

  • Add IP routes to our different nodes, so that each node can contact every existing pod and service
  • Establish a secure ingress network that can only be used inside the cluster

Besides telling kubernetes that we want to use IPv6, we also need to tell calico.
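
For reference, on the kubernetes side a dual-stack pod and service network is declared in the networking section of kubeadm's ClusterConfiguration. The sketch below uses placeholder CIDRs and is not necessarily identical to this repo's kubeadm.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16,fd00:10:244::/64
  serviceSubnet: 10.96.0.0/12,fd00:10:96::/112

Depending on the kubernetes version, the IPv6DualStack feature gate may also have to be enabled for this to take effect.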

calico.yaml

We had to tweak some of the settings of the default yaml file (the yaml file from my repo already contains these changes).

Enable IPv6

"ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "true"
}

Our cluster nodes don't know on their own which IP they should use to address each other, and since we are using keepalived, they also have volatile IP addresses. Because of that we need to tell calico which address ranges are permitted when choosing from the existing IPs of a node.

containers:
- name: calico-node
  image: calico/node:v3.14.2
  env:
    # autodetect the node's IPv6 address and enable IPv6 support in Felix
    - name: IP6
      value: "autodetect"
    - name: FELIX_IPV6SUPPORT
      value: "true"
    # only consider node addresses within fd00::/64 during autodetection
    - name: IP6_AUTODETECTION_METHOD
      value: "cidr=fd00::/64"

Finally we can deploy calico to our cluster:

kubectl apply -f calico.yaml

By default calico only sets up IPv4 pools. Since our pods also need an IPv6 address, we have to apply an additional pool using calicoctl.

The first two assignments set environment variables for the following command; calicoctl needs to know where to find the kube client configuration and certificates.

DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config ./calicoctl create -f pool_ipv6.yaml
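
For reference, pool_ipv6.yaml is simply a Calico IPPool resource. A minimal sketch looks like this (the name and CIDR are placeholders; the repo's file defines the actual range):

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv6-ippool
spec:
  cidr: fd00:10:244::/64
  natOutgoing: true
  nodeSelector: all()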

Verifying connectivity

Our cluster now needs some time to pull all needed calico images and start its pods on all of our nodes using DaemonSets. You can check the progress using kubectl get pods --all-namespaces. If you run into any issues, take a look at Kubernetes Debugging.

Once all the pods are running, run DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config sudo ./calicoctl node status. The result should look something like this:

Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+------------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+--------------+-------------------+-------+------------+-------------+
| 10.0.0.10    | node-to-node mesh | up    | 06:15:06   | Established |
| 10.0.0.12    | node-to-node mesh | up    | 2020-10-10 | Established |
+--------------+-------------------+-------+------------+-------------+

IPv6 BGP status
+---------------------------+-------------------+-------+------------+-------------+
|       PEER ADDRESS        |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+---------------------------+-------------------+-------+------------+-------------+
| fd00::96c6:91ff:fead:47da | node-to-node mesh | up    | 06:15:05   | Established |
| fd00::1e69:7aff:fe0c:21cd | node-to-node mesh | up    | 2020-10-10 | Established |
+---------------------------+-------------------+-------+------------+-------------+

Kubernetes dashboard

The dashboard is a great tool for monitoring and grasping the big picture. It covers almost everything that’s going on inside your cluster on a general level.

dashboard.yaml

In order to access your dashboard safely within your private network, you want to bind it to a private externalIP. I used 10.0.0.9, the same IP my general API is running under.

Now whenever traffic requesting 10.0.0.9:443 hits any node in your cluster, it will be redirected to port 8443 of a pod labeled k8s-app: kubernetes-dashboard. We bind it to 443 so that we can access the dashboard via https://10.0.0.9 without having to specify the 8443 port.

...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  externalIPs:
    - 10.0.0.9
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
...
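
Once the dashboard manifest is applied, you can check that the Service picked up the external IP:

kubectl -n kubernetes-dashboard get service kubernetes-dashboard

The EXTERNAL-IP column should show 10.0.0.9.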

login

We will be using the “Token” login option.

Run these commands to create a service account for the dashboard, assign it the rights needed to access the dashboard, and print its token.

kubectl create serviceaccount dashboard-admin-sa
kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa
kubectl describe secret dashboard-admin-sa-token

Output:

Name:         dashboard-admin-sa-token-g6jw4
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin-sa
              kubernetes.io/service-account.uid: 361715bd-2be4-4394-883f-1bc9c710c3b8

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUyXxPvrMKu_55K0qlPNgDANxMJO9pRdExlLzYhWkmvwx8Rk0USSEtKvAxc5yYVIHYkYXR7zXsLgzZHMPThu2RVEgoO67O-lMzkh9bgoib8wLFLDYM0nUyT5sjUpb0ZK23PL9HCuTJUFHKUd2e5TpF-3h_cJJlToBjJWI247bU4iAYCTRh4eh4w4YO6Q27V0BQBmCcPQDXyjTFg7bTi7wID3H8zI1NiIsImtpZCI6IpY2UtYWNjb3VudC51aWQiOiIzNjE3MTViZC0yYmU0LTQzOTQtOD5YzcxMGMzYjgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQtYWRtaW4tc2EifQ.DcoSyu6HT4wEUswlXUKPFcob7y7QEwbOaUivz4x6l8RBTEmUPxJJUf8Lb6HNi-NcnZSZV9FWGlzd2NpYVNwQkgyVjdyRDI2Y2dlS0pOcVAxZkNqRFRNSnRLQVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlgzbOpdTGZi0xYmMYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC1hZG1pbi1zYS10b2tlbi1nNmp3NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tc2EiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZm2S55fYG34RWrd1TiBmXeGpllFcdiOLbUy3r6s1_fXb8yDMk1hPxFxdA

Log in with the value of token: (without any leading or trailing whitespace).