Kubernetes
Kubernetes (also known as k8s) is an open-source system for automating the deployment, scaling, and management of containerized applications.
A k8s cluster consists of its control-plane components and node components, each node representing one or more host machines running a container runtime and kubelet.service. There are two options to install Kubernetes: "the real one", described here, or a local install with k3s, kind, or minikube.
Installation
When manually creating a Kubernetes cluster, install etcd and the package groups kubernetes-control-plane (for a control-plane node) and kubernetes-node (for a worker node).
When creating a Kubernetes cluster with the help of kubeadm, install kubeadm and kubelet on each node.
Both control-plane and regular worker nodes require a container runtime for their kubelet instances; it is used for hosting containers. Install either containerd or cri-o to meet this dependency.
To control a Kubernetes cluster, install kubectl on the control-plane hosts and on any external host that is supposed to be able to interact with the cluster.
Optionally, install helm, the Kubernetes package manager.
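For a kubeadm-managed node using containerd, the installation steps above amount to something like the following (the package selection is illustrative):

```
# pacman -S kubeadm kubelet kubectl containerd
```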
Configuration
All nodes in a cluster (control-plane and worker) require a running instance of kubelet.service.
All provided systemd services accept CLI overrides in environment files:
- kubelet.service: /etc/kubernetes/kubelet.env
- kube-apiserver.service: /etc/kubernetes/kube-apiserver.env
- kube-controller-manager.service: /etc/kubernetes/kube-controller-manager.env
- kube-proxy.service: /etc/kubernetes/kube-proxy.env
- kube-scheduler.service: /etc/kubernetes/kube-scheduler.env
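For example, extra kubelet flags could be placed in its environment file. The variable name shown here (KUBELET_ARGS) is an assumption; check which Environment variables the unit actually reads with systemctl cat kubelet.service:

/etc/kubernetes/kubelet.env
```
KUBELET_ARGS=--node-ip=192.168.122.10 --v=2
```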
Networking
The networking setup for the cluster has to be configured for the respective container runtime. This can be done using cni-plugins.
Pass the virtual network's CIDR to kubeadm init via --pod-network-cidr=<CIDR>.
Container runtime
The container runtime has to be configured and started before kubelet.service can make use of it.
CRI-O
When using CRI-O as container runtime, it is required to provide kubeadm init or kubeadm join with its CRI endpoint:
--cri-socket=unix:///run/crio/crio.sock
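As a quick sanity check before running kubeadm, one can verify that the socket actually exists. This helper is a sketch, not part of any package:

```shell
# Print the CRI endpoint if the given socket exists, fail otherwise.
check_cri_socket() {
    if [ -S "$1" ]; then
        echo "unix://$1"
    else
        echo "no socket at $1" >&2
        return 1
    fi
}

# CRI-O listens on /run/crio/crio.sock while crio.service is running.
check_cri_socket /run/crio/crio.sock || true
```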
Running
Before creating a new Kubernetes cluster with kubeadm, start and enable kubelet.service.
Setup
When creating a new Kubernetes cluster with kubeadm, a control-plane has to be created before further worker nodes can join it.
Control-plane
Use kubeadm init to initialize a control-plane on a host machine:
# kubeadm init --node-name=<name_of_the_node> --pod-network-cidr=<CIDR> --cri-socket=<SOCKET>
If run successfully, kubeadm init will have generated configurations for the kubelet and various control-plane components below /etc/kubernetes and /var/lib/kubelet.
Finally, it will output commands ready to be copied and pasted to set up kubectl and make a worker node join the cluster (based on a token, valid for 24 hours).
To use kubectl with the freshly created control-plane node, set up the configuration (either as root or as a normal user):
$ mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
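Alternatively, for the root user, kubectl can be pointed at the admin credentials directly:

```
# export KUBECONFIG=/etc/kubernetes/admin.conf
```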
To install a pod network such as Calico, follow the upstream documentation.
Worker node
With the token information generated in #Control-plane it is possible to make another machine join an existing cluster as a worker node:
# kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name=<name_of_the_node> --cri-socket=<SOCKET>
Replace <SOCKET> with the container runtime's CRI endpoint (see #CRI-O).
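Since the initial token expires after 24 hours, a fresh join command can be printed on the control-plane at any time:

```
# kubeadm token create --print-join-command
```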
Tips and tricks
Tear down a cluster
When it is necessary to start from scratch, use kubectl to tear down a cluster.
$ kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
Here <node name> is the name of the node that should be drained and reset.
Use kubectl get nodes to list all nodes.
Then reset the node:
# kubeadm reset
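As a sketch, draining every node in one go could look like this (${node#node/} strips the node/ prefix that -o name produces):

```
$ for node in $(kubectl get nodes -o name); do
      kubectl drain "${node#node/}" --delete-emptydir-data --force --ignore-daemonsets
  done
```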
Operating from Behind a Proxy
kubeadm reads the http_proxy, https_proxy, and no_proxy environment variables. Kubernetes internal networking should be included in the last one, for example
export no_proxy="192.168.122.0/24,10.96.0.0/12,192.168.123.0/24"
where the second one is the default service network CIDR.
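The default service CIDR 10.96.0.0/12 spans 10.96.0.0 through 10.111.255.255, which a short helper can illustrate (this function is a sketch specific to that /12, not a general CIDR parser):

```shell
# Return success when the given IPv4 address lies in 10.96.0.0/12,
# i.e. first octet 10 and second octet between 96 and 111.
in_service_cidr() {
    IFS=. read -r a b _ _ <<EOF
$1
EOF
    [ "$a" -eq 10 ] && [ "$b" -ge 96 ] && [ "$b" -le 111 ]
}

in_service_cidr 10.96.0.1 && echo "10.96.0.1 is a service address"
in_service_cidr 10.200.0.1 || echo "10.200.0.1 is outside the service CIDR"
```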
Troubleshooting
Failed to get container stats
If kubelet.service emits
Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
it is necessary to add configuration for the kubelet (see the relevant upstream ticket).
Pods cannot communicate when using Flannel CNI and systemd-networkd
See upstream bug report.
systemd-networkd assigns a persistent MAC address to every link. This policy is defined in its shipped configuration file /usr/lib/systemd/network/99-default.link. However, Flannel relies on being able to pick its own MAC address. To override systemd-networkd's behaviour for the flannel* interfaces, create the following configuration file:
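A minimal override along these lines should work; the file name is arbitrary as long as it sorts before 99-default.link, and the flannel* glob matches the interfaces Flannel creates, e.g. flannel.1:

/etc/systemd/network/50-flannel.link
```
[Match]
OriginalName=flannel*

[Link]
MACAddressPolicy=none
```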
Then restart systemd-networkd.service.
If the cluster is already running, you might need to manually delete the interface and the pod on each node, including the master. The pods will be recreated immediately and they themselves will recreate the interfaces.
Delete the flannel.1 interface:
# ip link delete flannel.1
Delete the pods. Use the following command to delete all flannel pods on all nodes:
$ kubectl -n kube-system delete pod -l="app=flannel"
See also
- Kubernetes Documentation - The upstream documentation
- Kubernetes Cluster with Kubeadm - Upstream documentation on how to set up a Kubernetes cluster using kubeadm
- Kubernetes Glossary - The official glossary explaining all Kubernetes specific terminology
- Kubernetes Addons - A list of third-party addons
- Kubelet Config File - Documentation on the Kubelet configuration file
- Taints and Tolerations - Documentation on node affinities and taints