
I was trying to create a Kubernetes cluster using kubeadm. I had spun up an Ubuntu 18.04 server, installed Docker (and made sure docker.service was running), and installed kubeadm, kubelet, and kubectl.

The following are the steps that I did:

sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu `lsb_release -cs` test"
sudo apt update
sudo apt install docker-ce
sudo systemctl enable docker
sudo systemctl start docker

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get install kubeadm kubelet kubectl -y
sudo apt-mark hold kubeadm kubelet kubectl 
kubeadm version
swapoff -a

Also, to configure the Docker cgroup driver, I edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. In that file, I added Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" and commented out Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml".

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf for reference:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
#Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

After this I ran systemctl daemon-reload and systemctl restart kubelet, and kubelet.service was running fine.

Next, I ran sudo kubeadm init --pod-network-cidr=10.244.0.0/16 and got the following error:

root@ip-172-31-1-238:/home/ubuntu# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-1-238 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.1.238]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-1-238 localhost] and IPs [172.31.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-1-238 localhost] and IPs [172.31.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:  
            timed out waiting for the condition  

    This error is likely caused by:  
            - The kubelet is not running  
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)  

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:  
            - 'systemctl status kubelet'  
            - 'journalctl -xeu kubelet'  

    Additionally, a control plane component may have crashed or exited when started by the container runtime.  
    To troubleshoot, list all containers using your preferred container runtimes CLI.  

    Here is one example how you may list all Kubernetes containers running in docker:  
            - 'docker ps -a | grep kube | grep -v pause'  
             Once you have found the failing container, you can inspect its logs with:  
            - 'docker logs CONTAINERID'  

After running systemctl status kubelet.service, it seems that kubelet is running fine.
However, running journalctl -xeu kubelet gave the following logs:

kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
kubelet.go:2422] "Error getting node" err="node "ip-172-31-1-238" not found"
kubelet.go:2422] "Error getting node" err="node "ip-172-31-1-238" not found"
controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://172.31.1.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-1-238?timeout=10s": dial tcp 172.31.1.238:6443: connect: connection refused
kubelet.go:2422] "Error getting node" err="node "ip-172-31-1-238" not found"
kubelet.go:2422] "Error getting node" err="node "ip-172-31-1-238" not found"
kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-1-238"
kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://172.31.1.238:6443/api/v1/nodes": dial tcp 172.31.1.238:6443: connect: connection refused" node="ip-172-31-1-238"
kubelet.go:2422] "Error getting node" err="node "ip-172-31-1-238" not found"

Versions:
Docker: Docker version 20.10.12, build e91ed57
Kubeadm: {Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:39:51Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

I am not sure whether this is a connection issue between the kube-apiserver and the kubelet.
Does anyone know how to fix this?

arjunbnair

3 Answers


The kubeadm version used here is 1.23.1. Kubernetes no longer provides direct support for Docker; read here. In my understanding, you have both installed, but they are not connected. Also, I don't see containerd.io specified in the Docker installation command. Refer here.

Option 1: Install containerd. Please follow this step. If the problem still persists, configure the kubelet service to use containerd by adding the following options to the kubelet service:

--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
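Since the 10-kubeadm.conf drop-in shown in the question already sources /etc/default/kubelet into KUBELET_EXTRA_ARGS, a minimal sketch of one way to pass these options (assuming containerd is listening on its default socket path) is:

```
# /etc/default/kubelet (sourced by the 10-kubeadm.conf drop-in)
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart kubelet.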

Option 2: Install Docker properly and configure it as mentioned here.

Rajesh Dutta

I fixed this by following the official Kubernetes documentation on creating a cluster using kubeadm. These are the steps I followed:

#!/bin/bash

sudo apt update -y && sudo apt upgrade -y
sudo apt-get install -y ca-certificates curl gnupg lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get install docker.io -y
systemctl enable docker.service
systemctl start docker.service

echo 1 > /proc/sys/net/ipv4/ip_forward
lsmod | grep br_netfilter
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo hostnamectl set-hostname master-node

kubeadm init --pod-network-cidr=10.244.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This can be run on an Ubuntu 20.04 server as a shell script, and the master node will be created.

Tested on kubeadm version: 1.23.1
Tested on kubernetes version: 1.23.1
Container Runtime: Docker

Document reference:

arjunbnair

The problem is that you are not specifying the advertiseAddress. I had the same issue, and it took me a couple of hours to find it.
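For example (a sketch; 172.31.1.238 is the node's private IP taken from the question, substitute your own), the address can be passed directly to kubeadm init:

```
sudo kubeadm init --apiserver-advertise-address=172.31.1.238 --pod-network-cidr=10.244.0.0/16
```

By default kubeadm advertises the address of the default network interface, so setting this explicitly mainly matters on hosts with multiple interfaces.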