Situation:

I am trying to create a Kubernetes cluster running in LXC containers, but kubeadm init fails with a timeout after about four minutes. I have done the same on Ubuntu VMs before with no issues, and that cluster is running happily.
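For context, the init command itself is nothing unusual; it is along these lines (the advertise address and pod CIDR below are placeholders, not my real values), run with extra verbosity to see where it stalls:

# placeholder addresses; --v=5 only raises log verbosity
kubeadm init \
  --apiserver-advertise-address=10.10.10.10 \
  --pod-network-cidr=192.168.0.0/16 \
  --v=5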

The kubelet is active (running) according to systemctl, and I can successfully pull all of the kubeadm images.
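By "active" and "pull all images" I mean the standard checks, roughly:

# kubelet service reports active (running)
systemctl status kubelet
# all control-plane images download without errors
kubeadm config images pull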

When I check journalctl -xeu kubelet, it says that it cannot connect to the API server, and then that it cannot register or reach the node:

Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta ... 'Post "https://10.15.10.100:6443/api/v1/namespaces/default/events": dial tcp 10.15.10.100:6443: connect: connection refused'(may retry after sleeping)
...
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://10.10.10.10:6443/api/v1/nodes?fieldSelector=metadata ... connect: connection refused
vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.10.10.10:6443/api/v1/no ... connect: connection refused
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://10.10.10.10:6443/apis/storage.k8s.io/v1/csidriv ... connect: connection refused
vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.10.10.10:6443 ... connect: connection refused
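The connection refused errors suggest nothing is listening on 6443 at all, i.e. the kube-apiserver container never comes up. That can be double-checked with an ordinary socket listing:

# if the apiserver were up, something would be listening on 6443
ss -tlnp | grep 6443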

To dig into the issue, I installed crictl, but it could not connect to the API server either.
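crictl is pointed at the containerd socket (the paths below are the standard defaults, which I assume apply here), and I use it to list what the runtime knows about:

# /etc/crictl.yaml
# runtime-endpoint: unix:///run/containerd/containerd.sock
# image-endpoint: unix:///run/containerd/containerd.sock

# list all pods and containers the runtime has tried to create
crictl pods
crictl ps -a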

I re-installed everything on a fresh LXC and ran into the same issue.

Next, I tried joining the node as a worker to an existing Kubernetes cluster running on normal Ubuntu VMs, to see whether I could use kubectl to find out what is failing. When I checked the Calico CNI pod scheduled onto the node, it showed this repeated error:

Warning  FailedCreatePodSandBox  2m19s               kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15003/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15003/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/16/fs]}: invalid argument: unknown
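Those mount options look like containerd's normal overlayfs snapshotter, so the "invalid argument" seems to come from the kernel rejecting the overlay mount rather than from Kubernetes itself. A sketch of how that could be narrowed down inside the LXC (the paths are containerd's defaults, and the test directory is made up):

# which snapshotter containerd is configured to use (overlayfs is the default)
grep -n snapshotter /etc/containerd/config.toml

# which filesystem actually backs the snapshot directory
stat -f -c %T /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs

# try the same kind of overlay mount by hand on the same filesystem,
# to see whether the kernel itself returns EINVAL outside of containerd
mkdir -p /var/lib/containerd/ovl-test/{lower,upper,work,merged}
mount -t overlay overlay \
  -o lowerdir=/var/lib/containerd/ovl-test/lower,upperdir=/var/lib/containerd/ovl-test/upper,workdir=/var/lib/containerd/ovl-test/work \
  /var/lib/containerd/ovl-test/merged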

Environment:

  • LXCs are from TurnKeyLinux, according to the person who set up the server.
  • The LXC OS is Debian Buster
  • LXC config is the following (with placeholder ips):
arch: amd64
cores: 2
hostname: kubecontrol01
memory: 8192
net0: name=eth0,bridge=vmbr1,firewall=1,gw=10.10.10.1,hwaddr=5E:83:5C:16:4B:68,ip=10.10.10.10/24,type=veth
ostype: debian
rootfs: local-zfs:subvol-100-disk-0,size=50G
searchdomain: 1.2.3.4
swap: 0
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.cgroup.devices.allow: a
lxc.mount.auto: proc:rw sys:rw
  • The host uses ZFS; however, the person who set up the server says the container rootfs should be properly abstracted, so ZFS and OverlayFS should not conflict (a quick check is sketched after this list).
  • kubeadm, kubelet, and kubectl are installed per the instructions on kubernetes.io.
  • containerd.io is installed from download.docker.com using apt-get. Docker itself is NOT installed, since the dockershim is no longer supported in Kubernetes.
  • Swap is off on all LXCs as well as on the host machine.
  • The firewall was temporarily disabled; it made no difference whether it was on or off.
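Regarding the ZFS point above, this is the kind of check I would use to see what the container actually reports as its root filesystem (ordinary df/zfs usage, nothing specific to my setup):

# from inside the LXC: filesystem type of / and of containerd's state directory
df -Th / /var/lib/containerd

# from the Proxmox host: how the subvolume backing this container is provided
zfs list -o name,mountpoint | grep subvol-100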

Summary of Question:

Can I get Kubernetes to run on this setup? Is this an incompatibility, or am I missing some package or configuration? What would I need to do to fix it?

If I missed any details, please let me know in a comment.
