
I have deployed a kubeadm-based Kubernetes cluster (v1.24.3) consisting of one control-plane node and 3 worker nodes (all CentOS 7 VMs). They all run "on premises" on a single physical host.

On this setup I am trying to deploy a CNI network plugin, but the CNI provider containers fail on the worker nodes; the error reported by kubectl logs is 'Get "https://10.96.0.1:443/api?timeout=32s": dial tcp 10.96.0.1:443: connect: no route to host'.
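
For reference, this is roughly how I am reading that error (the pod name below is just a placeholder for one of the failing CNI pods):

    # show the log of one of the failing CNI pods (name is a placeholder)
    kubectl -n kube-system logs <failing-cni-pod>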

The pod deployed on the control-plane node is running without errors.

I get this behaviour whether I install Calico's tigera-operator or Weave Net. Weave Net deploys a DaemonSet whose pod on the control-plane node runs successfully, but the pods deployed on the worker nodes fail with the error above (see the commands below for how I am checking where each pod landed).
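
This is how I am checking which node each CNI pod is scheduled on (I am assuming the relevant pods are the ones matching these names):

    # list the CNI pods together with the node each one is running on
    kubectl get pods -A -o wide | grep -E 'weave|calico|tigera'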

For Calico's tigera-operator, a single pod is deployed on one of the worker nodes; it too fails with the error above.

When I ssh into the control-plane node and issue the command "nc -w 2 -v 10.96.0.1 443", the connection is established. When I issue the same command on any of the worker nodes, the connection is not established and I get the message "Ncat: Connection timed out.".
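
For completeness, these are the checks I am referring to, run from each node (outputs paraphrased):

    # from the control-plane node (192.168.12.17)
    nc -w 2 -v 10.96.0.1 443   # connection is established

    # from a worker node (192.168.12.20)
    nc -w 2 -v 10.96.0.1 443   # "Ncat: Connection timed out."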

From the worker nodes, should I manually configure a route for 10.96.0.1 to the control-plane node(s), and if so, how should I go about it? In my setup the control-plane node has the IP address 192.168.12.17, while one of the worker nodes has 192.168.12.20.
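
To make the question concrete, this is the kind of static route I have in mind but have not applied (10.96.0.1 is, as I understand it, the ClusterIP of the kubernetes Service fronting the API server, and 192.168.12.17 is my control-plane node):

    # hypothetical static route on a worker node, pointing the service IP at the control-plane node
    ip route add 10.96.0.1/32 via 192.168.12.17

Is this the right approach, or is something else in the cluster supposed to handle that address?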
