Context.
I am following the basic Kubernetes installation here (on Hetzner Cloud, in case it matters): 1 controller and 1 worker.
Everything looks fine at first:
- the servers have an external interface (public IPv4) and an internal one (typically 10.0.0.2 or 10.0.0.3)
- The controller goes up
- I install flannel
- I set options for kubeadm init and change the kubelet config to use only the internal IP (otherwise the external IP shows up as the "INTERNAL-IP" of the nodes). Namely:
private_ipaddr=$( ifconfig eth1 | grep -i inet | head -1 | awk '{print $2}' ) # eth0 is the public IPv4 interface, eth1 the private one
echo "KUBELET_EXTRA_ARGS='--node-ip ${private_ipaddr}'" > /etc/sysconfig/kubelet
systemctl daemon-reload
systemctl restart kubelet
kubeadm init --apiserver-advertise-address=$private_ipaddr --pod-network-cidr=10.244.0.0/16
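(For reference, I believe the same node-ip / advertise-address / pod-CIDR settings can also be written as a kubeadm config file; this is only a sketch of my understanding of the v1beta3 format, I actually used the flags above.)
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: ${private_ipaddr}   # same as --apiserver-advertise-address
nodeRegistration:
  kubeletExtraArgs:
    node-ip: ${private_ipaddr}          # same as the KUBELET_EXTRA_ARGS line
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16              # same as --pod-network-cidr
EOF
kubeadm init --config kubeadm-config.yaml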
- then I join the worker, and the advertised address for the API server is indeed internal (10.0.0.3, for example)
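(One way I check the advertised address: the endpoint of the built-in kubernetes Service in the default namespace reflects what the API server advertises, so the following should show the internal IP on port 6443.)
kubectl get endpoints kubernetes -n default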
kubectl get nodes -o wide
shows two nodes Ready after a few minutes.
- then I deploy one pod. The pod is a simple busybox or alpine.
- the pod gets deployed on the only worker.
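(The test pod is nothing special, roughly something like this; the name testbox is just an example:)
kubectl run testbox --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl get pods -o wide   # the NODE column shows it landed on the worker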
- from the deployed pod I try to reach the FQDN of the CoreDNS pod. It does not work.
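(A sketch of the failing test, assuming the test pod is called testbox; k8s-app=kube-dns is the label CoreDNS carries in kube-system:)
kubectl exec testbox -- nslookup kubernetes.default.svc.cluster.local   # this is the step that does not work
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide             # shows the CoreDNS pod IPs for a direct check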
- There is no firewall on the systems aside from the Kubernetes iptables chains. Between the servers there is no firewall on the internal network, while traffic on the public IPv4 addresses is firewalled (except port 22).
- What I discovered is: if I open the firewall between the two public IPv4 addresses of the servers, then the connection between the pod and the CoreDNS pod works. Otherwise it doesn't.
This leads me to realize that, even though the node-ip is set to the internal IP, Kubernetes still sends the pod-to-pod traffic over the public IPv4 interface rather than over the internal network.
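(How I convinced myself of that, beyond the firewall experiment, assuming flannel's default VXLAN backend on UDP 8472: while the pod keeps retrying the lookup, tcpdump shows which interface the encapsulated traffic actually uses, and flannel records the address it advertises in a node annotation; worker-name is a placeholder.)
tcpdump -ni eth0 udp port 8472   # VXLAN traffic on the public interface
tcpdump -ni eth1 udp port 8472   # VXLAN traffic on the private interface
kubectl get node <worker-name> -o yaml | grep flannel.alpha.coreos.com/public-ip   # which address flannel advertises for this node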
Hence the question: how can I tell Kubernetes to use only the internal network and not the external one?