I want to create a Kubernetes cluster on 4 machines. One has a public IP address and is reachable from the internet (let's call this Master). The Master also has a domain name assigned to it (let's say it's master.foo).

Three machines are on the same private network (192.168.5.0), behind a router (let's call these Worker A/B/C).

I have successfully set up a cluster between these machines:

$ kubectl get nodes

NAME           STATUS   ROLES    AGE     VERSION
Master       Ready    master   5d20h   v1.17.1
WorkerA      Ready    <none>   21h     v1.17.1
WorkerB      Ready    <none>   21h     v1.17.1
WorkerC      Ready    <none>   5d17h   v1.17.1

I also have pods running on the different workers without errors.

I have initialized my cluster with:

sudo kubeadm init --control-plane-endpoint=master.foo --pod-network-cidr=10.244.0.0/16

(Using Flannel as the overlay network.)

The problem is that running kubectl exec or kubectl port-forward on the master results in an error:

paulb@galaxy:~$ kubectl -n pgo port-forward svc/postgres-operator 8443:8443 
error: error upgrading connection: error dialing backend: dial tcp 192.168.5.192:10250: i/o timeout

Where 192.168.5.192 is the private IP of one of the workers, of course. I think this is related to what is stated here: https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model

pods on a node can communicate with all pods on all nodes without NAT

So, should I set up an OpenVPN network between all these machines? If yes, then how would I combine kubeadm init --control-plane-endpoint with the fact that I need Kubernetes to go out on the OpenVPN tun0 adapter? Did I miss something? It seems strange that containers are running on the workers, but I cannot execute commands (kubectl exec) on them from the master.

Paul

1 Answer

When you run kubectl port-forward (or kubectl exec), kubectl connects to the api-server, which in turn connects to the kubelet that runs on every node on port 10250.
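You can reproduce the failing hop directly from the master, bypassing kubectl; a quick check, reusing the worker IP and port from the error message above:

```shell
# From the master: try to open a TCP connection to the kubelet port on the worker.
# 192.168.5.192 and 10250 are taken from the "dial tcp ... i/o timeout" error.
nc -vz -w 5 192.168.5.192 10250
```

If this times out as well, the problem is plain network reachability between master and worker, not anything Kubernetes-specific.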

Since your master node is not in the same network as the workers, and the workers are not reachable from outside, the api-server (which runs on the master node) cannot connect to them.

Yes, a VPN might solve your problem. To make the kubelet use tun0, you need to advertise its IP address with the --node-ip kubelet option:

--node-ip string
IP address of the node. If set, kubelet will use this IP address for the node
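On kubeadm-based installs the flag is typically added through the kubelet's drop-in environment file; a minimal sketch, assuming this node's tun0 address is 10.8.0.2 (a placeholder, substitute each node's own VPN IP):

```shell
# Add the flag to the kubelet environment file created by the kubeadm package
# (10.8.0.2 is a placeholder for this node's tun0 address)
echo 'KUBELET_EXTRA_ARGS=--node-ip=10.8.0.2' | sudo tee /etc/default/kubelet

# Restart kubelet and check that the INTERNAL-IP column now shows the VPN address
sudo systemctl restart kubelet
kubectl get nodes -o wide
```

This has to be done on every node, since each kubelet advertises its own address.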

I assume the master node is the only entry point to your cluster from the outside, so to be able to expose and access pods from outside you need full network connectivity between all your nodes, and especially from master to workers in this case.

Let me know if it was helpful.

Matt
  • Hello, thank you, very helpful. So I need to set `node-ip` to the VPN IP in `/etc/default/kubelet`, for every node in my cluster, right? What other methods are there? Can I set it "automatically" somehow, like at `kubeadm init`? – Paul Jan 22 '20 at 06:52
  • "I assume that master node is the only point of entry" - yes, for now I only have one master. In the future I'd like to make it HA, but that's another discussion right there. I assume the other masters will join the VPN and a load balancer will take the place of my current master in order to properly distribute the incoming application traffic. – Paul Jan 22 '20 at 06:55
  • There was a feature request [on github](https://github.com/kubernetes/kubeadm/issues/203) for *kubeadm* to implement `--node-ip` flag but it got closed and not implemented so you need to add this flag manually. – Matt Jan 22 '20 at 08:19
  • I marked your answer as solved since it seems `kubectl exec` works now. I set up OpenVPN and assigned static IPs to all the servers in the cluster. I also added `--node-ip=` to `/etc/default/kubelet` (including the master). – Paul Jan 23 '20 at 14:44
  • However, would there be a way to avoid using `kubeadm reset` in order to do this? Initially I initialized my cluster using `kubeadm init --control-plane-endpoint=some.public.dns.name` and now I used `/etc/hosts` to map a name to the static IP of the master hence the command now was: `kubeadm init --control-plane-endpoint=`. – Paul Jan 23 '20 at 14:46