
I posted this in StackOverflow but got redirected here, so I'm asking it again. I'm currently working on a use case where multiple machines behind NAT routers need to be able to act as nodes in a Kubernetes cluster. This presents some serious networking difficulties, because instead of being on the same local network and being able to access other pods by IP trivially, nodes don't even have publicly accessible IPs. I've been trying to figure out a solution using tunneling, but I'm not sure exactly how that would work. All pods need to be able to communicate, so would I have to set up a tunnel between each pod and also from each node to the api server? All machines that act as nodes in our cluster will be connected via WebRTC connections, so theoretically data could be passed by WebRTC as well. Someone else mentioned using a VPN, so if anyone knows more specifics of how to do that, that'd be awesome too. I'm hoping this has been done by others before in one way or another.

1 Answer


A good example of building a Kubernetes cluster over an OpenVPN network is below:

The example uses the tun mode of OpenVPN; however, I would consider tap mode more suitable, because it creates a Layer 2 overlay network and allows you to use any IPv4 subnet on the nodes' VPN interfaces.

In both cases, at least one machine with a public IP is required to serve as the VPN server for the nodes behind NAT.
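A minimal tap-mode server config might look like the sketch below. The interface name, port, and VPN subnet are assumptions for illustration, not values from the linked example:

```
# /etc/openvpn/server.conf -- hypothetical minimal tap-mode server config
port 1194
proto udp
dev tap0                        # Layer 2 (tap) device instead of tun
server 10.8.0.0 255.255.255.0   # assumed VPN subnet; any free IPv4 range works
keepalive 10 120                # detect dead peers behind NAT
persist-key
persist-tun
ca ca.crt                       # PKI files generated with e.g. easy-rsa
cert server.crt
key server.key
dh dh.pem
```

Each NATed node then runs a client config pointing `remote <server-public-ip> 1194` with `dev tap0`, and all nodes end up on the same 10.8.0.0/24 segment.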

You may also need to:

  • specify --apiserver-advertise-address during kubeadm init,
  • set the correct interface in Flannel addon YAML file (to avoid cluster traffic on the default route interface),
  • specify node names manually (--node-name) when joining them to the cluster (kubeadm join).
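Taken together, the bootstrap could look roughly like this, assuming the nodes' VPN interface is tap0 with the control plane at 10.8.0.1 (both hypothetical values):

```shell
# On the control-plane node: advertise the API server on its VPN address,
# not on the NATed default-route address
sudo kubeadm init --apiserver-advertise-address=10.8.0.1

# In kube-flannel.yml, pin Flannel to the VPN interface by adding
# --iface=tap0 to the flannel container args, e.g.:
#   args:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=tap0
kubectl apply -f kube-flannel.yml

# On each worker: join via the VPN address, giving each node an
# explicit name to avoid hostname collisions between NATed machines
sudo kubeadm join 10.8.0.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --node-name worker-1
```

The token and CA hash placeholders come from the `kubeadm init` output; everything else should match whatever subnet and interface your VPN actually creates.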
VAS