
I have a GKE cluster with a handful of nodes, and I would like pods in this cluster to be able to connect to remote hosts on a private network that is reachable via a site-to-site VPN provided by GCE. As far as I can tell, there is no simple way to assign an address to a pod for outbound connections. (It does not seem feasible to add each pod-CIDR to the VPN configuration every time a node is added or replaced.) Do I have to set up a NAT bridge external to the cluster, or is there some Kubernetes way to control the outbound address of a pod?

Bittrance

1 Answer


Instead of adding each node's pod-CIDR to the VPN config, you could add the entire cluster-CIDR (the range from which any new or recreated nodes will have their pod-CIDRs allocated).

To find the cluster-CIDR:

gcloud container clusters describe your-cluster | grep clusterIpv4Cidr
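For example, assuming a classic (policy-based) Cloud VPN tunnel, the cluster-CIDR could be included in the tunnel's local traffic selector when the tunnel is created. This is only a sketch; the tunnel name, gateway, peer address, secret, and CIDR ranges below are placeholders:

# hypothetical values throughout; substitute your own gateway, peer, secret, and ranges
gcloud compute vpn-tunnels create gke-to-office \
    --region us-central1 \
    --target-vpn-gateway my-vpn-gateway \
    --peer-address 203.0.113.1 \
    --shared-secret "replace-me" \
    --ike-version 2 \
    --local-traffic-selector 10.200.0.0/14 \
    --remote-traffic-selector 192.168.0.0/24

The remote side of the VPN also needs to route traffic for that cluster-CIDR back through the tunnel, since the pods' addresses are what it will see.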
CJ Cullen
  • Aha, that would probably work. Of course, nrpe has no config to accept a CIDR for the host address, but inetd could probably help. How small can clusterIpv4Cidr be? Does GKE assume that it can hand out /24s? (I can't see that this cluster will ever need more than a handful of pods per node.) – Bittrance Jan 21 '16 at 05:54
  • GKE (and the defaults for Kubernetes) are very *ahem* generous in their IP space allocations. It always gives each node a /24 (256 - broadcast - net intf = 254 pods), and gives each cluster a /14 (space for 1024 /24's). The *current* behavior dishes out /24's from the bottom of that /14, so you might be able to get away with only configuring your VPN for a subset, but no guarantees that that continues to work. Services on GKE are allocated from the top of that /14, so depending on if/how you want those to work across your VPN, you may need to consider that too. – CJ Cullen Jan 21 '16 at 17:51
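If service traffic also needs to traverse the VPN, the service range can be checked alongside the pod range. A minimal sketch, assuming the cluster description exposes a servicesIpv4Cidr field (field names may vary across GKE versions):

# print both the pod range and the service range for the cluster
gcloud container clusters describe your-cluster \
    | grep -E 'clusterIpv4Cidr|servicesIpv4Cidr'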