
I have two clusters. Cluster A (on Google Container Engine) is a public-facing cluster, and it needs to connect to a private Cluster B (a click-to-deploy cluster on GCE) to access a service. I would like Cluster A to connect to Cluster B through a load balancer. That can work, even though it appears that all GCE load balancers require a public IP address (https://groups.google.com/d/topic/gce-discussion/Dv6289i4_rg/discussion); I'd prefer everything to stay private.

The public IP address by itself wouldn't be so bad if I could just set a simple firewall rule and use the standard Google load balancer. Unfortunately, source tags don't seem to survive the trip across the WAN (or are simply not passed through by the load balancer). This is the rule I'd want to use:

gcloud compute firewall-rules create servicename-lb-from-gke-cluster --allow tcp:1234 --source-tags k8s-gke-cluster-node --target-tags servicename #DOES NOT WORK

After entering the above command, Cluster A cannot communicate with Cluster B (via the load balancer) on TCP port 1234.

This does work, but it is painful because automating it requires supervision: the source cluster's public IP addresses have to be kept up to date in the rule:

gcloud compute firewall-rules create servicename-lb-from-gke-cluster --allow tcp:1234 --source-ranges 100.2.3.4/32,100.3.5.9/32,100.9.1.2/32 --target-tags servicename #Does work, but is painful
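In principle the refresh could be scripted; here's a rough sketch (assuming the source nodes carry the k8s-gke-cluster-node tag as above and that the rule already exists; exact gcloud flags may vary by SDK version):

    #!/bin/bash
    # Sketch: rebuild the firewall rule's source ranges from the current
    # external IPs of the source cluster's nodes (tagged k8s-gke-cluster-node).
    IPS=$(gcloud compute instances list \
      --filter="tags.items=k8s-gke-cluster-node" \
      --format="value(networkInterfaces[0].accessConfigs[0].natIP)" \
      | awk '{printf "%s/32,", $1}' | sed 's/,$//')
    gcloud compute firewall-rules update servicename-lb-from-gke-cluster \
      --source-ranges "$IPS"

But that's a cron job babysitting IP churn, not a real solution.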

HAProxy is another option, as suggested in the Google Groups thread.

Another idea is to open the firewall to the entire WAN and add secure authentication between clusters A and B. Maybe that's worth doing for security reasons anyway? The difficulty could range from easy to hard depending on what clusters A and B are running, though, so it would be nice to have a more general solution.

Does anyone have a better idea? Does anyone else have the same problem?

chrishiestand
  • Just a quick question: in "--target-tags servicename", does servicename actually stand for the service? I thought it should be the name of the cluster you want to connect to. But I may be assuming wrongly. – koressak Sep 25 '15 at 11:51

2 Answers


I'm sorry about the complexity! I'm not an expert on Compute Engine firewalls, but I expect you're correct that source tags only work for internal traffic.

The Kubernetes team is aware that coordinating multiple clusters is difficult, and we're beginning to work on solutions, but unfortunately we don't have anything particularly solid and usable for you yet.

In the meantime, there is a hacky way to load balance traffic from one cluster to the other without requiring the Google Cloud Load Balancer or something like HAProxy. You can specify the internal IP address of one of the nodes in cluster B (or the IP of a GCE route that directs traffic to one of the nodes in cluster B) in the PublicIPs field of the service that you want to talk to. Then, have cluster A send its requests to that IP on the service's port, and they'll be balanced across all the pods that back the service.

It should work because each node of a Kubernetes cluster runs something called kube-proxy, which automatically proxies traffic intended for a service's IP and port to the pods backing the service. As long as the PublicIP is in the service definition, kube-proxy will balance the traffic for you.
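To make that concrete, here's a minimal sketch of such a service definition (using the v1beta3 API; the name, selector, and node IP are placeholders you'd replace with your own):

    kind: Service
    apiVersion: v1beta3
    metadata:
      name: servicename
    spec:
      selector:
        app: servicename
      ports:
        - port: 1234
          targetPort: 1234
      # Internal IP of one cluster B node (placeholder). kube-proxy on
      # that node forwards traffic arriving here to the backing pods.
      publicIPs:
        - 10.240.0.5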

If you stop here, this is only as reliable as the node whose IP you're sending traffic to (though single-node reliability is actually quite high). However, if you want to get really fancy, we can make things a little more reliable by load balancing from cluster A across all the nodes in cluster B.

To make this work, you would put all of cluster B's nodes' internal IPs (or routes to all the nodes' internal IPs) in your service's PublicIPs field. Then, in cluster A, you could create a separate service with an empty label selector and manually populate its endpoints field at creation time with an (IP, port) pair for each IP in cluster B (see the sketch below). The empty label selector keeps the Kubernetes infrastructure from overwriting your manually-entered endpoints, and the kube-proxies in cluster A will load balance traffic for the service across cluster B's IPs. This was made possible by PR #2450, if you want more context.
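Sketched in the v1 API, that pair of objects would look roughly like this (names and IPs are placeholders):

    kind: Service
    apiVersion: v1
    metadata:
      name: cluster-b-service
    spec:
      # No selector, so Kubernetes won't manage the endpoints itself.
      ports:
        - port: 1234
    ---
    kind: Endpoints
    apiVersion: v1
    metadata:
      # Must match the service name for kube-proxy to pick it up.
      name: cluster-b-service
    subsets:
      - addresses:
          # Internal IPs of cluster B's nodes (placeholders).
          - ip: 10.240.0.5
          - ip: 10.240.0.6
        ports:
          - port: 1234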

Let me know if you need more help with any of this!

Alex Robinson
  • Thanks @Alex Robinson, that looks like a clever, not-too-kludgy workaround :-). I'd be happy to test out your second solution (I just had a GCE VM crash over the weekend, so I'm not too trusting of single-node reliability at this point). I'll give this a try the next chance I get, maybe this upcoming weekend. – chrishiestand Apr 13 '15 at 19:10

This is now possible with the official GCP Internal LB: https://cloud.google.com/compute/docs/load-balancing/internal/

Specifically, here is the Kubernetes (GKE) documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing

Note the service annotation:

  annotations:
    cloud.google.com/load-balancer-type: "Internal"

Note that while this LB is already accessible only internally, if you want to restrict it further, e.g. to your cluster's pods, you can add something like this to the service spec:

loadBalancerSourceRanges:
  - 10.4.0.0/14
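Put together, a minimal Service manifest using both settings might look like this (a sketch; the name, port, selector, and CIDR are placeholders for your own values):

    apiVersion: v1
    kind: Service
    metadata:
      name: servicename
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        app: servicename
      ports:
        - port: 1234
          targetPort: 1234
      # Only allow traffic from, e.g., the pod range of the client cluster.
      loadBalancerSourceRanges:
        - 10.4.0.0/14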

To get your pod IP range, you can run:

gcloud container clusters describe $CLUSTER_NAME | grep clusterIpv4Cidr

chrishiestand