
I have a multi-cluster/multi-zone k8s platform running on Google Kubernetes Engine. The underlying GCP VPC network is running in global routing mode. The k8s services are assigned internal IP (clusterIP) addresses via Alias IP subnets.

I can ping nodes from one cluster to the other, so there are no problems with firewall rules or routing in general, but I cannot connect to the individual services on their internal Alias IPs across clusters.

I can connect to the services from other nodes and containers in the same cluster, but if I create an instance outside the k8s cluster in the same zone I cannot connect.

It seems likely that the Alias IP ranges are not being routed even though the subnets appear in the VPC routing table.

Is there some way to ensure that all the Alias IP subnets are correctly routed across the whole VPC?

Some detail...

kubectl get services --namespace production
NAME                               TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                         AGE
elasticsearch                      LoadBalancer   10.0.64.103   xxx.xxx.xxx.xxx   9200:30182/TCP,9300:31166/TCP   1m

gcloud compute routes list
NAME                            NETWORK  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-ac89edf7c623eb22  foo      10.0.64.0/19   foo                       1000

The clusterIP is in the listed subnet range but is not reachable outside the local k8s cluster.

JohnnyD

1 Answer


This is expected behavior with your existing setup. I believe what you are experiencing is a restriction of Alias IPs, which is documented in the GCP documentation on "Creating VPC-native clusters using Alias IPs":

"Cluster IPs for internal Services remain only available from within the cluster. If you want to access a Kubernetes Service from within the VPC, but from outside of the cluster (for example, from a Compute Engine instance), use an internal load balancer."

So you should consider using an internal load balancer to reach the services running inside a cluster from outside of it.
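
As a rough sketch of what that looks like (the service name elasticsearch-internal and the app: elasticsearch selector are assumptions for illustration; the ports come from your kubectl output above), an internal load balancer on GKE is just a LoadBalancer Service with the internal annotation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal   # assumed name for illustration
  namespace: production
  annotations:
    # Tells GKE to provision an internal (VPC-only) load balancer instead of an
    # external one; newer GKE versions use networking.gke.io/load-balancer-type.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch           # assumption: match the labels on your Elasticsearch pods
  ports:
    - name: http
      port: 9200
      targetPort: 9200
    - name: transport
      port: 9300
      targetPort: 9300
```

The Service then gets an internal IP on the VPC that other instances in the same region can reach.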

Nur
  • I found that after I posted the question. Unfortunately internal load balancer IPs are not routable outside the region even if the underlying VPC is. I'm not really any further forward - I need to reach Europe from North America. – JohnnyD Sep 20 '18 at 13:57
  • You can try using [multi-cluster ingress](https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress#overview) which will provide a single front end for both clusters. In this case, no ILB will be required. – Nur Sep 21 '18 at 01:07
  • I will use a multi-cluster ingress for public access but it doesn't help with inter-cluster communications. I think a VPN is the only solution. – JohnnyD Sep 21 '18 at 05:09
  • I believe there was a feature request for Global Internal Load Balancers, so that may be a thing in the future. For now, though, you can also use nodePort services and route traffic to a node_IP:NodePort – Patrick W Sep 21 '18 at 19:44
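
Following up on the NodePort suggestion in the last comment: node IPs are the VMs' primary addresses, so they are routable across the global VPC (as the cross-cluster pings in the question confirm), which makes node_IP:nodePort reachable from the other region. A minimal sketch, assuming the same app: elasticsearch selector as above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-nodeport   # assumed name for illustration
  namespace: production
spec:
  type: NodePort
  selector:
    app: elasticsearch           # assumption: match the labels on your Elasticsearch pods
  ports:
    - name: http
      port: 9200
      targetPort: 9200
      nodePort: 30182            # optional fixed port in the default 30000-32767 range
```

You would still need a firewall rule allowing the NodePort range between the clusters' node subnets, and some way (DNS or a client-side node list) to choose which node IP to target.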