
I am trying to expose a service inside of a Kubernetes cluster in one Google Cloud region (us-east) and have it accessible to another Kubernetes cluster in a different region (us-central). I would highly prefer this traffic not use any public IPs but instead stay internal to the project.

What I tried to set up was an internal Google Cloud load balancer. I didn't read until later that:

Internal load balancers are only accessible from within the same network and region.

I thought that was kind of odd and tried to work around it by applying firewall rules or routes, but nothing seems to work. Anything inside of the region's subnet can reach the LB, but anything outside of that region can't.
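For reference, the internal load balancer was created with a Service along these lines (the name, selector, and ports are illustrative, not my actual config):

```yaml
# Sketch of a GKE internal load balancer Service.
# The annotation below is what makes GKE provision an internal
# (RFC 1918 address) load balancer instead of a public one.
apiVersion: v1
kind: Service
metadata:
  name: my-service                # illustrative name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # illustrative selector
  ports:
    - port: 80
      targetPort: 8080
```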

Now, what is truly perplexing is that I have this working in a different way on a different project. There I have two projects with peering set up, and it works just fine. However, that may be because they are both in the same region...

So my question: how do I connect two Kubernetes clusters and keep traffic internal to Google Cloud, using the internal 10.x IPs rather than public IPs? The reason for being so strict about the traffic staying internal is that one of the Kubernetes clusters is isolated, and I would rather it not have any public-facing resources of any kind.

Erik L

2 Answers


You may use a GKE Private Cluster. This is a cluster that is not accessible from the public internet; it can only be accessed internally, and it can only be exposed to your trusted VPC network. Its IP addresses are internal ones of the format 10.x.x.x.

Instructions on how to set up a Private Cluster can be found in this article.
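A minimal sketch of creating one with the gcloud CLI, where the cluster name, region, and master CIDR are placeholders you would replace with your own values:

```shell
# Sketch: create a private GKE cluster whose nodes have only
# internal (RFC 1918) IPs. Names and CIDRs are placeholders.
gcloud container clusters create my-private-cluster \
  --region us-east1 \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28
```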

Pang

Unfortunately, ILBs are regional for now.

Tim Hockin