
I am struggling to make Services visible across a VPC peering.

I have two GKE clusters (cluster-A and cluster-B), each of them in a different VPC.

I've created a VPC Network Peering connecting both VPCs.

I followed the instructions to enable the ip-masquerade-agent so that the clusters can reach each other's Pods (https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent).

The thing is, when I curl a Pod in cluster-B from cluster-A it works, but when I curl a Service in cluster-B it doesn't.

From a Pod running in cluster-A:

$ curl http://10.132.0.13:8080 # cluster-B Pod
Hello World

$ curl http://10.134.145.111:8080 # cluster-B Service
curl: Connection timed out

How do I make Services visible on both clusters?

Some important information that might help:

cluster-A

servicesIpv4Cidr: 10.30.0.0/18
clusterIpv4Cidr: 10.32.0.0/11

ip-masq-agent configmap:

apiVersion: v1
kind: ConfigMap
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.32.0.0/11
    - 10.30.0.0/18
    resyncInterval: 60s
    masqLinkLocal: true
metadata:
  name: ip-masq-agent
  namespace: kube-system

cluster-B

servicesIpv4Cidr: 10.134.0.0/16
clusterIpv4Cidr: 10.132.0.0/16

ip-masq-agent configmap:

apiVersion: v1
kind: ConfigMap
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.132.0.0/16
    - 10.134.0.0/16
    resyncInterval: 60s
    masqLinkLocal: false
metadata:
  name: ip-masq-agent
  namespace: kube-system
Tariq
  • I am having the same problem while trying to access a K8s ClusterIP Service (on a cluster created with VPC-native enabled) from a VM running in the same VPC. I can ping a Pod from the VM, but not a Service. It looks like some firewall limitation. It's supposed to be possible - see this article - https://medium.com/@kyralak/accessing-kubernetes-services-without-ingress-nodeport-or-loadbalancer-de6061b42d72 - and this is the whole point of having a VPC-native cluster as far as I understand (https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips?hl=en_US&_ga=2.97102108.-2005070644.1541062440) – Miro Feb 07 '19 at 19:57
  • Pods are part of the larger VPC. Services are virtual IPs that really only exist within the cluster. If you want to expose a Service beyond the cluster you can use an ILB. This does not cross VPCs, though. To do that you need either a public LB or a VM with 2 NICs that straddles the VPCs and applies NAT. – Tim Hockin Feb 28 '19 at 18:20
  • did you find a solution @Miro? – Diego Marangoni Aug 28 '19 at 14:28

2 Answers


Reaching a Service in one cluster from another cluster is not directly possible; you need to use a load balancer for that.

The reference that explains this is in the VPC-native clusters (alias IP) section of the documentation, which states:

"Cluster IP addresses for internal Services remain only available from within the cluster. If you want to access a Kubernetes Service from within the VPC, but from outside of the cluster (for example, from a Compute Engine instance), use an internal load balancer."

Even if the example refers to the same VPC, this also applies to different VPCs connected with VPC Network Peering, as they are treated as the "same" VPC with some exceptions, such as transitivity:

"Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not also directly connected, VPC network N2 cannot communicate with VPC network N3 over the peering."

Daniel
  • What do you mean by "Even if the example refers to the same VPC"? If I create two clusters in the same VPC but in different subnets, is it not going to work? – Diego Marangoni Dec 31 '18 at 00:36
  • Yes, but remember that Internal Load balancers are regional, so 2 subnets in the same region will be able to communicate using an Internal Load Balancer (subnet A - Cluster A -> ILB-B - Cluster B - Subnet B) – Daniel Jan 02 '19 at 22:12
  • Is there a way to edit GKE `kube-proxy`? Then I could achieve this by enabling the option `--masquerade-all`, [as seen here](https://stackoverflow.com/questions/43276562/what-does-kube-proxy-masquerade-all-true-mean) – Diego Marangoni Jan 04 '19 at 13:06
  • It's not possible to edit kube-proxy in GKE, as it's part of the [Master components](https://kubernetes.io/docs/concepts/overview/components/#master-components), which are managed by Google. On a side note, have you tried adding the ip-masq-agent [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to the cluster? – Sunny J Jan 14 '19 at 04:07
  • Yes, I did, but the ip-masq-agent only made the Pod network visible, not the Services – Diego Marangoni Jan 30 '19 at 13:37

Not supported by Google. GKE always creates an 'extra hop' VPC, effectively breaking any communication from other VPCs; there is nothing a customer can do about this. Vote for the feature request: https://issuetracker.google.com/issues/244483997?pli=1