
In this Google Cloud Platform page https://cloud.google.com/load-balancing/docs/internal/setting-up-internal#test-from-backend-vms (retrieved 2020-10-12), under the section "Sending requests from load balanced VMs", it says that when a backend VM of an Internal Load Balancer sends a request to the load balancer's IP address, the request is always routed back to the VM itself.

(Screenshot: GCP documentation caution about sending requests from a backend VM)

I need requests to go through the internal load balancer rather than loop back locally, because the node is trying to join a Kubernetes control plane, and it should communicate with the control plane through the control plane's load balancer. The first control plane node is also the only one with a "healthy" status under the load balancer, so the request would always be routed to the correct instance anyway.

Where "10.0.0.149" is the IP of the Internal Load Balancer, I tried using

ip route del 10.0.0.149 dev ens4 table local

but the same route was re-added shortly afterwards (presumably by the guest environment).
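For context, the loopback behaviour comes from a `local`-type route that the GCE guest environment programs for the load balancer's IP. A minimal sketch of inspecting it (the IP 10.0.0.149 and interface ens4 are from my setup; the deletion is shown commented out because the guest agent restores the route anyway):

```shell
# Show the kernel's local routing table; on a backend VM the guest
# environment adds an entry like "local 10.0.0.149 dev ens4 ...",
# which makes the kernel deliver traffic for the ILB IP to the VM itself.
ip route show table local

# Deleting the route only works momentarily -- the guest agent re-adds it.
# (Requires root; 10.0.0.149 / ens4 are specific to this setup.)
# ip route del table local local 10.0.0.149 dev ens4
```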

Is there any technical reason why this must be the case? I understand that it is more efficient network-wise, but I don't understand why it is mandatory. How can I send requests to the Internal Load Balancer when the request originates from a backend VM that is part of the load-balanced backend service? Is there an alternative I should be using instead? I need this particular setup, where a backend VM communicates with the first-initialized backend VM behind the same load balancer.

usui
  • So you want all the VMs under a certain LB to be able to communicate with each other? Is that right? What is your goal / purpose of such communication? – Wojtek_B Oct 13 '20 at 10:55
  • I want the VMs under a certain LB to communicate with each other using the load balancer endpoint, because the load balancer determines which nodes are healthy and unhealthy, and it directs requests only to the healthy VMs – usui Oct 13 '20 at 18:51
  • If the VMs in question are only part of some internal LB, it's not possible to do due to GCP's LB design and restrictions. But you mentioned that "the node is trying to join a Kubernetes control plane" – is the VM in question part of a Kubernetes cluster? Are you trying to create a multi-master control plane? – Wojtek_B Oct 14 '20 at 08:33
  • Yes. It is a k8s cluster, and I'm trying to add more control plane nodes. But I want the internal IP addresses to remain dynamic, as this setup cannot reserve internal IP addresses. kubeadm also does not allow DNS names for advertiseAddress – usui Oct 15 '20 at 03:16
  • Hi @usui, welcome to StackEx. Can you please clarify a couple of things regarding your specific use case? Where does the traffic to the ILB IP address originate from: outside the cluster (a control plane node), or within the cluster (a K8s workload)? I'm assuming you are currently using a kubeadm-based HA cluster, so here is the question: why do you use the 'advertiseAddress' flag in the init phase instead of 'control-plane-endpoint' to advertise the kube-apiserver pool on a port/DNS name (a DNS A record pointing to the static IP address of the ILB)? – Nepomucen Oct 16 '20 at 14:29
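The approach suggested in the last comment could look like the following kubeadm configuration sketch. This is an assumption, not the asker's setup: the DNS name, Kubernetes version, and API version are placeholders, with "k8s-api.internal.example" standing in for a DNS record pointing at the internal load balancer's IP:

```yaml
# Hypothetical kubeadm ClusterConfiguration sketch (v1beta2 config API).
# "k8s-api.internal.example" is a placeholder DNS name for the ILB.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
# A stable endpoint shared by all control plane nodes; unlike
# advertiseAddress, controlPlaneEndpoint accepts a DNS name,
# so the nodes' internal IPs can stay dynamic.
controlPlaneEndpoint: "k8s-api.internal.example:6443"
```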

0 Answers