In our Google Cloud (Kubernetes-backed) setup of multiple projects, workloads and services (load balancers), we configure the load balancers explicitly, but the configuration seems to change on its own.

We add specific nodes to our load balancers, but after a while all of our nodes (from different pools) end up attached to all of our load balancers. Once they have been magically added back to the load balancer, we remove them again, and some time later they are all back.

I realize there is a lot of missing implementation information, but I was hoping there are some well-known patterns someone thinks we might not be following. I will do my best to post configuration details.

Brad Rust
  • Are your load balancers created by Kubernetes using a Service with `type LoadBalancer`, or did you create them manually? – Anton Kostenko May 23 '18 at 13:20
  • The load balancers were created manually, mainly so we could use nginx internally for some service routing. However, we are open to changing that or re-creating the load balancers if required/suggested. – Brad Rust May 23 '18 at 20:09

1 Answer

When you use manually created load balancers on Google Cloud together with Kubernetes Services of type: NodePort, it is almost the same as using type: LoadBalancer for your Services; but in the second case the LB is created and managed by Kubernetes, and you don't need to take care of it yourself.
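
For comparison, here is a minimal sketch of a Kubernetes-managed Service of type: LoadBalancer; the name, label and ports are hypothetical placeholders, not something from your setup:

```yaml
# Sketch of a Service of type: LoadBalancer.
# On GKE, Kubernetes provisions and manages the Google Cloud load balancer for you.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app          # hypothetical label on your Pods
  ports:
    - port: 80           # port exposed by the load balancer
      targetPort: 8080   # port your containers listen on
```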

Because a Service of type: NodePort binds the port on all of your nodes, all of them have to be added to the LB as backends. Check the documentation, section "Proxy-mode: iptables". Maybe in your situation Kubernetes tried to manage the load balancers and added all your nodes to them, because every node can serve requests for the Service. Frankly, I haven't seen installations where load balancers were created manually and pointed at Kubernetes.
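
To illustrate, here is a sketch of a NodePort Service (names and ports are hypothetical). kube-proxy opens the same node port on every node in the cluster, which is why every node is a valid backend for an external load balancer:

```yaml
# Sketch of a Service of type: NodePort.
# kube-proxy opens nodePort (here 30080) on every node; traffic hitting
# any node on that port is forwarded to the Service's Pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport  # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app          # hypothetical label on your Pods
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # must be in the cluster's NodePort range (default 30000-32767)
```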

On Google Cloud, I highly recommend using Ingress (based on the Google Load Balancer or Nginx) or Services of type: LoadBalancer, if you don't need custom routing.

You can implement it like that (for Nginx Ingress); a minimal sketch of the manifests follows the list:

  • Deploy the Nginx Ingress Controller with a Service of type: LoadBalancer. It will create a LoadBalancer for you as the entry point for all your traffic.
  • Deploy your application Service with type: ClusterIP.
  • Create an Ingress object for your application's Service and write all your routing rules there.
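
A minimal sketch of steps 2 and 3, assuming a hypothetical application `my-app` that is already deployed and listens on port 8080 (names, hosts and paths are placeholders; on older clusters the Ingress API group is extensions/v1beta1 rather than networking.k8s.io/v1):

```yaml
# Step 2 (sketch): expose the application inside the cluster only.
apiVersion: v1
kind: Service
metadata:
  name: my-app           # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: my-app          # hypothetical label on your Pods
  ports:
    - port: 80
      targetPort: 8080
---
# Step 3 (sketch): route external traffic coming through the Nginx Ingress Controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress   # hypothetical name
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```
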
Anton Kostenko