
This is a pretty basic question, so I figure I must be missing something obvious. Does an OpenShift service use round-robin to load balance between pods? Does it forward requests to the pod with the most available resources? Or is it totally random?

My service configuration looks like this:

```yaml
kind: service
metadata:
  name: temp
  labels:
    app: temp
spec:
  port:
    targetPort: temp-port
  to:
    kind: Service
    name: temp
```

1 Answer


Posting this community wiki answer to point to the official OpenShift and Kubernetes documentation (in additional resources), which should answer the question posted.

Feel free to edit and expand.

As per OpenShift documentation (v3.11):

Services

A Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them. Backing pods can be added to or removed from a service arbitrarily while the service remains consistently available, enabling anything that depends on the service to refer to it at a consistent address. The default service clusterIP addresses are from the OpenShift Container Platform internal network and they are used to permit pods to access each other.


Service Proxy Mode

OpenShift Container Platform has two different implementations of the service-routing infrastructure. The default implementation is entirely iptables-based, and uses probabilistic iptables rewriting rules to distribute incoming service connections between the endpoint pods. The older implementation uses a user space process to accept incoming connections and then proxy traffic between the client and one of the endpoint pods.

The iptables-based implementation is much more efficient, but it requires that all endpoints are always able to accept connections; the user space implementation is slower, but can try multiple endpoints in turn until it finds one that works. If you have good readiness checks (or generally reliable nodes and pods), then the iptables-based service proxy is the best choice. Otherwise, you can enable the user space-based proxy when installing, or after deploying the cluster by editing the node configuration file.

Answering the question of how traffic is load balanced when it reaches the Service:

The default implementation is entirely iptables-based, and uses probabilistic iptables rewriting rules to distribute incoming service connections between the endpoint pods.
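To make "probabilistic iptables rewriting rules" more concrete, here is a small Python sketch (not OpenShift code, and the endpoint names are made up) of the scheme kube-proxy's iptables mode uses: with n endpoints it emits one rule per endpoint via the `statistic` module in random mode, where rule i matches with probability 1/(n - i) and the last rule always matches. Cascading those probabilities works out to a uniformly random choice across all endpoints:

```python
import random
from collections import Counter

def pick_endpoint(endpoints, rng=random.random):
    """Simulate the cascading probabilistic iptables rules.

    With n endpoints, rule 0 matches with probability 1/n; if it does
    not match, rule 1 matches with probability 1/(n-1) of the remaining
    traffic, and so on. The last rule always matches, so every endpoint
    ends up with an overall probability of exactly 1/n.
    """
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if rng() < 1.0 / (n - i):
            return ep
    return endpoints[-1]  # unreachable in practice; last rule has p = 1

# Simulate 30,000 connections to a service with three backing pods.
counts = Counter(pick_endpoint(["pod-a", "pod-b", "pod-c"])
                 for _ in range(30_000))
```

Running the simulation, each hypothetical pod receives roughly a third of the connections, which is why the behavior is best described as uniformly random per connection rather than round-robin.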


I'd reckon you can also take a look at additional resources:

Dawid Kruk
  • Thanks for your answer! I'm not sure what "probabilistic iptables rewriting rules" means, though, and this [link](https://scalingo.com/blog/iptables) lists both "Random balancing" and "Round Robin" as methods. I need to know the default method used by OpenShift – Naama L Ackerman Jul 15 '21 at 07:52
  • AFAIK this works with `Random balancing` (with probability). I'd reckon you can inspect the `iptables` rules on your Nodes for more detail. This guide should help: https://www.stackrox.com/post/2020/01/kubernetes-networking-demystified/ – Dawid Kruk Jul 20 '21 at 12:48