0

I installed traefik via Helm. Then I scaled it with

kubectl scale --replicas=2 deployment traefik -n traefik

Now I have two pods running on the same node, even though there is a second node up and running with no problems. How can I tell it to spread the replicas across both nodes?

Peter
  • You could check Pod Affinity – c4f4t0r Jul 11 '22 at 10:04
  • nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your [Pod specification](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) and specify the [node labels](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels) you want the target node to have (a minimal sketch follows these comments). – Ramesh kollisetty Jul 12 '22 at 09:47
  • This will work for now, while I have only 2 nodes. But what about later, when I have many nodes and don't want to pick them manually, but just want to tell the system not to put 2 replicas on the same node? – Peter Jul 12 '22 at 11:48
  • @Rameshkollisetty nodeSelector is not a good fit in this case. If I have two pods and I want those pods to run on different nodes, nodeAffinity or podAntiAffinity is the way to go. – c4f4t0r Jul 13 '22 at 06:44
  • @Peter Can you elaborate on your requirement? Why do you want one pod per node? – Ramesh kollisetty Jul 13 '22 at 09:56
  • In the future this should be a matter of scaling the workload across nodes. The idea is: one pod puts its node under high load, so I scale out to additional replicas, which only makes sense if they are scheduled on different nodes. – Peter Jul 13 '22 at 12:16
  • 2
    @Peter Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod. The following [example](https://docs.openshift.com/container-platform/3.11/admin_guide/scheduling/pod_affinity.html#admin-guide-sched-affinity-examples2-pods) demonstrates pod anti-affinity for pods with matching labels and label selectors. – Ramesh kollisetty Jul 14 '22 at 07:34
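
To illustrate the nodeSelector suggestion from the comment above: it is just a label match added to the pod spec, for example pinning a pod to a specific node via the built-in hostname label (node-2 is a placeholder name):

spec:
  nodeSelector:
    kubernetes.io/hostname: node-2   # placeholder; list real values with kubectl get nodes --show-labels

As the later comments point out, this pins every replica to that one node, so it does not by itself spread replicas across nodes.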

2 Answers

0

Pod anti-affinity can prevent the scheduler from placing a new pod on the same node as pods with the same labels, if the label selector on the new pod matches the labels on the existing pods.
The following example demonstrates pod anti-affinity for pods with matching labels and label selectors.
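
A minimal sketch of what that could look like for the Traefik Deployment here, using the required (hard) variant. The app.kubernetes.io/name: traefik label is an assumption; use whatever labels your Helm release actually puts on the Traefik pods:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: traefik   # assumed label; verify with kubectl get pods -n traefik --show-labels
        topologyKey: kubernetes.io/hostname   # at most one matching pod per node

With the required variant a second replica stays Pending if no other node is available; preferredDuringSchedulingIgnoredDuringExecution (used in the answer below) is the softer version that merely prefers a different node.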

0

My understanding of why this happens by default is that, at the scale Kubernetes is designed for, there would usually be so many nodes and so many pod replicas that this scenario would be highly unlikely, whereas for many of us on smaller clusters it is very annoying. An example pod anti-affinity is shown below. Note that the value in the match expression is automatically injected by Octopus Deploy when we deploy; you would need to inject a suitable label yourself.

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: Octopus.Deployment.Id
                operator: In
                values:
                  - "deployments-38119"
          # topologyKey is a field of the podAffinityTerm (not of the labelSelector);
          # one kubernetes.io/hostname value per node, so matching pods are spread across nodes
          topologyKey: kubernetes.io/hostname
        weight: 100

Edit: I also noticed that you are deploying a proxy, in which case you might want to consider another workload type such as a DaemonSet, which ensures that exactly one pod runs on every node.
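
For reference, a minimal DaemonSet sketch; the names, labels and image tag below are illustrative and not the values the Traefik Helm chart uses (the chart itself may also offer a way to switch the workload kind, so check its documentation before hand-writing a manifest):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik
  namespace: traefik
spec:
  selector:
    matchLabels:
      app: traefik              # illustrative label
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - name: traefik
          image: traefik:v2.9   # illustrative tag
          ports:
            - containerPort: 80
            - containerPort: 443

Kubernetes then keeps exactly one such pod on every schedulable node, so there is no replica count to manage.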