My understanding of why this happens by default is that, at the scale Kubernetes is designed for, there are usually so many nodes and so many pod replicas that this scenario is highly unlikely; for those of us on smaller clusters, however, it is very annoying. An example pod anti-affinity is shown below. Note that the value in the match expression is automatically injected by Octopus Deploy when we deploy; you would need to inject a suitable label yourself.
```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: Octopus.Deployment.Id
            operator: In
            values:
            - "deployments-38119"
        topologyKey: kubernetes.io/hostname
      weight: 100
```
Edit: I also noticed that you are deploying a proxy, in which case you might want to consider another workload type such as a DaemonSet, which ensures that exactly one pod runs on every node.
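For illustration, here is a minimal DaemonSet sketch; the names and image are placeholders, not from your deployment. Because the DaemonSet controller places one pod per node, no anti-affinity rules are needed to spread the replicas:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-proxy          # placeholder name
spec:
  selector:
    matchLabels:
      app: my-proxy
  template:
    metadata:
      labels:
        app: my-proxy
    spec:
      containers:
      - name: proxy
        image: nginx:1.25  # placeholder proxy image
        ports:
        - containerPort: 80
```

Note that a DaemonSet scales with the cluster: adding a node automatically gets it a proxy pod, and draining a node removes it.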