
I want to know whether this is the default behaviour or something wrong with my setup.

I have 150 worker nodes running on Kubernetes.

Using a nodeSelector, I made a set of 10 worker nodes run only a specific deployment, and I created a Service (type=LoadBalancer) for it. When the load balancer was created, all 150 Kubernetes workers were registered with it, while I was expecting to see only the 10 workers backing this deployment/service.

It behaved the same with the alb-ingress-controller and an AWS NLB.

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
  type: LoadBalancer

And the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 10
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: master-api
        image: private/my-app:prod
        resources:
          requests:
            memory: 8000Mi
        ports:
        - containerPort: 8080
      nodeSelector:
        role: api 

I had already labeled the 10 worker nodes with role=api; those 10 run only pods of this deployment, and no other worker is running this service. I also don't have another service or container using port 8080.

Eltorrooo

1 Answer


This is normal. When Kubernetes configures the load balancer, it registers all of the nodes as part of the backend pool. The load balancer doesn't know or care which nodes are running which pods; think of it as routing traffic to the cluster as a whole rather than to particular workloads. Once the traffic arrives at a node, kube-proxy routes it to the correct pods based on the current state of the cluster and its workloads.
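If the intent is for external traffic to reach only the nodes that actually run the pods, one option is externalTrafficPolicy: Local, sketched below against the Service manifest from the question. With that setting, the cloud load balancer's health check only passes on nodes hosting a pod of the Service, so the remaining nodes stop receiving traffic, even though they may still appear registered as targets.

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
  type: LoadBalancer
  # Only nodes running a my-app pod pass the load balancer health check
  # and receive external traffic; client source IPs are also preserved.
  externalTrafficPolicy: Local

The trade-off is that kube-proxy no longer spreads incoming traffic across all nodes, so uneven pod placement can translate into uneven load.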

Matt Zimmerman