
I am busy setting up a new k8s cluster.

I am using RKE with max-pods: 200 set as a kubelet extra arg:

kubelet: # https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-args
  extra_args:
    max-pods: 200  # https://forums.rancher.com/t/solved-setting-max-pods/11866/5
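For context, assuming the snippet above sits under `services` in cluster.yml, my understanding is that the change is applied by re-running RKE:

# Apply the updated cluster.yml; RKE reconfigures the kubelet on each node
rke up --config cluster.yml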

How do I check if the running node has been created with the correct settings?

– nelaaro

4 Answers


In the Kubernetes docs on Building large clusters we can read that, as of v1.17:

Kubernetes supports clusters with up to 5000 nodes. More specifically, we support configurations that meet all of the following criteria:

  • No more than 5000 nodes
  • No more than 150000 total pods
  • No more than 300000 total containers
  • No more than 100 pods per node

Inside GKE the hard limit of pods per node is 110 because of available IP addresses.

With the default maximum of 110 Pods per node, Kubernetes assigns a /24 CIDR block (256 addresses) to each of the nodes. By having approximately twice as many available IP addresses as the number of pods that can be created on a node, Kubernetes is able to mitigate IP address reuse as Pods are added to and removed from a node.

This is described in Optimizing IP address allocation and Quotas and limits.
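To see the CIDR block Kubernetes actually assigned to a node (and hence the address budget behind its pod limit), you can read the node's spec; a minimal check, assuming a standard node-CIDR setup:

# Show the pod CIDR assigned to a node; a /24 gives 256 addresses,
# roughly twice the default 110-pod capacity
kubectl get node <node_name> -o jsonpath='{.spec.podCIDR}{"\n"}'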

As for setting max pods in Rancher, here is a solution: [Solved] Setting Max Pods.

There is also a discussion about increasing the maximum pods per node:

... using a single number (max pods) can be misleading for the users, given the huge variation in machine specs, workload, and environment. If we have a node benchmark, we can let users profile their nodes and decide what is the best configuration for them. The benchmark can exist as a node e2e test, or in the contrib repository.

I hope this provides a bit more insight into the limits.

– Crou

The following command will return the maximum pods value for <node_name>:

kubectl get node <node_name> -ojsonpath='{.status.capacity.pods}{"\n"}'

edit: fixed typo in my command, thanks @Shtlzut.
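If you want to check every node at once rather than one by one, a variant of the same idea using kubectl's custom-columns output:

# List the pod capacity of all nodes in one shot
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAX_PODS:.status.capacity.pods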

– Alexey S.

I found this to be the best way:

kubectl get nodes
NAME            STATUS   ROLES                      AGE   VERSION
192.168.1.1   Ready    controlplane,etcd,worker   9d    v1.17.2
192.168.1.2   Ready    controlplane,etcd,worker   9d    v1.17.2

kubectl describe nodes 192.168.1.1 | grep -i -A 5 "capacity\|allocatable"

Capacity:
  cpu:                16
  ephemeral-storage:  55844040Ki
  hugepages-2Mi:      0
  memory:             98985412Ki
  pods:               110
Allocatable:
  cpu:                16
  ephemeral-storage:  51465867179
  hugepages-2Mi:      0
  memory:             98883012Ki
  pods:               110
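If you want to confirm what the running kubelet itself was started with, rather than what the API server reports, one option is the node proxy configz endpoint; a sketch, assuming cluster-admin access and jq installed:

# Ask the kubelet on a node for its effective configuration and pull out maxPods
kubectl get --raw "/api/v1/nodes/192.168.1.1/proxy/configz" | jq '.kubeletconfig.maxPods'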
– nelaaro
  1. The kubelet default max pods per node is 110.
  2. Kubernetes best practice is 100 pods per node.
  3. OpenShift sets the default value to 250 pods per node and has tested 500 pods per node.
  4. EKS points out that it is limited by the number of IPs that pods can use, and Azure CNI has similar limitations. The default node pod CIDR is /24, which means fewer than 256 usable addresses (see the sketch after this list).
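To make the EKS point concrete: AWS documents a per-instance-type pod limit derived from ENI capacity. A minimal sketch of that arithmetic, using m5.large's published values (3 ENIs, 10 IPv4 addresses per ENI) as the example:

# EKS max-pods formula from the AWS eni-max-pods documentation:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# For an m5.large (3 ENIs, 10 IPs each):
echo $(( 3 * (10 - 1) + 2 ))   # prints 29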

Refer to the table here https://www.stackrox.com/post/2020/02/eks-vs-gke-vs-aks/ and https://learnk8s.io/kubernetes-node-size.


Most managed Kubernetes services even impose hard limits on the number of pods per node:

  • On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the node type and ranges from 4 to 737.
  • On Google Kubernetes Engine (GKE), the limit is 100 pods per node, regardless of the type of node.
  • On Azure Kubernetes Service (AKS), the default limit is 30 pods per node but it can be increased up to 250.

– paco alcacer