In the Kubernetes docs on Building large clusters we can read that, as of v1.17:
Kubernetes supports clusters with up to 5000 nodes. More specifically, we support configurations that meet all of the following criteria:
- No more than 5000 nodes
- No more than 150000 total pods
- No more than 300000 total containers
- No more than 100 pods per node
Inside GKE there is a hard limit of 110 pods per node, because of the number of available IP addresses.
With the default maximum of 110 Pods per node, Kubernetes assigns a /24 CIDR block (256 addresses) to each of the nodes. By having approximately twice as many available IP addresses as the number of pods that can be created on a node, Kubernetes is able to mitigate IP address reuse as Pods are added to and removed from a node.
This is described in Optimizing IP address allocation and Quotas and limits.
As for setting max pods in Rancher, here is a solution: [Solved] Setting Max Pods.
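The approach from that thread boils down to passing `--max-pods` to the kubelet through the cluster configuration. A sketch of an RKE `cluster.yml` fragment, assuming the `services.kubelet.extra_args` convention (the value 150 is only an example):

```yaml
# cluster.yml (fragment) -- raises the kubelet's pod limit per node
services:
  kubelet:
    extra_args:
      max-pods: "150"
```

Keep in mind that raising this value only makes sense if the node's CIDR allocation (see above) and its CPU/memory can actually accommodate the extra pods.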
There is also a discussion about Increase maximum pods per node, which notes:
...
using a single number (max pods) can be misleading for the users, given the huge variation in machine specs, workload, and environment. If we have a node benchmark, we can let users profile their nodes and decide what is the best configuration for them. The benchmark can exist as a node e2e test, or in the contrib repository.
I hope this provides a bit more insight into the limits.