
I'm looking at this guide for deploying a Kubernetes cluster to AWS, and it says:

For the master, for clusters of less than 5 nodes it will use an m3.medium, for 6-10 nodes it will use an m3.large; for 11-100 nodes it will use an m3.xlarge.

For context, an m3.medium has 3.75 GB of memory and one vCPU.

My understanding is that the master node just monitors and controls the scaling up or down of pods and nodes.

I don't see why such a large node is recommended/required.

dwjohnston

1 Answer


Master nodes run the etcd database, which stores not only the current definitions of the objects in the Kubernetes API but also things like event history, and etcd is very sensitive to latency. Apart from that, they also run the Kubernetes apiserver, controller-manager and scheduler at a bare minimum. While in a small cluster it is possible to run the control plane on smaller instances (I actually managed to get away with a t2.large on one of my setups), the recommendation takes into account the load that can be generated in heavier scenarios.
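
If you want to see for yourself what the control plane runs and how much it consumes, you can inspect the kube-system namespace, and if the guide you are following is kops-based (an assumption on my part), you can override the default master size instead of relying on the heuristic you quoted. A rough sketch, assuming kops and a metrics add-on (metrics-server or Heapster) for `kubectl top`; the cluster name and zone are placeholders:

```sh
# List the control plane components (apiserver, controller-manager,
# scheduler, etcd, ...) running in the kube-system namespace.
kubectl get pods -n kube-system -o wide

# Show their actual CPU/memory usage (requires metrics-server or Heapster).
kubectl top pods -n kube-system

# Assuming the guide uses kops: pick the master instance type explicitly
# instead of letting kops choose one based on the node count.
kops create cluster \
  --name=my-cluster.example.com \
  --zones=us-east-1a \
  --master-size=t2.large \
  --node-size=m3.medium \
  --node-count=3
```

Just keep an eye on the masters' CPU, memory and etcd latency if you go smaller than the recommendation; the defaults are conservative precisely so the control plane keeps up when the cluster gets busy.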