I'm running a small Kubernetes cluster with one master and two worker nodes. I'm trying to understand its memory usage, whether I've exhausted its resources, and if so, how I should detect that accurately.

The nodes have 4 GB of memory each and no swap (per best practices). Looking at one of the nodes, the containers are using 16 GB of memory according to Docker. How is this possible?

khost1:~$ docker stats --no-stream --format 'table {{.MemUsage}}' | sed 's/\.\([0-9]*\)GiB/\1MiB/g' | sed 's/[A-Za-z]*//g' | awk '{sum += $1} END {print sum "MB"}'
16436.8MB
khost1:~$ free
              total        used        free      shared  buff/cache   available
Mem:        4039552     3255808      234400       54336      549344      473648
Swap:             0           0           0

Also, all I'm seeing on the Kubernetes Dashboard is that 3.6 GB of 11.3 GB (across all three nodes?) has been reserved. I assume this is because my pods mostly aren't specifying requests and limits. Am I required to do so for Kubernetes to manage memory effectively?

1 Answer

docker stats reports pages that are used for disk caching as well as pages that are actually in use, which is a bit misleading. Cached pages are reclaimable, so those per-container figures don't reflect real memory pressure the way the used/available columns of free do.
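To see how much of a container's reported usage is page cache, you can read the cgroup's memory.stat file directly. This is a rough sketch assuming cgroup v1 with containers under the /sys/fs/cgroup/memory/docker/ hierarchy (on a kubelet-managed node they may sit under a kubepods/ hierarchy instead); <container> is a placeholder for a real container name or ID:

CID=$(docker inspect --format '{{.Id}}' <container>)
# rss = anonymous memory actually in use; cache = reclaimable page cache
grep -E '^(cache|rss) ' "/sys/fs/cgroup/memory/docker/$CID/memory.stat"

If the cache value dominates, the container isn't really holding on to that memory.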

If your project is sensitive to resource utilization, you should definitely set the resource requests and limits parameters on your pods.
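A minimal sketch of a pod spec with both set; the name, image, and values here are placeholders to adapt:

apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"

The request is what the scheduler reserves per node (and what the dashboard's "reserved" figure is built from, which is why yours looks so low); the limit is the hard cap enforced at runtime.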

To protect your cluster from running out of resources, you might also want to enable Resource Quotas.
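For example, a namespace-wide quota capping aggregate requests and limits might look like this (the name, namespace, and amounts are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota
  namespace: default
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

Apply it with kubectl apply -f quota.yaml. Note that once a compute quota exists, every pod in that namespace must specify requests and limits (or inherit them from a LimitRange), or it will be rejected.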

Once you enable it, you can check the aggregate resource consumption of the pods in the namespace against the quota:

kubectl describe quota
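With a quota like the one above in place, the output looks roughly like this (values are illustrative):

Name:            mem-cpu-quota
Namespace:       default
Resource         Used   Hard
--------         ----   ----
limits.cpu       500m   2
limits.memory    256Mi  2Gi
requests.cpu     250m   1
requests.memory  128Mi  1Gi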