
Brand new to kubernetes here. I suspect there could be a simple answer to this.

Is there a way to disable resource quotas at the cluster level, or at least reduce the resources requested by the kube-system pods, for very small clusters on Google Kubernetes Engine? I would like a kubernetes cluster that is publicly available on a cloud provider rather than minikube on my laptop, but I have absolutely no HA concerns and don't expect more than a handful of people to be using services on it.

When bringing up a single node on GKE, the kube-system pods request over 70% of the CPU while actually using less than 1%. I grudgingly brought up a second node, despite only using 1% of the CPU on the first, and found over 50% of its CPU reserved as well. kube-dns, for example, requests 27% of the CPU on each node. For pods I define by hand I can simply omit CPU requests, but helm charts written by others almost always fail to schedule due to insufficient CPU resources.
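(As a workaround for the helm case: many charts expose their resource requests through `values.yaml`, so they can often be lowered at install time without editing the chart. A hypothetical override file — the exact key path varies per chart, so check the chart's own `values.yaml` first:

```yaml
# custom-values.yaml — illustrative only; the key path under which a
# chart exposes its resource requests depends on the chart author.
resources:
  requests:
    cpu: 10m      # small enough to schedule on a mostly-reserved node
    memory: 64Mi
```

installed with something like `helm install -f custom-values.yaml <chart>`.)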

digitaladdictions

1 Answer


You can edit the various deployments that are created by GKE, but they may revert in the future (for example, when upgrading your cluster).

To see all of the internal deployments:

$ kubectl get deployments --namespace kube-system
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
event-exporter-v0.1.9   1         1         1            1           20d
fluentd-gcp-scaler      1         1         1            1           15d
heapster-v1.5.2         1         1         1            1           20d
kube-dns                1         1         1            1           210d
kube-dns-autoscaler     1         1         1            1           210d
l7-default-backend      1         1         1            1           210d
metrics-server-v0.2.1   1         1         1            1           96d

And then to edit one of them:

$ kubectl edit deployment/kube-dns --namespace kube-system

Then edit the `resources` section of the container spec. You can reduce the resource allocations, or delete them altogether. But you may see instability if your cluster becomes over-allocated.
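Inside the editor, the container spec contains a block along these lines; the values shown here are illustrative, not the exact GKE defaults:

```yaml
# Excerpt of a deployment's container spec (values are examples only).
containers:
- name: kubedns
  resources:
    requests:
      cpu: 100m      # lower this (e.g. to 10m) to free up schedulable CPU
      memory: 70Mi
    limits:
      memory: 170Mi
```

Lowering (or removing) the `requests` values is what frees up schedulable capacity; `limits` only cap actual usage.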

robbyt
  • I've tried this, but the modifications were reverted a day later for no apparent reason. – eug Oct 10 '18 at 00:24