
I have a Kubernetes cluster with 4 nodes and around 100 pods, and kube-apiserver is started with the flag --target-ram-mb=512.

kube-apiserver consumes ~3 GB of RAM and the usage keeps growing. Here is the top of its heap profile:

(pprof) top
Showing nodes accounting for 1.42GB, 82.29% of 1.73GB total
Dropped 628 nodes (cum <= 0.01GB)
Showing top 10 nodes out of 174
      flat  flat%   sum%        cum   cum%
    1.15GB 66.53% 66.53%     1.15GB 66.53%  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/cacher.newCacheWatcher
    0.11GB  6.10% 72.63%     0.11GB  6.10%  bufio.NewWriterSize
    0.03GB  1.74% 74.37%     0.03GB  1.74%  k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.newCounters
    0.03GB  1.47% 75.84%     0.05GB  2.88%  runtime.systemstack
    0.02GB  1.41% 77.25%     0.02GB  1.41%  runtime.malg
    0.02GB  1.31% 78.56%     0.02GB  1.31%  k8s.io/kubernetes/vendor/github.com/beorn7/perks/quantile.newStream
    0.02GB  1.30% 79.86%     0.02GB  1.30%  net/http.(*Request).WithContext
    0.02GB   0.9% 80.77%     0.06GB  3.46%  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.serveWatch
    0.01GB  0.79% 81.56%     0.02GB  1.16%  k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/schemaconv.(*convert).VisitKind
    0.01GB  0.73% 82.29%     0.01GB  0.73%  net/textproto.MIMEHeader.Set
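
For reference, a fresh heap profile like the one above can be pulled from the apiserver's /debug/pprof/heap endpoint (available when --profiling is enabled, which is the default) so that two snapshots taken some time apart can be compared. The sketch below is only illustrative: APISERVER and TOKEN are placeholder environment variables, the token is assumed to have RBAC access to the non-resource URL /debug/pprof/*, and TLS verification is skipped to keep it short.

```go
// Minimal sketch: download the kube-apiserver heap profile to heap.pb.gz
// so it can be inspected or diffed with `go tool pprof`.
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	apiserver := os.Getenv("APISERVER") // e.g. https://10.0.0.1:6443 (placeholder)
	token := os.Getenv("TOKEN")         // bearer token with /debug/pprof/* access (placeholder)

	// InsecureSkipVerify keeps the sketch short; use the cluster CA in practice.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, err := http.NewRequest("GET", apiserver+"/debug/pprof/heap", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("heap.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote heap.pb.gz; inspect with: go tool pprof heap.pb.gz")
}
```

Two snapshots taken a few hours apart can then be diffed with `go tool pprof -base old.pb.gz heap.pb.gz` to see which allocations are actually growing.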

What else can I check?

Thanks

user2265148

1 Answer


Unfortunately, there is a known bug related to this on GitHub which is still open:

https://github.com/kubernetes/kubernetes/pull/85410

It seems you also opened an issue on GitHub that was linked to that bug.
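
Since the heap profile is dominated by newCacheWatcher allocations, it can also help to confirm that the number of registered watchers keeps climbing over time rather than staying proportional to your pod/node count. A minimal sketch of one way to do that is below, assuming your apiserver version exports a watcher-count gauge named apiserver_registered_watchers; APISERVER and TOKEN are placeholders, and the token is assumed to have access to the /metrics endpoint.

```go
// Minimal sketch, not an official diagnostic: periodically scrape the
// apiserver /metrics endpoint and print the watcher-count gauge lines
// so you can see whether the number of cache watchers keeps growing.
package main

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"os"
	"strings"
	"time"
)

func scrape(client *http.Client, apiserver, token string) {
	req, err := http.NewRequest("GET", apiserver+"/metrics", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := client.Do(req)
	if err != nil {
		log.Println("scrape failed:", err)
		return
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // metrics lines can be long
	for sc.Scan() {
		line := sc.Text()
		// Metric name is an assumption; adjust to whatever watcher gauge
		// your apiserver version exposes.
		if strings.HasPrefix(line, "apiserver_registered_watchers") {
			fmt.Println(time.Now().Format(time.RFC3339), line)
		}
	}
}

func main() {
	apiserver := os.Getenv("APISERVER") // e.g. https://10.0.0.1:6443 (placeholder)
	token := os.Getenv("TOKEN")         // bearer token with /metrics access (placeholder)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // use the cluster CA in practice
	}}

	for {
		scrape(client, apiserver, token)
		time.Sleep(30 * time.Second)
	}
}
```

If the reported watcher counts climb steadily while the number of pods and nodes stays flat, that matches the cacheWatcher growth visible in your heap profile and points at the same leak tracked in the linked issue.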

acid_fuji