
I'm trying to benchmark the load that two different deployments put on the Kubernetes API server. I have tried the following two approaches:

  1. Evaluating the Prometheus metric apiserver_request_total. Unfortunately, this does not take into account how much load a request actually causes; every API request is counted the same (the kind of query I evaluated is sketched after this list).
  2. In a Minikube cluster I've used previously, the API server was observable as a pod named kube-apiserver, so its CPU usage could be read like any other pod's (see the second sketch after this list). However, this doesn't seem to be the case in GKE:
> kubectl get pods --namespace kube-system
NAME                                              READY   STATUS    RESTARTS   AGE
event-exporter-gke-857959888b-lnk6x               2/2     Running   0          4d
fluentbit-gke-tjrjd                               2/2     Running   0          4h18m
gke-metrics-agent-jwpcs                           1/1     Running   0          4h18m
konnectivity-agent-78cc49969f-sf749               1/1     Running   0          4d
konnectivity-agent-autoscaler-7b4cb89b88-ljdqp    1/1     Running   0          4d
kube-dns-55d79c844b-mgqtv                         4/4     Running   0          4d
kube-dns-autoscaler-9f89698b6-5kwrv               1/1     Running   0          4d
kube-proxy-gke-lm-bachelor-pool-2-45f1ac8e-8jlz   1/1     Running   0          4h17m
l7-default-backend-58fd4695c8-njbwz               1/1     Running   0          4d
metrics-server-v0.5.2-6bf845b67f-6qp6v            2/2     Running   0          4d
pdcsi-node-wzhh4
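
For reference, this is roughly the kind of PromQL I evaluated for approach 1. It is only a sketch; the exact labels available depend on the Kubernetes version and on how Prometheus scrapes the apiserver, and it still weighs every request equally:

# Request rate against the API server, broken down by verb and resource.
# This counts requests but says nothing about how expensive each one is.
sum(rate(apiserver_request_total[5m])) by (verb, resource)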

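On the Minikube cluster, something like the following gave me the API server's CPU usage. This is a sketch: it assumes metrics-server is installed and that the control-plane static pods carry the usual component=kube-apiserver label:

# Read the API server's CPU/memory like any other pod (works on Minikube,
# where the control plane runs as static pods in kube-system).
kubectl top pod -n kube-system -l component=kube-apiserver
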
Is there any way to access CPU usage metrics for the Kubernetes API server in a GKE cluster (for example, by exposing the API server as a pod)?
