
kubeadm creates certificates for the Kubernetes control plane that are valid for one year. They will be renewed on every Kubernetes upgrade. Since it is definitely a good idea to update a Kubernetes cluster at least once per year, this should lead to never-expiring certificates.

However, we have some Kubernetes clusters running in air-gapped environments (absolutely no Internet connection) where there is no guarantee that they will ever see updates. Certificates expiring within one year are not acceptable in such environments. Extending the certificate lifetime would be one idea to remedy this setup, but automatically renewing the certificates appears to be the better solution. This can easily be done with kubeadm alpha certs renew all (Kubernetes 1.15) run by cron or a systemd timer on every master node.

I have noticed that the API server, controller manager, and scheduler do not pick up the new certificates. Is there any way to notify these components of the new certificates? Even destroying the Pods is not that simple because the control plane Pods are static and kubectl delete pod just deletes the mirror Pod but does not kill the containers. Some error-prone docker |grep … could do the job, but I am wondering if there is an API call or a smarter way to do it. I have not found any further docs on this topic.

Stephan

1 Answer


I thought there was an existing ticket for that behavior, but I could only find the one for kubelet and kube-proxy.

The short version is that for as long as that behavior has been "TBD," I wouldn't expect it to be fixed anytime in the near future. If your cluster is a multi-master HA configuration, I would expect it would be safe to restart the control plane pods in a rolling fashion. The process which runs kubeadm alpha certs renew all can restart the machine to get out of the business of selectively bouncing individual docker containers.

Having said that, I wouldn't say that identifying the control plane docker containers is "error-prone," since kubelet labels the docker containers with labels that match the pod name and namespace, enabling one to trivially filter for the containers which make up the control plane and kill only them:

# Kill the static control plane containers on this node; kubelet recreates
# them from the manifests in /etc/kubernetes/manifests with the new certs.
for comp_name in kube-apiserver kube-controller-manager kube-scheduler; do
  # Match on the container name: the static pod name carries the node name
  # as a suffix (kube-apiserver-<node>), so pod.name would not match exactly.
  for c_id in $(docker ps -q \
        --filter "label=io.kubernetes.pod.namespace=kube-system" \
        --filter "label=io.kubernetes.container.name=${comp_name}"); do
    docker kill "$c_id"
    docker rm   "$c_id"
  done
done
mdaniel
  • `--filter "label=io.kubernetes.container.name=${comp_name}")` should be used, as the pod name has a postfix added – itiic Dec 21 '21 at 11:37
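The whole procedure the question and answer describe (renew on a timer, bounce the static control plane containers, confirm the API server actually serves the new certificate), as an added example script that is not part of the original thread or of kubeadm. It assumes the kubeadm 1.15 syntax `kubeadm alpha certs renew all`, Docker as the container runtime, the default `/etc/kubernetes` layout, and an API server listening on 127.0.0.1:6443; the `RENEW_WITHIN_DAYS` threshold, the `--run` guard and the function names are this example's own choices.

```shell
#!/bin/sh
# Added example (not from the original thread): run from cron or a systemd
# timer on every master node. Renews the kubeadm-managed certificates when
# any of them expires within RENEW_WITHIN_DAYS, kills the control plane
# containers so kubelet recreates them from the static pod manifests, and
# verifies the API server now presents the certificate that is on disk.
# Assumptions: kubeadm 1.15 CLI, Docker runtime, default /etc/kubernetes
# layout, API server on 127.0.0.1:6443 (all assumptions of this example).
set -eu

PKI_DIR=${PKI_DIR:-/etc/kubernetes/pki}
ADMIN_CONF=${ADMIN_CONF:-/etc/kubernetes/admin.conf}
RENEW_WITHIN_DAYS=${RENEW_WITHIN_DAYS:-90}
COMPONENTS="kube-apiserver kube-controller-manager kube-scheduler"

# docker ps filters selecting the containers of one control plane component.
# kubelet labels every container with the pod namespace and the container
# name; the static pod name carries the node name as a suffix
# (kube-apiserver-<node>), so matching on io.kubernetes.pod.name would miss.
component_filters() {
    printf '%s %s %s %s' '--filter' 'label=io.kubernetes.pod.namespace=kube-system' \
        '--filter' "label=io.kubernetes.container.name=$1"
}

# Exit 0 if any certificate under PKI_DIR expires within RENEW_WITHIN_DAYS
# (openssl -checkend returns 1 once the cert is within the window).
renewal_due() {
    for crt in "$PKI_DIR"/*.crt "$PKI_DIR"/etcd/*.crt; do
        [ -f "$crt" ] || continue
        if ! openssl x509 -noout -in "$crt" -checkend $((RENEW_WITHIN_DAYS * 86400)); then
            echo "renewal due: $crt"
            return 0
        fi
    done
    return 1
}

# SHA-256 fingerprint of a PEM certificate file.
cert_fingerprint() {
    openssl x509 -noout -fingerprint -sha256 -in "$1" | cut -d= -f2
}

# SHA-256 fingerprint of the certificate the local API server actually serves.
served_fingerprint() {
    openssl s_client -connect 127.0.0.1:6443 </dev/null 2>/dev/null \
        | openssl x509 -noout -fingerprint -sha256 | cut -d= -f2
}

# Kill (not stop) the component's containers; kubelet restarts static pods
# by itself. The filters are left unquoted on purpose so each --filter
# becomes its own argument.
restart_component() {
    # shellcheck disable=SC2046
    for id in $(docker ps -q $(component_filters "$1")); do
        docker kill "$id"
    done
}

# Poll /healthz through the admin kubeconfig until the API server is back.
wait_for_apiserver() {
    i=0
    until kubectl --kubeconfig "$ADMIN_CONF" get --raw /healthz >/dev/null 2>&1; do
        i=$((i + 1))
        [ "$i" -lt 60 ] || { echo "apiserver did not come back" >&2; return 1; }
        sleep 2
    done
}

main() {
    if ! renewal_due; then
        echo "no certificate expires within $RENEW_WITHIN_DAYS days"
        return 0
    fi
    before=$(served_fingerprint)
    kubeadm alpha certs renew all
    for comp in $COMPONENTS; do
        restart_component "$comp"
        [ "$comp" != kube-apiserver ] || wait_for_apiserver
    done
    after=$(served_fingerprint)
    ondisk=$(cert_fingerprint "$PKI_DIR/apiserver.crt")
    if [ "$after" = "$ondisk" ] && [ "$after" != "$before" ]; then
        echo "renewed: apiserver now serves $after"
    else
        echo "apiserver still serves $after (on disk: $ondisk)" >&2
        return 1
    fi
}

# Only act when invoked as `renew-certs.sh --run`, so sourcing is side-effect free.
if [ "${1:-}" = "--run" ]; then
    main
fi
```

The fingerprint comparison at the end is the check the question is really after: a renewed file on disk proves nothing until the running kube-apiserver presents it, and a restart that silently failed (wrong label, a runtime other than Docker) would otherwise go unnoticed until the old certificate expires. On a multi-master cluster, schedule the timer so that only one node runs this at a time, since it briefly takes the local API server down. `kubeadm alpha certs renew all` also renews the etcd certificates on a stacked control plane, so the etcd static pod needs the same restart treatment, which this example does not cover.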