
I've installed a kubernetes (v1.20.0) cluster with 3 masters and 3 nodes using kubeadm init and kubeadm join, all on Ubuntu 20.04. Now I need to update the configuration and

  • Add --cloud-provider=external kubelet startup flag on all nodes as I'm going to use vsphere-csi-driver
  • Change the --service-cidr due to network requirements

However I'm not entirely sure what is the proper way of making these changes.

Kubelet

Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf there is a reference to /etc/default/kubelet, but the file is described as a last resort and the comments recommend updating .NodeRegistration.KubeletExtraArgs instead:

...
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
...
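The interplay between these files can be sketched like this (a simulation only: it writes stand-in files to a temp directory instead of the real paths, and the ExecStart expansion is simplified):

```shell
# Minimal simulation of how systemd composes the kubelet command line
# from two environment files: /var/lib/kubelet/kubeadm-flags.env
# (written by kubeadm at init/join time) and /etc/default/kubelet
# (the "last resort" user override).
tmp=$(mktemp -d)

# stand-in for /var/lib/kubelet/kubeadm-flags.env
echo 'KUBELET_KUBEADM_ARGS="--container-runtime=remote"' > "$tmp/kubeadm-flags.env"

# stand-in for /etc/default/kubelet
echo 'KUBELET_EXTRA_ARGS="--cloud-provider=external"' > "$tmp/kubelet"

# The drop-in's ExecStart roughly expands to:
#   kubelet ... $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
# (extra args come last, so they win on conflicting flags)
. "$tmp/kubeadm-flags.env"
. "$tmp/kubelet"
echo "kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS"
# prints: kubelet --container-runtime=remote --cloud-provider=external

rm -rf "$tmp"
```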

Where is this .NodeRegistration.KubeletExtraArgs and how do I change it for all nodes in the cluster?

control-plane

From what I understand the apiserver and controller-manager are run as static pods on each master and reading their configuration from /etc/kubernetes/manifests/kube-<type>.yaml. My first thought was to make necessary changes to these files, however according to the kubernetes docs on upgrading a kubeadm cluster, kubeadm will:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Optionally backs up the kube-apiserver certificate.
  • Upgrades the static Pod manifests for the control plane components.

Because I've changed the manifests manually they are not updated in the ClusterConfiguration (kubectl -n kube-system get cm kubeadm-config -o yaml), so would my changes survive an upgrade this way? I suppose I could also edit the ClusterConfiguration manually with kubectl edit cm ... but this seems error prone and it's easy to forget changing it every time.

According to the docs there is a way to customize control-plane configuration but that seems to apply only when installing the cluster for the first time. For example, kubeadm config print init-defaults, as the name suggests, only gives me the default values, not what's currently running in the cluster. Attempting to extract the ClusterConfiguration from kubectl -n kube-system get cm kubeadm-config -o yaml and run kubeadm init --config <config> fails in all kinds of ways because the cluster is already initialized.

Kubeadm can run init phase control-plane which updates the static pod manifests but leaves the ClusterConfiguration untouched, so I would need to run the upload-config phase as well.

Based on the above, the workflow seems to be

  • Extract the ClusterConfiguration from kubectl -n kube-system get cm kubeadm-config and save it to a yaml file
  • Modify the yaml file with whatever changes you need
  • Apply changes with kubeadm init phase control-plane all --config <yaml>
  • Upload modified config kubeadm init phase upload-config all --config <yaml>
  • Distribute the modified yaml file to all masters
  • For each master, apply with kubeadm init phase control-plane all --config <yaml>
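In command form the workflow above would look something like this (an untested sketch; it assumes a live kubeadm v1.20 cluster and that the file name clusterconfig.yaml is arbitrary):

```
# 1. extract the ClusterConfiguration (note: kubectl, not kubeadm)
kubectl -n kube-system get cm kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > clusterconfig.yaml

# 2. edit clusterconfig.yaml as needed, then on the first master:
kubeadm init phase control-plane all --config clusterconfig.yaml
kubeadm init phase upload-config all --config clusterconfig.yaml

# 3. copy clusterconfig.yaml to the remaining masters and run on each:
kubeadm init phase control-plane all --config clusterconfig.yaml
```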

What concerns me here is the apparent disconnect between the static pod manifests and the ClusterConfiguration. Changes aren't made particularly often, so it's easy to forget that a change in one place also requires a manual change in the other.

Is there no way of updating the kubelet and control-plane settings that ensures consistency between the Kubernetes components and kubeadm? I'm still quite new to Kubernetes and there is a lot of documentation around it, so I'm sorry if I've missed something obvious here.

2 Answers


I will try to address both of your questions.


1. Add --cloud-provider=external kubelet startup flag on all nodes

Where is this .NodeRegistration.KubeletExtraArgs and how do I change it for all nodes in the cluster?

KubeletExtraArgs can contain any arguments and parameters supported by kubelet; they are documented here. You modify them by passing the proper flags to the kubelet command. Also, notice that the flag you are about to use is going to be removed in k8s v1.23:

--cloud-provider string The provider for cloud services. Set to empty string for running with no cloud provider. If set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). (DEPRECATED: will be removed in 1.23, in favor of removing cloud provider code from Kubelet.)

EDIT:

To better address your question regarding: .NodeRegistration.KubeletExtraArgs

These are also elements of the kubeadm init configuration file:

It's possible to configure kubeadm init with a configuration file instead of command line flags, and some more advanced features may only be available as configuration file options. This file is passed using the --config flag and it must contain a ClusterConfiguration structure and optionally more structures separated by ---\n Mixing --config with other flags may not be allowed in some cases.

You can also find more details regarding the NodeRegistrationOptions as well as more information on the fields and usage of the configuration.
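For reference, in a kubeadm config file (v1beta2 API, used by v1.20) the field appears roughly as below; the cloud-provider value matches the question, everything else is illustrative:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
```

A JoinConfiguration carries the same nodeRegistration structure for nodes joining the cluster.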

Also, note that:

KubeletExtraArgs passes through extra arguments to the kubelet. The arguments here are passed to the kubelet command line via the environment file kubeadm writes at runtime for the kubelet to source. This overrides the generic base-level configuration in the kubelet-config-1.X ConfigMap. Flags have higher priority when parsing. These values are local and specific to the node kubeadm is executing on.

EDIT2:

kubeadm init is supposed to be used only once, when creating a cluster, whether you use it with flags or a config file. You cannot change the configs by executing it again with different values. Here you will find info regarding kubeadm and its usage. Once the cluster is set up, kubeadm should be dropped and changes made directly to the static pod manifests.


2. Change the --service-cidr due to network requirements

This is more complicated. You could try to do it similarly to what is described here or here, but that approach is prone to mistakes and rather not recommended.

The more feasible and safer way is to simply recreate the cluster with kubeadm reset and a fresh kubeadm init with the new --service-cidr value. Changing the CIDRs in place was never an expected operation from the Kubernetes perspective. So in short, kubeadm reset is the way to go here.
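In outline this looks like the following (destructive: it wipes the kubeadm-managed state on each node, and the elided init flags depend on your original setup):

```
# on every node, tear down what kubeadm set up
kubeadm reset

# then re-initialize the first control-plane node with the new range
kubeadm init --service-cidr <new-cidr> ...

# and re-join the remaining masters and workers with kubeadm join
```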

  • I understand that `KubletExtraArgs` refers to the kubelet command line arguments, what I don't understand is where this attribute is and how I can modify it. `.NodeRegistration.KubeletExtraArgs` seems to refer to an object within Kubernetes, like a ConfigMap or similar. I just can't find it. "The more feasible and safer way would be to simply recreate the cluster" - I'm not sure I agree with this. How is completely destroying a running cluster the more feasible option? – Diddi Oskarsson Jan 13 '21 at 15:16
  • @DiddiOskarsson Thank you for the feedback. I have edited my initial answer to better address your question. As for the second part (the CIDRs change) I know that it might not be something that you wanted to hear in the first place but recreating the cluster with new `kubeadm init` parameters is the way to go here. Executing `kubeadm init` with `--service-cidr` populates every k8s config on every node. Changing it is possible but is not recommended and prone to errors as doing so that way was not an expected behavior. Making such changes manually might not even work properly. – Wytrzymały Wiktor Jan 14 '21 at 13:36
  • I appreciate the edit, however the linked docs seems to refer to configuring kubelet for the first time on a node when joining it (or initializing in the case of first master). I tried anyway to create an init config and `kubeadm join phase kubelet-start --config ` but it failed with `error execution phase kubelet-start: a Node with name "kube-node1" and status "Ready" already exists in the cluster.`. So what you suggested doesn't seem to work. Or am I missing something else? – Diddi Oskarsson Jan 14 '21 at 16:28
  • As for the `--service-cidr` I understand it's a complex task. The main question wasn't about the flag itself but rather how to properly manage configuration changes over time on a running cluster with kubeadm, and if there is a way to update the static pod manifests with kubeadm while still keeping it in sync with the `ClusterConfiguration`. Another example would be adding/changing flags for [OIDC authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) which I'm looking at doing next. – Diddi Oskarsson Jan 14 '21 at 16:45
  • It's very important to understand that `kubeadm init` is supposed to be used only once when creating a cluster whenever you use it with flags or a config file. You cannot change the configs by executing it again with different values. [Here](https://kubernetes.io/docs/reference/setup-tools/kubeadm/) you will find info regarding `kubeadm` and it's usage. – Wytrzymały Wiktor Jan 15 '21 at 10:26
  • I see, so with regards to my question "Is there no way of updating the kubelet and control-plane settings that ensure consistency between the kubernetes components and kubeadm?" the answer is basically, no once the cluster is setup kubeadm should be dropped and changes be made directly to the static pod manifests? If that's the case, could you maybe add it to the answer and I'll mark it. – Diddi Oskarsson Jan 15 '21 at 16:56
  • I have edited the answer again to address the subjects that we have discussed here. – Wytrzymały Wiktor Jan 18 '21 at 10:34

With respect to

I understand that KubletExtraArgs refers to the kubelet command line arguments, what I don't understand is where this attribute is and how I can modify it.

multiple sources such as this one point to adding a line like

Environment="KUBELET_EXTRA_ARGS=--pod-manifest-path=/etc/kubelet.d/"

to

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

(in this example setting a custom directory for static pods), instead of passing the flag on the command line with

kubelet --pod-manifest-path=/etc/kubelet.d

as it suggests in the docs.
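Note that after editing the drop-in (or /etc/default/kubelet) the change only takes effect once systemd reloads its unit files and kubelet restarts; a typical sequence, assuming root, would be:

```
systemctl daemon-reload
systemctl restart kubelet
```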

If you search for $KUBELET_EXTRA_ARGS you'll find a lot of examples with regard to the aforementioned 10-kubeadm.conf file.