
I'd like to set up a Kubernetes cluster and hide the control plane components from all clients (some kind of managed cluster). Kubeadm uses the kubelet and static pods to run these components, which leads to Node and Pod resources being registered in the API server, so any user with a suitable ClusterRole can list and manage the master nodes and pods.
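
For example, on a stock kubeadm cluster the control plane is fully visible to such a user (a sketch; on 1.19 the master nodes carry the node-role.kubernetes.io/master label, and kubeadm labels its static pods with tier=control-plane):

# the master Node and the static control plane Pods are ordinary API objects
kubectl get nodes -l node-role.kubernetes.io/master
kubectl get pods -n kube-system -l tier=control-plane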

I can bootstrap the control plane, stop the kubelet agent, and delete the master Node resources, but it seems that this way I can't use kubeadm to upgrade the components, and the kubelet can't recover the pods if a crash occurs.
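
Concretely, that workaround looks something like this (a sketch; master-0 is an example node name):

# on the master: stop the kubelet so it no longer manages the node or its mirror pods
sudo systemctl stop kubelet

# remove the master Node object so clients can no longer list it
kubectl delete node master-0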

Can I run the control plane outside the Kubernetes cluster using kubeadm, or should I use my own instruments in that case?

  • Hi Pavel Parshin, welcome to S.F. It sounds like you're trying to cripple the control plane to solve an RBAC problem; if you don't want folks twiddling with your control plane, ensure their credentials do not have access to it – mdaniel Sep 23 '21 at 14:56
  • Which version of Kubernetes did you use and how did you set up the cluster? Did you use a bare metal installation or some cloud provider? – Mikołaj Głodziak Sep 24 '21 at 10:13
  • @mdaniel, hi. I think it's not an RBAC problem because I'm going to give the end users an admin role. It's one of the requirements – Pavel Parshin Sep 24 '21 at 11:19
  • @MikołajGłodziak, 1.19.14. I'm using ClusterAPI and OpenStack to set up the cluster. ClusterAPI uses kubeadm to bootstrap the control plane – Pavel Parshin Sep 24 '21 at 11:20

2 Answers


Can I run the control plane out of the Kubernetes cluster using kubeadm

Short answer: No, it is not possible.

should I use my own instruments in that case

Yes, that will be the solution in this situation. If you find your own solution, feel free to write it up as an answer.

As a workaround, you can try to create a separate control plane (as in Kubernetes the Hard Way) and then kubeadm join the workers to it. However, you must also be aware that this type of configuration is complicated to set up. Have a look at this blog page as well.
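
A sketch of the join step, assuming a hypothetical external control plane endpoint at cp.example.com:6443 (the token and CA hash come from your bootstrap):

# on each worker, join it to the externally hosted control plane
sudo kubeadm join cp.example.com:6443 \
  --token <bootstrap-token> \
  --discovery-token-ca-cert-hash sha256:<ca-cert-hash>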

See also similar topics:

You can run the Kubernetes control plane outside Kubernetes as long as the worker nodes have network access to the control plane. This approach is used by most managed Kubernetes solutions.

Also look at this page about Self-registration of Nodes.
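
To illustrate the self-registration idea (a sketch assuming a hypothetical external API server at https://cp.example.com:6443 and pre-provisioned kubelet client certificates): the worker's kubelet only needs a kubeconfig pointing at that endpoint, and it will register itself as a Node.

# build a kubeconfig that points the kubelet at the external control plane
kubectl config set-cluster external \
  --server=https://cp.example.com:6443 \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-credentials kubelet \
  --client-certificate=/var/lib/kubelet/pki/kubelet-client.crt \
  --client-key=/var/lib/kubelet/pki/kubelet-client.key \
  --embed-certs=true \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-context default --cluster=external --user=kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config use-context default --kubeconfig=/var/lib/kubelet/kubeconfig

# the kubelet self-registers with whatever API server this kubeconfig names
sudo kubelet --kubeconfig=/var/lib/kubelet/kubeconfig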

EDIT: I have found another possible workaround.

EDIT2: This tutorial should help you too.


Eventually I rewrote kubeadm and added an option to deploy the control plane components as Unix services and run them outside the Kubernetes cluster.

If you are interested, have a look at the PR and adapt it to your requirements. How to use it:

# build the updated kubeadm
make WHAT=cmd/kubeadm KUBE_BUILD_PLATFORMS=linux/amd64

# install the control plane components
wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

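# download and install etcd, which backs the externally hosted control plane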
wget -q --show-progress --https-only --timestamping \
  "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"

tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/

# run kubeadm with the service hosting option enabled
kubeadm init --service-hosting
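
For reference, the idea of hosting the components as services can be pictured with a hand-written unit for kube-apiserver (a hypothetical sketch with the flag list abbreviated; the patched kubeadm generates the real configuration):

cat <<'EOF' | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=network.target

[Service]
# flags abbreviated; a real server also needs auth, service account and audit flags
ExecStart=/usr/local/bin/kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now kube-apiserver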

If you are using Cluster API, you have to write your own control plane controller and CRDs to support this deployment model.
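
A possible starting point for such a controller (a sketch using kubebuilder; the domain, repo, group, and kind names here are hypothetical):

# scaffold a control plane provider project with a custom CRD and controller
kubebuilder init --domain example.com --repo example.com/service-control-plane
kubebuilder create api --group controlplane --version v1alpha1 \
  --kind ServiceHostedControlPlane --resource --controller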