I was able to install a single-master Kubernetes (1.17.3) environment and am now planning to deploy a multi-master setup with kubeadm by following the official guide (Stacked control plane and etcd nodes).
That guide requires providing `--control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"`.
Why does kubeadm require the load balancer info? As I understand it:

- Every master runs its own API server.
- Only one instance of the controller manager and scheduler actively works at a time, which avoids conflicting directions from different daemons while managing containers. To achieve this, those components run with the `--leader-elect` flag; only the instance holding the lease performs duties.
- So if things at the cluster level are already decided by leader election, why do we need a load balancer?
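To see that leader election is really in play, the current lease holder can be inspected directly. A sketch, assuming `kubectl` access and that the 1.17 cluster uses the default Endpoints-based lock in `kube-system`:

```shell
# Sketch: show which kube-controller-manager instance currently holds the
# leader lease. In 1.17 the lock record is typically stored as an annotation
# on the kube-controller-manager Endpoints object in kube-system.
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
```

The output is a small JSON record whose `holderIdentity` names the node currently acting as leader; the same check works for `kube-scheduler`.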
I want to keep the load balancer info out of the cluster so that I am free to change the load balancer IP at any time without disturbing the cluster, since each master is listening at:

- 10.10.10.1:6443
- 10.10.10.2:6443
- 10.10.10.3:6443
If any component within the cluster wants to communicate with the API server, it resolves the endpoint itself (e.g. via the in-cluster `kubernetes` Service), if I am correct?
By providing `--control-plane-endpoint`, our LB info becomes tightly coupled with the cluster configuration. Can I achieve multi-master without specifying `--control-plane-endpoint`?
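One way to avoid hard-coding the LB address (assuming you can control DNS, or `/etc/hosts` on every node) is to pass a stable DNS name as the endpoint, pointing it at one master for now and repointing it at the LB later:

```shell
# Assumption: cp.k8s.local is a name you control (DNS, or an /etc/hosts entry
# on every node). It can resolve to 10.10.10.1 today and be repointed to the
# load balancer's IP once it is introduced, without touching the cluster
# configuration or re-issuing certificates.
kubeadm init \
  --control-plane-endpoint "cp.k8s.local:6443" \
  --upload-certs
```

Because the name (not an IP) is baked into the API server certificate and the kubeadm ConfigMaps, only the name's resolution changes when the LB arrives.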
Update 1:
In response to @mdaniel: if we provide the IP of one master as `--control-plane-endpoint`, it becomes a single point of failure for adding new control-plane nodes.
Suppose I set 10.10.10.1:6443 as the control-plane endpoint, that master later goes down, and I then want to add another control-plane node — the join fails. I tried the following:
```
kubeadm join 10.10.10.2:6443 --token ... --discovery-token-ca-cert-hash sha256:... --control-plane --certificate-key ...
```
It gives the following error:
```
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://10.10.10.1:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: dial tcp 10.10.10.1:6443: connect: connection refused
```
The point is, if I explicitly specify a different control-plane address in the join command, kubeadm should use that one rather than still fetching the `kubeadm-config` ConfigMap from the old control plane.
Yes, I can work around this by changing the entries in:

- `kubectl edit cm cluster-info -n kube-public`
- `kubectl edit cm kubeadm-config -n kube-system`
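For anyone hitting the same error, these are the fields I end up changing in those two ConfigMaps (a sketch; the IPs are examples from my setup):

```shell
# cluster-info (kube-public) embeds a kubeconfig whose "server:" field
# joining nodes use for discovery:
kubectl -n kube-public get cm cluster-info -o yaml | grep 'server:'
#   server: https://10.10.10.1:6443   <-- change to a surviving master

# kubeadm-config (kube-system) stores controlPlaneEndpoint inside
# the ClusterConfiguration document:
kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint
#   controlPlaneEndpoint: "10.10.10.1:6443"   <-- change likewise
```

After editing both to point at a live master, the `kubeadm join ... --control-plane` command succeeds again.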
PS: I know I am supposed to use an LB, but I have a specific scenario where the LB will be introduced into my datacenter later.