
I am able to install a k8s (1.17.3) single-master environment, and I am now planning to deploy multi-master with kubeadm by following the guide for stacked control plane and etcd nodes.

It requires providing --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT".
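
For reference, the documented form of that invocation in the HA guide is (the DNS name and port are the guide's placeholders):

    sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs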

Why does kubeadm require load balancer info? Because:

  • Every master runs its own API server.
  • Only one master is eligible to act at a time, which avoids conflicting directions from different daemons while managing containers. To achieve this setup, we enable the --leader-elect flag; only the one holding the lease takes up duties (a quick way to check this is sketched right after this list).
  • So at the cluster level things get decided by electing a leader, so why do we need a load balancer?
  • I want to keep load balancer info out of the cluster so I am free to change the load balancer IP at any time without disturbing the cluster, as each master is listening at

     - 10.10.10.1:6443
     - 10.10.10.2:6443
     - 10.10.10.3:6443

  • If any component within the cluster wants to communicate, it uses the endpoint resolver, if I am correct?
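
A quick way to confirm that setup on a kubeadm cluster, assuming the endpoints-based lock that kube-controller-manager used by default around 1.17:

    # kubeadm writes the flag into the static pod manifest on each master
    grep leader-elect /etc/kubernetes/manifests/kube-controller-manager.yaml
    # expected output: - --leader-elect=true

    # the current holder is recorded in an annotation on this Endpoints object;
    # look for holderIdentity inside control-plane.alpha.kubernetes.io/leader
    kubectl -n kube-system get endpoints kube-controller-manager -o yaml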

By providing --control-plane-endpoint, our LB info becomes tightly coupled with the cluster configuration. Can I achieve multi-master without specifying --control-plane-endpoint?

Update 1:

In response to @mdaniel: if we provide the IP of one of the masters in --control-plane-endpoint, it becomes a single point of failure for adding a new control plane.

Suppose I put 10.10.10.1:6443 as the control-plane-endpoint, and after that this master goes down and I want to add another control plane; the join would not be successful. I tried the following:

kubeadm join 10.10.10.2:6443 --token ... --discovery-token-ca-cert-hash sha256:... --control-plane --certificate-key ...

It gives the following error:

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://10.10.10.1:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: dial tcp 10.10.10.1:6443: connect: connection refused

The point is, if I explicitly mention a different control plane, kubeadm should use it rather than still fetching the ConfigMap from the old control plane.

Yes, I can work around this by changing the entries in the following ConfigMaps (a sketch of the relevant fields follows the list):

  1. kubectl edit cm cluster-info -n kube-public
  2. kubectl edit cm kubeadm-config -n kube-system
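
For reference, a sketch of where those addresses live on a stock kubeadm 1.17 cluster (fields abbreviated):

    # cluster-info carries the bootstrap kubeconfig that joining nodes use for
    # discovery; its embedded server field is what my failed join kept dialing
    kubectl -n kube-public get cm cluster-info -o yaml
    #   data:
    #     kubeconfig: |
    #       clusters:
    #       - cluster:
    #           server: https://10.10.10.1:6443    <- repoint to a live master

    # kubeadm-config stores the ClusterConfiguration consulted by later joins
    kubectl -n kube-system get cm kubeadm-config -o yaml
    #   data:
    #     ClusterConfiguration: |
    #       controlPlaneEndpoint: "10.10.10.1:6443"    <- likewise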

PS: I know I am supposed to use an LB, but I have a specific scenario where I will introduce the LB later in my datacenter.

ImranRazaKhan

1 Answer


Why does kubeadm require load balancer info?

Because you don't want to have to update every kubeconfig in the universe when your master IP rolls. You are welcome to use CNAME records, an A record with multiple answers, a convoluted shared IP system, or whatever HA solution fits your needs and expertise, but in 99.99% of the cases, having a load balancer in front of the api servers is the solution which introduces the least headache possible.
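
To make the DNS option concrete, a minimal sketch (k8s-api.example.internal is a made-up name; substitute a record you actually control):

    # create the cluster against a DNS name instead of a raw IP
    sudo kubeadm init --control-plane-endpoint "k8s-api.example.internal:6443" --upload-certs
    # point the record at 10.10.10.1 for now; when the load balancer shows up,
    # repoint the record at its VIP - no kubeconfig or cluster change required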

That's obviously not true in 100% of the cases, as I have had non-zero times where the load balancer health checks failed, sealing off the api servers from the rest of the cluster, which turned a small fire into a raging nuclear one, but in general it's the least painful.

For clarity, the rest of your question about --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" is merely verbiage from the help text, and isn't required. You're welcome to plug 10.10.10.1:6443 into that argument (or any of them, actually), so long as the machine running kubeadm is able to contact that IP on that port, and your api servers have their IP addresses in the SAN list for their certs.
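
In config-file form, that would look something like this for 1.17 (v1beta2 schema; the SAN list is an assumption you'd adapt to your own masters):

    # kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.17.3
    controlPlaneEndpoint: "10.10.10.1:6443"
    apiServer:
      certSANs:               # extra IPs/names baked into the apiserver cert
      - "10.10.10.1"
      - "10.10.10.2"
      - "10.10.10.3"

    sudo kubeadm init --config kubeadm-config.yaml --upload-certs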

mdaniel
  • I have added Update 1 to the question in response. – ImranRazaKhan Jun 05 '20 at 11:18
  • 1: connect your workers to the load balancer, so if one master dies the requests go to another master; 2: kubectl doesn't need to be reconfigured if one master dies; 3: you could use haproxy to set up the load balancer – c4f4t0r Jun 05 '20 at 12:52
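
Along the lines of c4f4t0r's haproxy suggestion, a minimal TCP pass-through sketch (server names are assumptions; run it on a separate host, or a different port, so it doesn't collide with an apiserver already bound to 6443):

    # /etc/haproxy/haproxy.cfg (fragment)
    frontend k8s-api
        bind *:6443
        mode tcp
        default_backend k8s-masters

    backend k8s-masters
        mode tcp
        balance roundrobin
        option tcp-check
        server master1 10.10.10.1:6443 check
        server master2 10.10.10.2:6443 check
        server master3 10.10.10.3:6443 check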