
In the following scenario, is there a way to determine which etcd server the kubernetes-apiserver is communicating with?

  1. Let's say we have 3 Master nodes behind an external load balancer, and 3 etcd members, each co-located on the same host as an API server, with the etcd on the Master1 node currently the leader.
  2. When a kubectl command is executed, the external load balancer routes the request to one of the 3 Master nodes in round-robin fashion.
  3. Assume that the HTTP request hits the Master3 node.
  4. The question here is: does the kubernetes-apiserver on the Master3 node talk to the leader etcd (on the Master1 node) to record the resource state, after which the leader etcd replicates the data to the other two followers?

    (or)

  5. Does the kubernetes-apiserver on the Master3 node talk to the etcd running on the Master3 node to store the resource state, and that member then notifies the etcd leader?

The following line from the kubernetes-apiserver.service file suggests that every kubernetes-apiserver running on all 3 Master nodes knows about all 3 etcd servers:

    --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379
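One quick way to confirm what a given API server is actually configured with is to inspect the unit or the running process on each master. A small sketch, assuming the unit is named kubernetes-apiserver.service as above:

    # Print the full unit file, including the --etcd-servers flag.
    systemctl cat kubernetes-apiserver.service

    # Or check the flag on the running process.
    ps aux | grep [k]ube-apiserver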

  • Venkata, from the etcd FAQ ("Do clients have to send requests to the etcd leader?"): Raft is leader-based; the leader handles all client requests which need cluster consensus. However, the client does not need to know which node is the leader. Any request that requires consensus sent to a follower is automatically forwarded to the leader. Requests that do not require consensus (e.g., serialized reads) can be processed by any cluster member. Kubernetes can write to any node of the etcd cluster without knowing which one is the leader. – c4f4t0r Jun 13 '19 at 11:57
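If you want to observe this yourself, you can ask each member for its status directly with etcdctl; the table output reports which member is the leader. A minimal sketch, using the endpoint addresses from the question (the certificate paths are assumptions and will differ per setup):

    # Query every member; the IS LEADER column of the table output shows
    # which instance currently holds Raft leadership.
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \
      --cacert=/etc/etcd/ca.pem \
      --cert=/etc/etcd/kubernetes.pem \
      --key=/etc/etcd/kubernetes-key.pem \
      endpoint status -w table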

1 Answer


As per the documentation: Best practices for replicating masters for HA clusters

In this scenario, when each API server has its own dedicated etcd instance, each API server talks to its local etcd, and all API servers in the cluster remain available.
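As a sketch, the two configurations differ only in the --etcd-servers flag (the three addresses are taken from the question; the loopback address in the second variant is an assumption about where the local etcd listens):

    # Shared etcd cluster: every API server lists all three members; the
    # etcd client picks an endpoint, and writes are forwarded to the leader.
    --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379

    # Dedicated local etcd per API server, as described above.
    --etcd-servers=https://127.0.0.1:2379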

In this case the leader sends heartbeats to all followers in order to keep the cluster stable (a quorum of nodes is required to agree on cluster updates). If a network partition or other issue leaves the cluster without a leader, it will be unable to make changes (etcd becomes unavailable for writes and the Kubernetes cluster will be unable to schedule new pods, etc.).
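For reference, quorum is floor(n/2) + 1, so the three-member cluster from the question keeps accepting writes with one member down, but not with two:

    members (n) | quorum | failures tolerated
    ------------|--------|-------------------
    1           | 1      | 0
    3           | 2      | 1
    5           | 3      | 2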

The load balancing in this solution is crucial. If the controlling master fails, the API goes offline and as a result the cluster becomes unresponsive to requests, node failures, etc. Any delays are propagated to the Kubernetes controllers.
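To illustrate the point, a health-checked load balancer can take a failed master out of rotation automatically instead of blindly round-robining to it. A minimal sketch assuming HAProxy in front of the three masters, with the API servers on port 6443 (both the tool choice and the port are assumptions, not from the question):

    # Hypothetical HAProxy backend: route only to masters whose
    # kube-apiserver still answers its /healthz endpoint over TLS.
    backend kube-apiservers
        balance roundrobin
        option httpchk GET /healthz
        http-check expect status 200
        server master1 10.240.0.10:6443 check check-ssl verify none
        server master2 10.240.0.11:6443 check check-ssl verify none
        server master3 10.240.0.12:6443 check check-ssl verify none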

You can find additional resources here, here and here.

Hope this helps. Please share your findings.

Mark