
While trying to configure an experimental Kubernetes cluster (in a few VMs on my laptop) as "highly available", I found the advice to do this using the combination of keepalived and haproxy ( https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing ).

Looking at the configuration settings, I read:

${STATE} is MASTER for one and BACKUP for all other hosts, hence the virtual IP will initially be assigned to the MASTER.

${PRIORITY} should be higher on the master than on the backups. Hence 101 and 100 respectively will suffice.

These settings surprise me. Apparently I have to choose which of those systems is to be the initial master, and I have to hard-code this choice in the nodes themselves (see the sketch below).
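For reference, here is roughly what that template expands to on the node picked as the initial master (the interface, router id, password and virtual IP are placeholder values of mine):

    # /etc/keepalived/keepalived.conf on the node chosen as initial MASTER
    vrrp_instance VI_1 {
        state MASTER                # BACKUP on every other control-plane node
        interface eth0              # placeholder: this host's NIC
        virtual_router_id 51        # placeholder: must match on all nodes
        priority 101                # 100 on every other node
        authentication {
            auth_type PASS
            auth_pass mysecret      # placeholder
        }
        virtual_ipaddress {
            192.168.56.100          # placeholder: the shared virtual IP
        }
    }

The only lines that differ between the nodes are state and priority, and that asymmetry is exactly what bothers me.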

To me this "highly available" setup deviates from the "pets"/"cattle" analogy I find elsewhere in Kubernetes.

Other systems, such as HBase, have a similar setup (one active and multiple standby leaders), yet all nodes are configured identically (the election is done via ZooKeeper).

Is there a way that I can configure Keepalived (for use in Kubernetes) in such a way that all nodes have the same config and it still works correctly?

Niels Basjes

2 Answers


Kubernetes itself supplies "cattle" services to applications. Although a lot of the "master" Kubernetes services are built on that same infrastructure, at some point you need to bootstrap a service with something lower level to get it all started.

keepalived, as configured in the linked Kubernetes documentation, provides a single VRRP virtual IP address as the highly available endpoint shared between the masters.

All the nodes configure the same VRRP IP address (or name) and keepalived moves that address around between the masters. The "election" is handled by keepalived's health-check and failover logic (see the sketch below).
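Concretely, that health-check and failover logic is the vrrp_script / track_script part of the config. As a rough sketch (the script path, thresholds and addresses are placeholders of mine, not taken from the linked docs):

    vrrp_script check_apiserver {
        script "/etc/keepalived/check_apiserver.sh"   # placeholder: exits non-zero when the local kube-apiserver is unhealthy
        interval 3
        fall 10       # declare the node failed after 10 consecutive failed checks
        rise 2        # declare it healthy again after 2 successful checks
        weight -2     # lower this node's VRRP priority while failed
    }

    vrrp_instance VI_1 {
        state BACKUP                # MASTER on the designated first node, per the linked docs
        interface eth0              # placeholder
        virtual_router_id 51
        priority 100                # 101 on the designated first node
        virtual_ipaddress {
            192.168.56.100          # placeholder: the shared virtual IP
        }
        track_script {
            check_apiserver         # ties the failover to the health check above
        }
    }

With the docs' 101/100 priorities, a failed check on the node holding the VIP drops its effective priority below the others and keepalived moves the address.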

An alternative to this method is to move the load-balancing decision out to an external device or to the clients. You can run a reverse proxy on each node (such as haproxy) that weights the kube-api servers and performs the health checks.
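For instance, a minimal haproxy sketch along those lines (the port and backend addresses are placeholders of mine):

    frontend kube_apiserver
        bind *:8443                      # placeholder: local port that clients point at
        mode tcp
        option tcplog
        default_backend kube_apiserver_backend

    backend kube_apiserver_backend
        mode tcp
        option tcp-check                 # basic TCP health check on each apiserver
        balance roundrobin
        server master1 10.0.0.11:6443 check weight 100   # placeholder addresses and weights
        server master2 10.0.0.12:6443 check weight 100
        server master3 10.0.0.13:6443 check weight 100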

Matt
  • I read your answer as a repeat of what the Kubernetes documentation states about keepalived: you have a single MASTER and a bunch of BACKUP nodes. In HBase the active system is elected without the need to pre-configure the "initial" master. I would like the same with keepalived: simply create multiple instances with identical config and let these instances figure it out. How do I do that? Is that possible? – Niels Basjes Aug 26 '20 at 13:06
  • No, not using keepalived. It's not based on a distributed consensus algorithm. I guess that's because it's not typically used in systems that have a quorum of members to decide (like HBase, etcd, etc.) and it needs to work when only a single node is available. – Matt Aug 27 '20 at 01:03

I realize this is a stale thread, but I thought I would chime in anyway, as I have run keepalived with an identical config on all nodes.

On Debian we have all nodes initially set to BACKUP and add some sort of health check that increases the priority (e.g. based on how long keepalived has been running, or on a health check for the local service that you want HA for...)

VRRP v2, which is what keepalived (at least the versions I have worked with) speaks on Debian, has a tie-breaker: if the priority is the same on several nodes, then the "highest IP" wins.

This can cause an initial delay if all nodes start up at the same time, but that has been acceptable where we use this.
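A minimal sketch of that kind of identical-on-every-node config (the interface, VIP and health-check script are placeholders of mine):

    vrrp_script chk_local_service {
        script "/etc/keepalived/check_local_service.sh"   # placeholder: exits 0 while the local service is healthy
        interval 2
        weight 50          # adds 50 to the priority while the check succeeds
    }

    vrrp_instance VI_1 {
        state BACKUP                 # identical on every node
        interface eth0               # placeholder
        virtual_router_id 51
        priority 100                 # identical on every node
        virtual_ipaddress {
            192.168.56.100           # placeholder: the shared virtual IP
        }
        track_script {
            chk_local_service        # healthy nodes run at 150; equal priorities fall back to the highest-IP tie-break
        }
    }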

Hope that helps.

Anders