
I am trying to set up PCS for HAProxy on CentOS 7 in an active/active configuration. I've done active/active before, but I am not familiar with constraints and dependency groups.

So far so good:

 2 nodes configured
 4 resources configured

 Online: [ HOST1 HOST2 ]

 Full list of resources:

  Clone Set: VIPHA-clone [VIPHA] (unique)
      VIPHA:0     (ocf::heartbeat:IPaddr2):       Started HOST2
      VIPHA:1     (ocf::heartbeat:IPaddr2):       Started HOST1
  Clone Set: haproxy-clone [haproxy]
      Started: [ HOST2 HOST1 ]
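
For context, the resources were created with commands roughly like these (the IP address and the haproxy agent class here are placeholders, not my exact values):

 # sketch of the resource creation; adjust ip/netmask/agent to your setup
 pcs resource create VIPHA ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24 \
     clone globally-unique=true
 pcs resource create haproxy systemd:haproxy clone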

However, now I'd like to add a constraint that HAProxy must be running in order for the IP to be served by a host:

pcs constraint order haproxy-clone then VIPHA-clone

The problem with this is that HAProxy will never start, because it cannot bind to the VIP unless the IP is brought up first.

How would I set this up so that:

  1. pcs will take the IP offline on a host if a health check (i.e., the haproxy process is running) fails?

  2. pcs will only bring the IP up if a health check (i.e., the haproxy process is running) succeeds?

    • If this isn't possible as described above, start them at the same time and behave as in #1

I appreciate any input. Thank you!


2 Answers


I listen on a wildcard in haproxy.cfg

bind *:443

instead of

bind myvip:443

This way the haproxy resource can run all the time, whether the node holds the VIP resource or not. If a node acquires the VIP, haproxy will respond on it immediately.

The obvious side effect is that haproxy listens on all of its IP addresses, not only on the VIP.

If a port number conflicts (for example, I need a differently configured port 443 on another IP or VIP), I define it as bind *:9443 and then put it behind a DNAT rule, as sketched below.
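
For illustration, such a DNAT rule could look roughly like the following iptables sketch (the VIP 192.0.2.10 is a placeholder address):

 # hypothetical sketch: forward traffic arriving for the VIP on port 443
 # to the haproxy frontend listening on *:9443
 iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 443 \
     -j DNAT --to-destination 192.0.2.10:9443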

  • Very well, yes, that does the trick for binding HAProxy. Do you know how to STONITH a node or restart haproxy if haproxy crashes? – namezero Oct 29 '18 at 18:20

If you are not tied to pacemaker/corosync, the behavior you describe can be achieved with opensvc, using the service configuration file below:

[DEFAULT]
id = 84327b87-13f6-4d32-b90a-a7fad87a8d92
nodes = server1 server2
flex_min_nodes = 2
topology = flex
orchestrate = ha
monitor_action = freezestop

[ip#vip]
ipname@server1 = 192.168.100.240
ipname@server2 = 192.168.100.241
ipdev = br0
monitor = true

[app#haproxy]
type = simple
start = /sbin/haproxy -f /etc/haproxy/haproxy.cfg
restart = 1
monitor = true

Explanations:

The [DEFAULT] section holds the global configuration settings:

  • id = .... is the unique service id, automatically generated at service creation time (svcmgr -s myservice create, then svcmgr -s myservice edit config; see the command sketch after this list)

  • nodes = server1 server2 means we are running a 2-node opensvc cluster

  • flex_min_nodes = 2 tells opensvc that we expect the service to run at least 2 instances. In this 2-node cluster, we will have 1 instance per node.

  • topology = flex specifies that we are running an active/active service topology

  • orchestrate = ha tells opensvc that the service has to be automatically managed by the opensvc daemon

  • monitor_action = freezestop forces the behaviour when a critical resource goes down (such as a haproxy process crash or kill). When this happens, the opensvc daemon has to take a decision. There are 3 possible values:

    • freezestop: the local service instance is stopped and put in the frozen state.
    • reboot: the node is rebooted.
    • crash: the node is crashed.
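
For reference, the creation and inspection commands mentioned above would look roughly like this (myservice is a placeholder service name):

 svcmgr -s myservice create          # generates the unique service id
 svcmgr -s myservice edit config     # paste and adjust the configuration above
 svcmgr -s myservice print status    # check the resource states on this node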

[ip#vip] declares the service VIP:

  • on server1, the IP 192.168.100.240 will be configured on interface br0 at service start
  • on server2, the IP 192.168.100.241 will be configured on interface br0 at service start
  • monitor = true tells the opensvc agent that this resource is critical (if it goes down, trigger the service monitor_action)

[app#haproxy] describes the application:

  • type = simple specifies that the service manages a single-process application (a non-forking daemon)
  • start = /sbin/haproxy -f /etc/haproxy/haproxy.cfg is the command to run when the service starts
  • restart = 1 tells the opensvc daemon to try to restart this resource once if it goes down.
  • monitor = true marks haproxy as a critical resource. If it goes down, opensvc tries 1 restart (due to the previous parameter), and if that fails, it triggers the monitor_action

As the service relies on the VIP, you are not obliged to bind *:443; you can bind only to the service VIP, as in the sketch below.
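
For example, on server1 the frontend could bind to that node's VIP only. A minimal sketch (the frontend/backend names and port are illustrative, not from a real configuration):

 # haproxy.cfg on server1 -- bind to this node's VIP only
 frontend fe_https
     bind 192.168.100.240:443
     default_backend be_web

The ip#vip resource is started before app#haproxy, so the address is already present when haproxy binds.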

Regarding your question about restart/STONITH if haproxy goes down: just put restart = 1 in the [app#haproxy] section to try a restart, and also monitor_action = crash in the DEFAULT section. This way, 1 restart is tried, and if that does not work, the node is crashed.
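
Put together, the relevant configuration lines for that behaviour are:

 [DEFAULT]
 monitor_action = crash

 [app#haproxy]
 restart = 1
 monitor = true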

Hope this helps.

  • I like your explanation, but a switch to opensvc first requires extensive testing and familiarization. I will look into this as an option when this comes up for re-evaluation! – namezero Nov 12 '18 at 14:04