If you are not tied to pacemaker/corosync, the behaviour you describe can be achieved with opensvc, using the service configuration file below:
```
[DEFAULT]
id = 84327b87-13f6-4d32-b90a-a7fad87a8d92
nodes = server1 server2
flex_min_nodes = 2
topology = flex
orchestrate = ha
monitor_action = freezestop

[ip#vip]
ipname@server1 = 192.168.100.240
ipname@server2 = 192.168.100.241
ipdev = br0
monitor = true

[app#haproxy]
type = simple
start = /sbin/haproxy -f /etc/haproxy/haproxy.cfg
restart = 1
monitor = true
```
Explanations:

The `[DEFAULT]` section holds the global configuration settings:

`id = ...` is the unique service id, automatically generated at service creation time (`svcmgr -s myservice create`, and then `svcmgr -s myservice edit config`)
`nodes = server1 server2` means we are running a 2-node opensvc cluster
`flex_min_nodes = 2` tells opensvc that we expect the service to run at least 2 instances. In this 2-node cluster, we will have 1 instance per node.
`topology = flex` specifies that we are running an active/active service topology
`orchestrate = ha` tells that the service has to be automatically managed by the opensvc daemon
`monitor_action = freezestop` forces the behaviour when a critical resource goes down (like a haproxy process crash or kill). When this happens, the opensvc daemon has to take a decision. There are 3 possible values:

- `freezestop`: the local service instance is stopped, and put in frozen state.
- `reboot`: the node is rebooted.
- `crash`: the node is crashed.
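For reference, a typical way to create and deploy such a service could look like the transcript below (it assumes the opensvc agent is already installed on both nodes, and `myservice` is a placeholder name):

```
$ svcmgr -s myservice create
$ svcmgr -s myservice edit config      # paste the configuration above
$ svcmgr -s myservice print status     # check instance states on the cluster
```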
`[ip#vip]` declares the service vip:

- on `server1`, the ip `192.168.100.240` will be configured on interface `br0` at service start
- on `server2`, the ip `192.168.100.241` will be configured on interface `br0` at service start
`monitor = true` tells the opensvc agent that this resource is critical (if it goes down, trigger the service `monitor_action`)
`[app#haproxy]` describes the application:
`type = simple` specifies that the service manages a single-process application (a non-forking daemon)
`start = /sbin/haproxy -f /etc/haproxy/haproxy.cfg` is the command to run when the service starts
`restart = 1` tells the opensvc daemon to try restarting this resource 1 time if it goes down.
`monitor = true`: haproxy is a critical resource. If it goes down, opensvc tries to restart it 1 time (due to the previous parameter), and if that fails, it triggers the `monitor_action`.
As the service relies on the vip, you are not obliged to bind `*:443`; you can bind only the service vip.
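For example, the haproxy frontend can bind the node's vip address instead of `*:443`. This is only a sketch: the frontend name, backend name and backend servers are hypothetical, and each node binds its own vip from the `[ip#vip]` section:

```
# /etc/haproxy/haproxy.cfg fragment on server1
# (use 192.168.100.241 on server2)
frontend https-in
    bind 192.168.100.240:443
    default_backend app

backend app
    server web1 10.0.0.10:8080 check   # hypothetical backend servers
    server web2 10.0.0.11:8080 check
```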
About your question on restart/stonith if haproxy goes down: just put `restart = 1` in the `[app#haproxy]` section to try a restart, and `monitor_action = crash` in the `DEFAULT` section. This way, 1 restart is tried, and if it does not work, the node is crashed.
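Concretely, that restart-then-stonith combination looks like this in the service configuration file:

```
[DEFAULT]
monitor_action = crash

[app#haproxy]
restart = 1
monitor = true
```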
Hope this helps.