I have set up a two-node active/passive cluster with Corosync/Pacemaker, using nginx as a reverse proxy. The OS is RHEL 7, and each node currently has only one network interface.
I configured two resources:
- cluster-vip for the shared virtual IP
- reverse-proxy for nginx
Here are the declarations of both resources:
pcs resource create cluster-vip ocf:heartbeat:IPaddr2 ip=192.168.0.1 cidr_netmask=24 op monitor interval=30s
pcs resource create reverse-proxy systemd:nginx op monitor interval=5s meta failure-timeout=60s
pcs constraint colocation add reverse-proxy with cluster-vip INFINITY
pcs constraint order cluster-vip then reverse-proxy
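In case it helps, this is how the VIP looks on the active node: IPaddr2 simply adds it as an additional address on the same interface, where it shows up as "secondary" (output abridged from my node1; the device name eth0 is specific to my setup):

```shell
# Show the IPv4 addresses on the interface carrying the VIP
ip -4 addr show dev eth0
# inet 192.168.0.2/24 ... scope global eth0
# inet 192.168.0.1/24 ... scope global secondary eth0
```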
Yesterday, while doing a network capture, I spotted some unexpected behaviour. When communicating with clients, the active node uses the virtual IP address (192.168.0.1). But when communicating with the web servers on the internal network, it uses the primary IP address of the interface instead of the VIP (192.168.0.2 or 192.168.0.3, depending on which node is active).
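As far as I understand, this matches the kernel's normal source-address selection: the route lookup returns the interface's preferred (primary) source address, not the secondary VIP. I can reproduce it on the active node with ip route get (10.0.1.10 is a placeholder for one of my web servers):

```shell
# Ask the kernel which source address it would use to reach
# the internal network (10.0.1.10 is a placeholder address)
ip route get 10.0.1.10
# On node1 this shows "... src 192.168.0.2", i.e. the primary
# address of the interface, never the VIP
```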
As a result, I am forced to create two separate rules on my firewall (one for node1, one for node2) instead of simply allowing the VIP to reach the web servers. I plan to add more nodes to the cluster, and it would be convenient to allow the VIP once and for all rather than whitelisting every single node.
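To illustrate the duplication, the rules I need today look roughly like this (iptables syntax just as an example; my actual firewall is a separate device, and the web-server subnet 10.0.1.0/24 is a placeholder):

```shell
# What I have to do today: one rule per cluster node
iptables -A FORWARD -s 192.168.0.2 -d 10.0.1.0/24 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -s 192.168.0.3 -d 10.0.1.0/24 -p tcp --dport 80 -j ACCEPT

# What I would like instead: a single rule for the VIP
iptables -A FORWARD -s 192.168.0.1 -d 10.0.1.0/24 -p tcp --dport 80 -j ACCEPT
```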
Does this behaviour have a logical explanation? Is there a way to tell Pacemaker to use only the VIP? And is it good practice? I don't want to do anything stupid, so if you think I shouldn't do that, I would gladly hear why.
Regards