
I have a specific question; here's my situation:

1- Two VMs with DRBD, Pacemaker, Corosync and NFS. Here's my crm configuration:

node san1
node san2
primitive drbd_res1 ocf:linbit:drbd \
    params drbd_resource="res1" \
    op monitor interval="20s"
primitive fs_res1 ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/mnt/res1" fstype="ext3"
primitive nfs-common lsb:nfs-common
primitive nfs-kernel-server lsb:nfs-kernel-server
group services fs_res1 nfs-kernel-server nfs-common
ms ms_drbd_res1 drbd_res1 \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
location location_on_san1 ms_drbd_res1 100: san1
colocation services_on_drbd inf: services ms_drbd_res1:Master
order services_after_drbd inf: ms_drbd_res1:promote services:start
property $id="cib-bootstrap-options" \
    dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    no-quorum-policy="ignore" \
    stonith-enabled="false"

My issue is: I need to mount the exported NFS share on the NFS client, but I don't know which IP to give it. I was thinking about using the same virtual IP on both machines (via an eth0:x alias), so that if a server goes down I won't have to change anything in the client VM.

Would that work, or am I completely out of my mind? If not, can you give me a tip?

I searched the internet for about an hour and didn't find anything.

Thank you a lot

1 Answer


Yes, that will work. I've been using this technique for years in production setups, not in conjunction with NFS, but with other services. This is the way to go.

  • Have a look at the IPaddr2 resource agent.

  • Using this, you could come up with something like:

    primitive p_nfs_vip ocf:heartbeat:IPaddr2 \
            params ip="<your_ip>" nic="<your_interface>" cidr_netmask="<your_netmask>" \
            op start interval="0s" timeout="60s" \
            op monitor interval="5s" timeout="20s" \
            op stop interval="0s" timeout="60s"
    

    (Replace the <...> placeholders with values matching your setup, and tune the interval and timeout values.)

  • Put this primitive into your services group (see the sketch after this list).

  • You have to make sure that the IP is up before your NFS server starts. Use the order directive for this, like you already did for your services vs. DRBD.

  • Bind your NFS server to this IP.

  • Use this IP to connect the clients to the NFS server (a mount example follows after this list).
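
A minimal sketch of the adjusted group (p_nfs_vip is just the name used in the example above; adapt it to your setup). Pacemaker starts group members in the listed order, so putting the VIP at the front already guarantees the address is up before the NFS daemons start; if you keep the VIP outside the group instead, add order and colocation constraints in the style of your existing services_after_drbd / services_on_drbd ones:

    # members of a group are started left to right: VIP, filesystem, NFS daemons
    group services p_nfs_vip fs_res1 nfs-kernel-server nfs-common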
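
On the client side, mount through that virtual IP so a failover stays invisible to the client. (To bind nfsd itself to that address, rpc.nfsd's --host option can be used; how it is passed depends on your distribution's init script.) A sketch, assuming the export path /mnt/res1 from the configuration above and a hypothetical mount point /mnt/nfs on the client; the mount options are only an example:

    # one-off mount
    mount -t nfs <your_ip>:/mnt/res1 /mnt/nfs

    # or permanently via /etc/fstab on the client
    <your_ip>:/mnt/res1  /mnt/nfs  nfs  defaults,hard,intr  0  0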

Last but not least:

  • Set up stonith / fencing. This is really really really important! Read this. Money quote:

Fencing is a very important concept in computer clusters for HA (High Availability). Unfortunately, given that fencing does not offer a visible service to users, it is often neglected. [...]

  • This is especially important in setups with shared storage like yours. If you run your cluster without it, you're putting your data at risk (a sketch follows below).
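
Since both nodes are VMs, a fencing agent that talks to the hypervisor is the natural choice. A sketch, assuming KVM/libvirt and the external/libvirt STONITH plugin from cluster-glue; the resource names, the hypervisor URI and the assumption that the libvirt domain names match the cluster node names are all placeholders to adapt (pick the agent matching your hypervisor if it is not libvirt):

    # each fencing resource is forbidden from running on the node it fences
    primitive p_stonith_san1 stonith:external/libvirt \
            params hostlist="san1" hypervisor_uri="qemu+ssh://<your_hypervisor>/system" \
            op monitor interval="60s"
    primitive p_stonith_san2 stonith:external/libvirt \
            params hostlist="san2" hypervisor_uri="qemu+ssh://<your_hypervisor>/system" \
            op monitor interval="60s"
    location l_stonith_san1 p_stonith_san1 -inf: san1
    location l_stonith_san2 p_stonith_san2 -inf: san2
    # and flip stonith-enabled to true in the cluster options
    property stonith-enabled="true"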