
I'm trying to set up a mini-cloud with Novell Xen (SLES 10 SP2 so it's Xen 3.2.x).

I have an iSCSI server in the back, using a Starwind 4.1 target. The problem is that when I write to the iSCSI server from the first host, I can't see the file on the second host, and vice versa.

I also tried out Citrix XenServer, which uses LVMoiSCSI, and that works without a problem.

Can anyone suggest what to do with Novell Xen? I'm not used to working with LVM, so I'd like to try other things before starting with LVM.

Hofa

2 Answers


I have not used Novell Xen or Starwind, but I have worked with XenServer as well as the Xen packaged with both Debian and RHEL5. I did not try LVMoiSCSI when I tested XenServer, as I didn't have an iSCSI host available at the time. That said, from my understanding of iSCSI and LVM, I can hazard an educated guess which may help you isolate the problem.

In my experience iSCSI has been a "one system can mount at a time" affair unless you use a cluster-aware filesystem like GFS. On my own Xen servers I use LVM to slice up the RAID array into logical volumes that my domUs use as physical devices. I am assuming Citrix has worked this into their LVMoiSCSI support to do the same thing with an iSCSI volume. I will definitely have to try this out and see if I can't confirm my suspicions.

If Citrix's LVMoiSCSI doesn't do anything special other than treat the iSCSI LUN as LVM storage for logical volumes (LVs) (i.e. nothing to make it cluster aware), you could attempt to have your Novell Xen systems do the same thing. LVM is by default cluster aware, so each LV created could be mounted separately by different servers while they all have the iSCSI target LUN made accessible.

LVM itself is fairly easy to set up and work with, and the commands should be straightforward across any Linux distribution.

The first thing you would want to accomplish, which I'm assuming you've already done, is to make sure the server can access the iSCSI LUN and see it as a local SCSI drive.
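If you still need to attach the LUN, something along these lines with open-iscsi (the iscsiadm tool, which I believe SLES ships) should do it. The portal IP and target IQN below are placeholders for your own Starwind target, not values I know from your setup:

# Discover the targets the Starwind box offers (replace the portal IP)
iscsiadm -m discovery -t sendtargets -p 192.168.0.10

# Log in to the target reported by the discovery step (replace the IQN)
iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:target1 -p 192.168.0.10 --login

# The LUN should then show up as a new local SCSI disk, e.g. check with
fdisk -l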

Once you're able to see the iSCSI drive from at least one of the systems you would want to initialize it as a Physical Volume (PV):

pvcreate /dev/sdX

Of course, replace the device with whatever device your system sees the iSCSI LUN as. In my experience this sometimes changes from reboot to reboot.
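One way around the shifting device names (an assumption on my part; I haven't checked how SLES exposes the Starwind LUN) is to use the persistent names udev creates under /dev/disk/ instead of the raw /dev/sdX node:

# The iSCSI LUN normally shows up here under a stable name
ls -l /dev/disk/by-id/
ls -l /dev/disk/by-path/

# Then use that stable path for the LVM commands, for example
pvcreate /dev/disk/by-id/scsi-<your LUN id>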

With your PV initialized it's time to create a Volume Group (VG) and tell it to use your initialized PV:

vgcreate XenVG /dev/sdX

Here XenVG is your VG name, and the device is the same one you used in the PV initialization step. Now if you run vgdisplay you should see something like:

--- Volume group ---
VG Name               XenVG
System ID             
Format                lvm2
Metadata Areas        1
Metadata Sequence No  9
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                1
Act PV                1
VG Size               204.72 GB
PE Size               32.00 MB
Total PE              6551
Alloc PE / Size       640 / 20.00 GB
Free  PE / Size       5911 / 184.72 GB
VG UUID               tMHTWV-1dYR-4yB1-tmS5-q1Tk-i3Yx-6l1YLa

This was taken from one of my live Xen servers with a single 20GB LV slice already set up. From this point it's simply a matter of creating LVs for your domU drives. In the simplest form you can do so as:

lvcreate -L <size> -n <LV name> XenVG

Set <size> to the desired drive capacity for the domU; I typically set <LV name> to the hostname of the domU I'm creating it for.
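As a concrete example, to carve out a 20GB volume for a domU I'll call vm01 here (the name and size are just illustrative):

lvcreate -L 20G -n vm01 XenVG

# Verify the new LV
lvdisplay /dev/XenVG/vm01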

Then when you go to install your domU you would specify the disk as /dev/XenVG/<LV name> and Xen will treat it as a physical device.
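In the domU's Xen configuration file that usually comes down to a disk line roughly like the one below, using the example vm01 LV from above (adjust the guest device name to taste):

disk = [ 'phy:/dev/XenVG/vm01,xvda,w' ]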

This should allow LVs from the same iSCSI LUN to be mounted by different Xen servers. You couldn't have the same LV mounted and used on two machines simultaneously; however, if you had some form of HA setup, you could have the nodes start/stop a domU on different machines to maintain the virtual servers' availability.

There are also many more options to the above LVM commands; I merely gave the simplest forms. I would highly recommend reading the man pages, and there are several good LVM HOWTOs available online as well.

Jeremy Bouse

Thank you very much for taking the time to write down this little tutorial. LVM looks great to use. Though I solved my problem already, I'll keep this in mind.

The way I did it was to just use OCFS2 (Oracle Cluster File System), which, like GFS, is cluster-aware. This is working fine for me, and as it's just a test setup, I won't be changing it to LVM.
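The heart of it is the shared /etc/ocfs2/cluster.conf that lists every node in the cluster. A minimal two-node sketch (the host names and IPs here are placeholders, not my actual configuration) looks roughly like this:

cluster:
        node_count = 2
        name = xencluster

node:
        ip_port = 7777
        ip_address = 192.168.0.11
        number = 0
        name = xen1
        cluster = xencluster

node:
        ip_port = 7777
        ip_address = 192.168.0.12
        number = 1
        name = xen2
        cluster = xencluster

With that file identical on both hosts, the o2cb service gets started, the LUN is formatted once with mkfs.ocfs2, and the volume is then mounted on every node.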

Hofa
  • That would have been the other way I would have mentioned to solve the problem, as OCFS2 and GFS are both cluster-aware, just as LVM is. To make LVM not cluster aware you actually have to set ``--clustered n`` when running the ``vgcreate`` command. – Jeremy Bouse Jun 11 '09 at 13:41
  • If you're anything like me, your 'test setups' tend to hit production. We've seen a huge performance hit as we scaled up on nodes mounting the ocfs2 volume. Note also that expanding the number of nodes that can mount ocfs2 means that you have to shut everything down, unmount the volume, and use tunefs.ocfs2 to expand the number of journals. LVM, and especially cLVM, is very easy to use, is much faster, and I prefer it greatly over ocfs2. – Karl Katzke Aug 05 '09 at 04:30