
I want a paravirtualized ZFS server on XenServer 6.1 supporting a 6+ TB zpool.

The old templates for XenServer 6.0.2 and FreeBSD 9 don't work.

I have been unsuccessful ("Not a Xen-ELF image...") at building my own FreeBSD 9 / XenServer 6.1 paravirtual combo, even though I've tried every step-by-step tutorial I've found on the web. Without PV and XenTools you are stuck at a maximum of 3 VHDs, and with a 2 TB limit per VHD, I can't build a 6 TB zpool: one VHD holds the VM's system image, which leaves two 2 TB VHDs, 4 TB at most.

The Solaris 10 template for XenServer 6.1 is "experimental", and I'm not even sure it would work for us.

ZFS on Linux and ZFS-FUSE both work (I have tried them), but neither is nearly as fast as FreeBSD's ZFS.

So I ask you this: what is the best option for ZFS on XenServer 6.1?

Has anyone no-kidding gotten FreeBSD 9 or 9.1-RC fully paravirtualized on XenServer 6.1? If so, why has no one released a pre-baked virtual appliance or template file?

Thanks all!

user145837
  • Because it's [cleaner to do it with VMWare](http://serverfault.com/questions/398515/hosting-a-zfs-server-as-a-virtual-guest/398579#398579). – ewwhite Dec 24 '12 at 22:19

2 Answers


Hmmmm.

Well, I have an interesting beast built on Citrix XenServer. I used FreeBSD 9.1 amd64 with the XENHVM kernel.

I used passthrough to expose the FC HBA card and an Intel dual-port NIC to the FreeBSD HVM. The system boots from a small virtual disk provided by the hypervisor; the rest is installed on the LUNs provided by the SAN.
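
For reference, the passthrough itself is configured from dom0 with xe; a minimal sketch, assuming the HBA sits at PCI address 0000:04:00.0 (a placeholder, check lspci for yours):

    # In the XenServer control domain (dom0)
    lspci | grep -i fibre    # find the card's bus:device.function

    # Hand the device to the guest (address below is a placeholder)
    xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:04:00.0

With the passthrough in place, my zpools look like this: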

  pool: local
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Feb 11 04:58:53 2013
config:

NAME                     STATE     READ WRITE CKSUM
local                    ONLINE       0     0     0
  raidz1-0               ONLINE       0     0     0
    multipath/DDN-v00p2  ONLINE       0     0     0
    multipath/DDN-v01p2  ONLINE       0     0     0
    multipath/DDN-v02p2  ONLINE       0     0     0

errors: No known data errors

  pool: nas
 state: ONLINE
  scan: scrub repaired 0 in 2h31m with 0 errors on Sun Feb 10 23:22:57 2013
config:

NAME                   STATE     READ WRITE CKSUM
nas                    ONLINE       0     0     0
  raidz1-0             ONLINE       0     0     0
    multipath/DDN-v03  ONLINE       0     0     0
    multipath/DDN-v04  ONLINE       0     0     0
    multipath/DDN-v05  ONLINE       0     0     0
    multipath/DDN-v06  ONLINE       0     0     0
    multipath/DDN-v07  ONLINE       0     0     0
  raidz1-1             ONLINE       0     0     0
    multipath/DDN-v08  ONLINE       0     0     0
    multipath/DDN-v09  ONLINE       0     0     0
    multipath/DDN-v10  ONLINE       0     0     0
    multipath/DDN-v11  ONLINE       0     0     0
    multipath/DDN-v12  ONLINE       0     0     0
  raidz1-2             ONLINE       0     0     0
    multipath/DDN-v13  ONLINE       0     0     0
    multipath/DDN-v14  ONLINE       0     0     0
    multipath/DDN-v15  ONLINE       0     0     0
    multipath/DDN-v16  ONLINE       0     0     0
    multipath/DDN-v17  ONLINE       0     0     0
  raidz1-3             ONLINE       0     0     0
    multipath/DDN-v18  ONLINE       0     0     0
    multipath/DDN-v19  ONLINE       0     0     0
    multipath/DDN-v20  ONLINE       0     0     0
    multipath/DDN-v21  ONLINE       0     0     0
    multipath/DDN-v22  ONLINE       0     0     0
  raidz1-4             ONLINE       0     0     0
    multipath/DDN-v23  ONLINE       0     0     0
    multipath/DDN-v24  ONLINE       0     0     0
    multipath/DDN-v25  ONLINE       0     0     0
    multipath/DDN-v26  ONLINE       0     0     0
    multipath/DDN-v27  ONLINE       0     0     0

errors: No known data errors

And the NICs:

xn0: flags=8843 metric 0 mtu 1500
        options=503
        ether f2:05:91:2c:bb:8a
        inet 10.1.3.6 netmask 0xffffff00 broadcast 10.1.3.255
        inet6 fe80::f005:91ff:fe2c:bb8a%xn0 prefixlen 64 scopeid 0x6
        nd6 options=29
        media: Ethernet manual
        status: active

lagg0: flags=8843 metric 0 mtu 1500
        options=4019b
        ether 00:15:17:7d:13:ad
        inet 10.1.250.5 netmask 0xffffff00 broadcast 10.1.250.255
        nd6 options=29
        media: Ethernet autoselect
        status: active
        laggproto lacp lagghash l2,l3,l4
        laggport: em1 flags=1c
        laggport: em0 flags=1c

Notice the "em" interfaces in the lagg. It's quite fast and works great. Provided you have drives attached to a controller that you can pass through to the VM, there's no real need to worry about the whole PV situation.
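
For completeness, a lagg like this is made persistent in /etc/rc.conf along these lines (a sketch, assuming the em0/em1 ports and the address shown above):

    # /etc/rc.conf: LACP aggregate over the two passed-through Intel ports
    ifconfig_em0="up"
    ifconfig_em1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 10.1.250.5 netmask 255.255.255.0"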

TMS
  • wow this is really cool! Can you share some input on how you set this up? Or possibly share an xva? thanks! – benathon Feb 23 '13 at 09:29
  • actually the kernel config file you build with would be probably the most helpful! – benathon Feb 23 '13 at 09:35
  • 1
    KERNCONF=XENHVM. Adjust fstab and change adap<#> to adp<#> (for JUST the xen VDI's). Adjust rc.conf and change virtual NIC from rl (or re) to xn. After kernel install, shutdown vm. Don't reboot. There is a bug somewhere (Xen?? FreeBSD??) with the CD/DVD virt, so it needs to be removed. Instructions are here http://support.citrix.com/article/CTX132411 Once it's removed you are good to start the VM. – TMS Feb 23 '13 at 17:20
  • 1
    Unfortunately, the VM is a production system and it is not mine, so I cannot provide an .xva of it. As well, the real "magic" is the passthrough of the HBA and NIC. Thus, this VM is TIED to the server that the hypervisor runs on. It MIGHT be migrated to an identical server, but the passthrough does limit its flexibility. – TMS Feb 23 '13 at 17:29
  • 1
    Reading your comment above, I agree with "mount local drives as raw. This improves speed and lets you take your zpool outside of XenServer". For my case, the Hitachi SAN is offering RAID0 LUNs to the FreeBSD VM. So, the pool could be exported to any ZFS aware machine/VM with an appropriate FC HBA card. I verified this before building the production VM. – TMS Feb 23 '13 at 17:37

No kidding: I just wrote a guide on how to do this: https://github.com/esromneb/BMXenServer/wiki/PV-FreeBSD-DomU-Kernel

The trick is to skip pygrub when setting the PV options for the VM. Also included is a torrent with an .xva of my working FreeBSD 9.1 install.
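
In xe terms, "skip pygrub" means clearing the bootloader and pointing the hypervisor straight at a kernel image; a minimal sketch under that assumption (the UUID and kernel path are placeholders, the linked guide has the authoritative steps):

    # In dom0: make the VM PV and boot it directly from a kernel file
    xe vm-param-set uuid=<vm-uuid> HVM-boot-policy=""
    xe vm-param-set uuid=<vm-uuid> PV-bootloader=""
    xe vm-param-set uuid=<vm-uuid> PV-kernel=/boot/guest/freebsd-9.1-kernel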

Currently I'm working on a FreeNAS build. IMO the best single-server solution is to use PV FreeBSD and then mount local drives as raw disks. This improves speed and lets you take your zpool outside of XenServer and run it anywhere with no hassle.
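
That "run it anywhere" claim is easy to exercise: a pool built on raw disks can be exported and re-imported on any ZFS-aware host that can see the same disks (the pool name "tank" here is just an example):

    # On the XenServer guest, before moving the disks
    zpool export tank

    # On the new host, bare metal or another hypervisor
    zpool import tank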

benathon