
I have a couple of virtualized file servers running under QEMU/KVM on Proxmox VE.

The physical host has four storage tiers with significantly different performance characteristics, attached both locally and via NFS.

These will be presented to the file server(s) as local disks, abstracted into pools, and will handle multiple streams of data for the network. My aim is for this abstraction layer to pool the tiers intelligently.

There's a similar question on this site: Home-brew automatic tiered storage solutions with Linux? (Memory -> SSD -> HDD -> remote storage), in which the accepted answer suggested abandoning a Linux solution in favor of NexentaStor.

I like the idea of running NexentaStor. It almost fits the bill.

NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle.

I don't know whether ZFS pools are adaptive or dynamically allocated based on load, but it's irrelevant anyway, since NexentaStor doesn't support virtio network or block drivers, which are a must in my environment.

Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html

It looks like a step in the right direction, but I'm so broke I'd be wasting their time even to ask for a quote, so I'm looking for another option.

I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

NginUS
  • What storage hardware did you buy? Just out of interest 3Par kit does tiering on a sub-LUN level which just blows my mind - spendy though – Chopper3 Nov 24 '10 at 19:54
  • The slowest of the bunch is a 10TB Drobo in its proprietary RAID6 equivalent, connected via eSATA to a PowerEdge R210. In the R210 is a pair of 2TB Seagate Barracuda XTs in software RAID1 @ 6Gbps & a 32GB X25-E SSD @ 3Gbps. The R210 provides NFS to the master VM host via directly connected dual 10Gb copper Intel 82598EB NICs. Its local storage is a 32GB X25-E @ 3Gbps + six 1TB Seagate Barracudas on a PERC 6/i in RAID10 @ 3Gbps. A tower VM node/workstation on a 1Gb segment has two 150GB Seagate Cheetah 15k SAS drives on a PERC 6 in RAID1 @ 3Gbps & a 64GB X25-E @ 3Gbps. – NginUS Nov 24 '10 at 21:24

2 Answers


One way to get this on a Linux server is by using the flashcache kernel module. It only really gives you one tier, say the SSD on top of the Drobo and/or local discs. I have been using it experimentally over the last few weeks here at home, with a 500GB SATA drive and an X25-E SSD providing an LVM volume group that I then slice up and serve via iSCSI. So far it's been working very well.

FlashCache gives you two modes: write-through and write-back. Write-back caches writes as well, but it has an unresolved design flaw: a hard failure of the system can leave some data not correctly preserved. Write-through has no such issue, but writes are always flushed to the backing disc.
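As a rough sketch (the device names here are placeholders, and flashcache_create options can vary between versions, so check the docs that ship with the module), the setup I'm describing looks something like this:

    # Assumed devices: /dev/sdb = X25-E SSD (cache), /dev/sdc = backing SATA disc
    # -p thru selects write-through; -p back would select write-back
    flashcache_create -p thru cachedev /dev/sdb /dev/sdc

    # Layer LVM on top of the resulting device-mapper device and slice it up
    pvcreate /dev/mapper/cachedev
    vgcreate vg_cached /dev/mapper/cachedev
    lvcreate -L 100G -n lv_iscsi0 vg_cached    # then export the LV via iSCSI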

I don't think this would be appropriate for layering on top of NFS though.

A few notes about Flashcache: you currently have to build it from source, you have to run a 64-bit kernel (the module just doesn't load properly on 32-bit), and in my testing so far it has worked great. Again, that's only been a week or two so far.

Sean Reifschneider

You could try to extend this experimental project on GitHub: https://github.com/tomato42/lvmts

It contains a daemon that detects which LVM extents are used most and moves those extents up the tiered storage chain.
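Under the hood this is just LVM extent migration between physical volumes in the same volume group; as a hand-driven illustration (the PV names and extent range below are made up), moving a hot range of extents from a slow PV to a fast one looks like this:

    # Hypothetical PVs in one volume group: /dev/sdb1 (HDD tier), /dev/sdc1 (SSD tier)
    # Show which physical extents each LV currently occupies
    lvs -o +devices,seg_pe_ranges

    # Move physical extents 1000-1999 from the slow PV onto the fast PV
    pvmove /dev/sdb1:1000-1999 /dev/sdc1

lvmts essentially automates that decision, using I/O statistics to pick which ranges to move.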

P.Péter
  • Thank you, thank you! Looks like I won't have to wait for bcache's slow march towards the mainline after all. – Tobu Nov 03 '12 at 22:27