
At the moment, we host VPSes for many customers on standard RAID 10 arrays of spinning hard disks.

We are considering a new experiment: SSD-based VPSes in RAID 10 arrays.

I'm not very clear on how SSDs behave in high read/write environments such as multi-tenant VPS hosting.

Would SSDs simply wear out under this kind of load? I've heard some bad things about SSD endurance...

Any tips would be appreciated.

– jtnire

2 Answers


I've partly answered this in your other question on this subject.

For reads, SSDs are just great value for money. They're not the quickest (that would be memory or PCIe-based flash), they're not capacious (like the 3/4TB SATA disks everyone uses), and they're not the cheapest, but they offer tens of thousands of random read IOPS for only a few hundred dollars each.

What they're not so great for is high (or even middling) write loads: any given 'cell' (think of it as a memory unit) can only be written a few tens or hundreds of thousands of times before it dies. Think about that for a second: pick a bit of SSD 'disk' space and ask how often something like a log file or database would write to it per day. Without 'wear levelling', that space would be dead very quickly. Wear levelling simply waits until the SSD isn't too busy, then moves data from heavily-written space to less-written space — if it gets the chance, anyway; most servers keep their disks pretty busy all the time.
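A toy simulation makes the point (illustrative only — the block count, endurance figure, and hot-block workload below are made-up numbers, and a real flash translation layer is far more sophisticated than this):

```python
import random

CELL_COUNT = 100   # simplified "SSD" with 100 erase blocks
ENDURANCE = 1000   # erase cycles each block survives (made-up figure)
HOT_BLOCKS = 5     # a log file/DB hammering just a few logical blocks

def writes_until_failure(wear_level: bool) -> int:
    """Count writes until the first block exceeds its endurance."""
    erases = [0] * CELL_COUNT
    total = 0
    while max(erases) < ENDURANCE:
        logical = random.randrange(HOT_BLOCKS)  # hot, repeatedly-written data
        if wear_level:
            # redirect the write to the least-worn physical block
            physical = erases.index(min(erases))
        else:
            physical = logical  # write in place, wearing out the hot blocks
        erases[physical] += 1
        total += 1
    return total

random.seed(0)
naive = writes_until_failure(wear_level=False)
leveled = writes_until_failure(wear_level=True)
print(naive, leveled)  # levelling spreads wear across every block
```

With levelling, the drive only dies after essentially every block has been worn out; without it, the handful of hot blocks burn through their cycles while the rest of the drive sits nearly fresh.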

As you can imagine, SSDs make a lot of sense for very read-heavy applications, and likewise for laptop/workstation scenarios where the write load is low. For high-write tasks you'll kill even the best, most expensive SSD very quickly indeed.

As for using them in R10 or similar, well it always makes sense to locally protect your data and R10 is a great way of doing this.

– Chopper3

The technology for SSDs has improved significantly and there are some viable enterprise options ...

This article on "Enterprise Flash Drives" might be worth a read.

You will be paying out your arse for that initial investment though. The costs haven't really gone down since their inception.

Anandtech covers Seagate's enterprise "Pulsar" drive (that sounds like a spaceship engine), and the article says that, according to the drive's stated lifespan, it'll sustain about 6 petabytes of writes before it kicks the bucket.

– Daniel B.
  • The big difference between SSDs and EFDs is that EFDs have massive reserve spaces (I've seen over 300%, so a 100GB drive actually has 400GB+ of total space). This allows for extreme wear leveling, getting around the problem Chopper cites, for a while at least. – Chris S Jun 21 '11 at 12:40
  • Chopper responded to my comment along similar lines in your other question, citing his personal experience of Pulsar drives wearing out in a year for high-throughput video servers. The 6 PB figure should be accurate, but whether that's adequate for you will depend on how much use the disk sees in a year. – Daniel B. Jun 21 '11 at 14:13
  • Hmm, so as a ballpark figure, each SSD will sustain up to 6 PB of writes before it dies? – jtnire Jun 21 '11 at 15:33
  • Mmmmyes-ish. 1: you have to accept Seagate's word and Anand's math. 2: the 6PB is for the 200 GB drive, the 50GB only added up to around 2PB over 5 years. – Daniel B. Jun 21 '11 at 17:23
  • The rated endurance for SSDs is always quoted in terms of 4k 100% random IOPs. This is a worst-case scenario, so that makes sense, but you'll need to take into account your actual usage scenarios. If you know for a fact your usage actually results in large (eg, 256KB+) IO operations, then your SSDs will have much greater endurance. – Daniel Lawson Jun 23 '11 at 02:52
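Turning an endurance rating like that 6 PB figure into a lifetime estimate is simple arithmetic. A quick sketch — the daily write rate and write-amplification factor below are illustrative assumptions, not vendor specs:

```python
PB = 10**15  # petabyte in bytes (decimal units, as vendors quote)

rated_endurance_bytes = 6 * PB      # Anand's figure for the 200 GB Pulsar
daily_host_writes = 500 * 10**9     # assumed: host writes 500 GB/day
write_amplification = 2.0           # assumed: the FTL writes ~2x the host data

# Lifetime = total rated writes / actual flash writes per day
lifetime_days = rated_endurance_bytes / (daily_host_writes * write_amplification)
print(f"{lifetime_days / 365:.1f} years")
```

Under those assumptions the drive lasts well over a decade, but double the write rate (or the write amplification) and the lifetime halves — which is why busy multi-tenant VPS hosts can chew through drives far faster than the headline endurance number suggests.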