
I am going to format two 120 GB Intel 320 SSDs to be used for a high-traffic Drupal server. The server has a Xeon E1270 CPU and 32 GB RAM. I'm using Debian Squeeze 64-bit. Here are my questions:

  • What file system format suits best in this case: ext3, ext4, xfs or something else?
  • I tend to not use RAID 1, and instead format one disk to be devoted to MySQL and the other to the rest of the filesystem. I think this would minimise disk I/O delay and also reduce write cycles, thus enhancing the overall life expectancy of the disks. How do you evaluate this approach?
alfish
  • [This seems relevant.](http://serverfault.com/questions/339128/what-are-the-different-widely-used-raid-levels-and-when-should-i-consider-them) – MDMarra Apr 16 '12 at 13:14
  • "I tend to not use RAID 1" a.k.a. "I tend not to value my data or its availability" – EEAA Apr 16 '12 at 13:22
  • @ErikA, the 'a.k.a' is exactly what I put into question. – alfish Apr 16 '12 at 13:43
  • @alfish so you're saying that you don't value the data on your server and you don't value the availability of the server? If that's the case, it changes everything. – MDMarra Apr 16 '12 at 13:53
  • "How do you evaluate this approach" - With SSD's and what you describe, I'd advise you to use RAID 1, and have a good backup scheme in place. Why would you avoid RAID on a server? – Bart Silverstrim Apr 16 '12 at 14:02

3 Answers


What file system format suits best in this case: ext3, ext4, xfs or something else?

Most likely either ext4 or xfs. Format it each way and test your workload.
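A rough sketch of that kind of test, assuming the data disk shows up as /dev/sdb (a placeholder name), it holds nothing you need yet, and the fio package is installed:

```bash
# WARNING: wipes /dev/sdb. Device, mount point, and job parameters are examples only.
mkfs.ext4 /dev/sdb
mkdir -p /mnt/fstest
mount -o noatime /dev/sdb /mnt/fstest

# 70/30 random read/write mix with 4k blocks, loosely imitating a database-style workload
fio --name=dbtest --directory=/mnt/fstest --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=16 --runtime=120 --time_based --group_reporting

umount /mnt/fstest   # then repeat the run after: mkfs.xfs -f /dev/sdb
```

Compare the IOPS and latency numbers fio reports for each filesystem, ideally with a job that resembles your real Drupal/MySQL mix.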

I tend to not use RAID 1.

If you don't give a single shit about availability, then fine. If you do, I'd reconsider this approach.

format one disk to be devoted to MySQL and the other to the rest of the filesystem. I think this would minimise disk I/O delay and also reduce write cycles, thus enhancing the overall life expectancy of the disks. How do you evaluate this approach?

If the only server process running on this is mysql, there's not a ton of benefit to running it on a separate disk. If it is a server that runs apache and other processes as well, this makes a bit more sense. There will be slight performance gains by putting it on a separate physical disk, but I honestly would run the disks in RAID 1 ten times out of ten.


Seriously, though. If you care one bit about the users of the server, it's negligent to not run RAID. Think about it like this:

How frequently do you take backups? If it's daily, imagine that a disk dies right before the next backup window. How would your users react to losing a day's worth of work?

Now imagine that it takes you 4-6 hours to restore from backup, test, and bring everything back up. Now your users have lost a day's worth of work and haven't been able to use the server for the better part of the day.

Is it really worth the slight bit of extra performance? Probably not.

If you really want to separate your DB, get two more SSDs and run two RAID 1s or a single RAID 10.
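For reference, a minimal Linux software RAID 1 sketch with mdadm, where /dev/sda and /dev/sdb are placeholder device names (in practice you'd usually mirror partitions rather than whole disks and handle the boot loader separately):

```bash
# Build a two-disk mirror (destroys whatever is on both devices)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Put a filesystem on the array and mount it
mkfs.xfs /dev/md0
mount /dev/md0 /srv

# Record the array so it assembles at boot, and check sync/health status
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat
```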

MDMarra
  • SSD drives are going to fail, especially under high usage. I'd worry slightly less about lengthening their lifespans and more about planning for availability, given the inevitable death of the drive. – Bart Silverstrim Apr 16 '12 at 12:56
  • MDMarra, it is going to be an all-round server; it includes web, mail, etc. Do you mean 'reliability' by 'availability'? If so, what about the argument that splitting writes between two disks may indeed benefit the longevity of both disks, hence making the system more reliable? – alfish Apr 16 '12 at 13:06
  • @alfish Splitting the writes to separate spindles will extend the life, but SSDs tend to have higher failure rates, especially consumer-grade ones tossed into a high volume server. You'll most likely experience a disk failure long before you write the thing to death. By not having the disks in RAID, when a disk dies, you are going to lose everything on it. If you don't know what "reliability" and "availability" means in relation to RAID, uptime, and data integrity, then you may want to hire someone with more experience to give you a hand with this. – MDMarra Apr 16 '12 at 13:09
  • Availability meaning you have a disk failure but your server will still be able to serve data until you replace the drive. – Bart Silverstrim Apr 16 '12 at 13:09
  • SSD's are FAST and EXPENSIVE. But they BURN OUT faster than any magnetic disk, and it's not necessarily going to give warning, and it's not going to be pretty. Attempts you make to impose artificial wear-leveling will most likely not significantly increase their lifetime or reliability. Use RAID to keep the server running, unless you don't care about restoring data from scratch upon failure. With SSD's it could happen two months down the road or 2 years down the road. – Bart Silverstrim Apr 16 '12 at 13:11
  • Budget to have a replacement disk handy as well, since you're on borrowed time with only two disks in a RAID 1 configuration. Make sure backups are standing by. But if you don't care about downtime, use RAID 0 and you'll have plenty of speed. – Bart Silverstrim Apr 16 '12 at 13:12
  • @alfish Enterprise SSDs are generally pretty reliable, but the Intel 320 is not an enterprise SSD. Consumer SSDs are usually just fine in your notebook, but when you put them in a server, you're asking for trouble. That said, you're asking questions that are outside of the scope of your original question. If you really want answers to them, you should ask about them in a new question. – MDMarra Apr 16 '12 at 13:26
  • See http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html for discussion about failure rates of SSD devices. – Jeff Ferland Apr 16 '12 at 13:33
  • More discussion about SSD devices: failure rates, statistics, different vendors, etc: http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html So, some Intels may be respectably good. – Jeff Ferland Apr 16 '12 at 13:39
  • SSDs have a limited number of writes they can do before they die. If it's a multi-level cell design (MLC), it'll store more data, but have to write more to service the same workload. There are things that the firmware will do to ensure the maximum possible life of these disks (wear leveling, trim, etc.), but eventually all SSDs fail. When this happens, do you want to have to come back from backups, or just swap in a new SSD and keep going? – Basil Apr 16 '12 at 13:43
  • @JeffFerland Keep in mind that all of that info is for consumer-grade drives; enterprise SSD storage implements all manner of redundancy internal to the device, and a hefty number of spare flash chips to take the place of those that fail. See [here](http://www.fusionio.com/white-papers/fusion-io-a-new-standard-for-enterprise-class-reliability/), for instance. – Shane Madden Apr 17 '12 at 02:37
  • @BartSilverstrim: Do you have any examples of SSDs actually "burning out" without warning? Modern SSDs publish their wear level via SMART (see the smartctl sketch after these comments), and are also very conservatively rated in terms of how much data you can actually write to them. I'm assuming we're not talking about a catastrophic controller failure/firmware failure here; the words "burning out" suggest exhausting your p/e cycles. – Daniel Lawson Apr 18 '12 at 21:27
  • @JeffFerland Those codinghorror figures are interesting, but not in any way convincing. He's talking about a population of 8 drives. As a counter example, I've shipped well over a thousand SSDs, a mix of Intel X25-e, X25-M G1 and G2, and Intel 320 series, and we've never seen flash wearout on those, and only seen actual catastrophic failure a handful of times. (So my experience corroborates with the tomshardware discussion). – Daniel Lawson Apr 18 '12 at 21:33
  • @DanielLawson On hand, no. I didn't keep records :-) I've had issues, I've read of others with issues. SSD's don't have the established records that traditional disk tech has, and I'm sure the tech is improving but I still advise redundancy in situations like this. – Bart Silverstrim Apr 18 '12 at 23:22
  • @BartSilverstrim I'd advise redundancy too! I've yet to see or hear of someone who has legitimately hit flash wear out on a modern SSD though (eg, Intel X25-era or later). I'm sure it happens, but I think there is a vast body of misinformation about it on the internet. From my experience, SSDs definitely don't just "burn out" without warning (Assuming you're talking about flash endurance), and even if they do they are designed to fail to a read-only state, which is annoying but doesn't trash your data. SSDs do still fail for other reasons of course, so redundancy is still a good thing :) – Daniel Lawson Apr 18 '12 at 23:31
  • @DanielLawson I think the article on TomsHardware also mentions that part of the issue is that SSD drives tended to simply die out abruptly rather than have a more "graceful" failure that platter drives have, FWIW. I don't have the article right in front of me at the moment though. – Bart Silverstrim Apr 19 '12 at 11:05
  • In the end, SSD tech is still becoming more mainstream, and until they're really common to consumer tech it's going to have some wariness associated. Companies like Apple making them standard on a line of product will boost adoption rates, though, and it's a good thing! – Bart Silverstrim Apr 19 '12 at 11:06
  • This answer suggests that RAID provides backup. I'm surprised nobody said this: "RAID isn't a backup. Backup is backup." If your database gets screwed up, RAID won't save your data. If your SSD fails, a backup will save your data but won't keep the system running uninterrupted. So if you don't care about 24/7 reliability but just want your data safe, go ahead without RAID, but use a cheap HDD for periodic backups (or some kind of database duplication, if that's a thing in MySQL). – sudo Jul 08 '16 at 01:01
  • @sudo maybe you've misread the answer? I think it's pretty clear that I'm speaking about service availability. The only time I mention backup is to talk about the down time that would be incurred from a SPOF when you have to do a restore. Nowhere do I say that you shouldn't do backups if you have RAID. – MDMarra Jul 08 '16 at 01:55
  • ... Indeed I have. Long day. – sudo Jul 08 '16 at 02:15
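On the SMART point raised above, a hypothetical one-liner for checking reported wear, assuming the smartmontools package is installed and the drive is /dev/sda (a placeholder name):

```bash
# Dump SMART attributes and pick out the wear/endurance counters the drive exposes
smartctl -A /dev/sda | grep -i -E 'wear|host_writes|lbas_written'
```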

Not to repeat myself:

and others:

So EXT4 is out of scope.

UPD.: And I'd add another link, consonant with my feeling that COW FSes …

poige

With two disks you can go with either RAID 1 (mirror) or RAID 0 (striping). Of course, for performance you're going to pick RAID 0 over RAID 1. There's actually a wiki here that covers all the RAID levels; I'll look for it and link it for you. Be absolutely sure you have some sort of backup system in place to take images of your data. If you lose one disk (in the RAID 0), you lose it all.
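As a very rough illustration of that backup point, with made-up paths, database name, and destination host (and assuming MySQL credentials come from ~/.my.cnf), a nightly job might look something like:

```bash
#!/bin/bash
# Hypothetical /etc/cron.daily/site-backup
DATE=$(date +%F)

# Consistent logical dump of the (assumed) "drupal" database
mysqldump --single-transaction drupal | gzip > /var/backups/drupal-$DATE.sql.gz

# Copy dumps and site files to a separate machine, since RAID 0 gives no redundancy
rsync -a /var/backups /var/www backupuser@backuphost:/srv/backups/web/
```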

For the file system you should probably go with EXT4. Here's a link where a few of the Linux file systems were benchmarked against each other.

RomeNYRR
  • Might want to double-check the link for the file system benchmarks. When I look, I'm getting 404s when I click on the next page of the article. – Bart Silverstrim Apr 16 '12 at 13:02
  • Updated the link – RomeNYRR Apr 16 '12 at 13:10
  • If you use RAID 0 or 1 instead of keeping the disks separate, the system and database will be running on the same disk. IDK if that's desirable. For one, it makes the system very unresponsive when the DB is being used. Maybe it also causes performance problems in the DB. If you've got 2 nice SSDs to run a DB on, I'd also get a cheap HDD just for the OS and users' files instead of making the DB share the SSDs with that stuff. – sudo Jul 08 '16 at 01:05