14

I've been reading lately about write caching, NCQ, firmware bugs, barriers, etc. regarding SATA drives, and I'm not sure which settings would keep my data safe in case of a power failure.

From what I understand, NCQ allows the drive to reorder writes to optimize performance, while keeping the kernel informed of which requests have been physically written.

The write cache lets the drive acknowledge a request much faster, because it doesn't wait for the data to reach the physical platters.

I'm not sure how NCQ and Write cache mix here...

Filesystems, especially journalled ones, need to be sure when a particular request has been written down. Also, user-space processes use fsync() to force the flush of a particular file. That call to fsync() shouldn't return until the filesystem is sure that the data is written to disk.
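In code, the pattern I have in mind looks like this (a minimal Python sketch; the function name is mine):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and only return once the kernel has been asked to
    persist it. fsync() blocks until the data (and, assuming barriers
    or FUA work correctly, the drive cache) has been flushed."""
    with open(path, "wb") as f:
        f.write(data)         # may still sit in user-space buffers
        f.flush()             # push user-space buffers into the page cache
        os.fsync(f.fileno())  # ask the kernel to persist to the device

durable_write("/tmp/important.dat", b"must survive power loss")
```

Note that fsync() on the file does not persist the directory entry for a newly created file; for that you also have to fsync() the containing directory.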

There's a feature (FUA, Force Unit Access), which I've seen only on SAS drives, that forces the drive to bypass the cache and write directly to disk. For everything else, there are write barriers, a mechanism provided by the kernel that triggers a cache flush on the drive. This forces the entire cache to be written out, not just the critical data, which can slow the whole system down if abused, for example by frequent fsync() calls.

And then there are drives with firmware bugs, or that deliberately lie about when data has been physically written.

Having said this, there are several ways to set up the drives/filesystems:

A) NCQ and write cache disabled
B) Just NCQ enabled
C) Just write cache enabled
D) Both NCQ and write cache enabled
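For reference, this is how I'd toggle the four setups on Linux: hdparm -W for the drive's volatile write cache, and the sysfs queue_depth knob for NCQ (depth 1 effectively disables it, 31 is the SATA maximum). /dev/sda is just an example device; a sketch that builds the commands:

```python
def setup_commands(option: str, dev: str = "sda") -> list[str]:
    """Return the shell commands for each configuration:
    A) no NCQ, no cache; B) NCQ only; C) cache only; D) both."""
    ncq = option in ("B", "D")  # NCQ enabled?
    wc = option in ("C", "D")   # write cache enabled?
    return [
        # queue_depth 1 effectively disables NCQ; 31 re-enables it
        f"echo {31 if ncq else 1} > /sys/block/{dev}/device/queue_depth",
        # hdparm -W0/-W1 disables/enables the drive's write cache
        f"hdparm -W{1 if wc else 0} /dev/{dev}",
    ]

print(setup_commands("A"))
# ['echo 1 > /sys/block/sda/device/queue_depth', 'hdparm -W0 /dev/sda']
```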

I'm assuming barriers are enabled. BTW, how can I check whether they are actually enabled?
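The closest I've found is inspecting the mount options in /proc/mounts for barrier/nobarrier; a rough Python sketch of that check (my assumption that a missing option means the filesystem default, e.g. barrier=1 on ext4, may not hold everywhere):

```python
def barrier_status(mounts_line: str) -> str:
    """Report barrier state from one line of /proc/mounts.
    The options are the fourth whitespace-separated field."""
    opts = mounts_line.split()[3].split(",")
    if "barrier=1" in opts or "barrier" in opts:
        return "enabled"
    if "barrier=0" in opts or "nobarrier" in opts:
        return "disabled"
    return "default (assumed enabled on ext4)"

print(barrier_status("/dev/sda1 / ext4 rw,relatime,barrier=1 0 0"))
# enabled
```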

In case of power loss, while actively writing to the disk, my guess is that option B (NCQ, no cache) is safe, both for filesystem journal and data. There may be a performance penalty.

Option D (NCQ + cache), if using barriers or FUA, should be safe for the filesystem journal and for applications that use fsync(). It would be bad for data still waiting in the cache, and it's up to the filesystem to detect that (checksumming), but at least the filesystem hopefully won't be left in an inconsistent state. Performance-wise, it should be better.

My question, however, stands... Am I missing anything? Is there any other variable to take into account? Is there any tool that could confirm this and verify that my drives behave as they should?

julianjm
  • 270
  • 2
  • 5
  • What's the application in your situation? You're overlooking the effect or influence of a RAID controller and its cache on the setup. What operating system are you focusing on as well? Which filesystem are you considering? – ewwhite Dec 25 '12 at 22:43
  • No specific application. I've been using software RAID1 for years, but never dug into the problem that write caches represent. Also, having looked into btrfs, for which there is no reliable fsck yet, makes me question what I can do to prevent corruption if I were to use it. – julianjm Dec 25 '12 at 23:57
  • 1
    Use ZFS on Linux instead and couple with a purpose-built ZIL device. I use the [DDRDrive](http://www.ddrdrive.com/) for ZFS systems :) – ewwhite Dec 26 '12 at 00:01
  • Are you using ZFS with FUSE? – julianjm Dec 26 '12 at 00:12
  • No, I'm using it directly on CentOS 6.3 using RPMs compiled from [ZFSonLinux.org](http://zfsonlinux.org/). It works wonderfully! And I'm a [hardcore ZFS user](http://serverfault.com/search?tab=votes&q=user%3a13325%20zfs) from the Solaris/Nexenta camp. – ewwhite Dec 26 '12 at 00:36
  • 2
    Be sure to get a UPS. – Michael Hampton Dec 26 '12 at 01:51

2 Answers

12

For straight-up enterprise systems, there is an additional layer in the form of the storage adapter (almost always a RAID card) on which yet another layer of cache exists. There is a lot of abstraction in the storage stack these days, and I went into deep detail on this in a blog series I did, Know Your I/O.

RAID cards can bypass the on-disk cache, and some even allow toggling this feature in the RAID BIOS. This is one reason why enterprise disks are enterprise: their firmware allows such things where consumer drives (especially 'green' drives) don't. This feature directly addresses the case you're concerned about: power failure with uncommitted writes. The RAID card's cache, which should be either battery- or flash-backed, will be preserved until power returns and those writes can be recommitted.

Certain enterprise SSDs include an onboard capacitor with enough oomph to commit the onboard cache before fully powering down.

If you're working with a system whose disks are directly connected to the motherboard, there are fewer assurances. Unless the disks themselves have the ability to commit their write cache, a power failure will indeed cause a loss. The XFS filesystem earned a reputation for unreliability due to its inability to survive just this failure mode; it was designed to run on full-up enterprise systems with engineered storage survivability.

However, time has moved on and XFS has been engineered to survive this. The other major Linux filesystems (as well as those on Windows) already had engineering to survive this very failure mode. How it's supposed to work is that lost writes will not show up in the FS journal, so the filesystem will know they weren't committed, and the corruption will be safely detected and worked around.

You do point to the one problem here: disk firmware that lies. In that case the FS journal will have made a wrong assumption about reality, and corruption may not be detected for some time. Parity RAID and mirror RAID can work around this, as there should be another committed copy to pull from. But single-disk setups won't have that cross-check, and so will actually fault.

You get around the firmware risk by using enterprise-grade drives that get much more validation (and are tested against your presumed workload patterns), and by designing your storage system so that it can survive such untruths.

sysadmin1138
  • 131,083
  • 18
  • 173
  • 296
  • I understand that under hardware RAID it's up to the controller to do the caching (hopefully battery-backed), and it's advisable to have the actual disks' cache disabled. In my case (didn't mention it) I'm using software RAID. It seems that the write cache is not recommended, as it will cause data loss. Maybe not catastrophic (filesystem corruption), but data loss anyway. I'll refrain, for the time being, from migrating my softraid1+ext4 to btrfs+raid1. :) – julianjm Dec 26 '12 at 00:21
  • RAID does not help with this since the data can just as easily sit in both drives write caches as one drive. – psusi Dec 26 '12 at 03:15
  • @psusi It is not a 100% mitigation, but it does provide *added* protection. It's a timing problem. Individual RAID implementations differ. – sysadmin1138 Dec 26 '12 at 03:27
  • It isn't a mitigation at all. The secondary drive does not matter at all, since in the event of a crash, the primary will be copied back over the secondary to recover. Hence, you are back to whether or not the write made it to the (first) drive or not. – psusi Dec 26 '12 at 03:41
3

The filesystem journal originally waited for the write to the journal to complete before issuing the write to the metadata, assuming there was no drive write cache. With drive write caching enabled, this assumption is broken and can cause data loss. Thus, barriers were created. With barriers, the journal can make sure that the write to the journal completes before the write to the metadata, even if the disk is using write caching.

At the disk driver layer, the barrier forces a disk cache flush before subsequent IO is sent down, when the drive reports that it has a write cache and it is enabled. Otherwise this is not needed, so the barrier just prevents issuing the subsequent IO to the drive until the previous IO has completed. NCQ just means it might have to wait for more than one pending request to complete before issuing more.
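To illustrate the ordering guarantee, here is a toy model in Python (the names are illustrative, not the actual kernel mechanism): writes land in a volatile cache and only reach the platter on an explicit flush, which is what the barrier triggers before the metadata write is issued.

```python
class CachedDisk:
    """Toy model of a drive with a volatile write cache."""
    def __init__(self):
        self.cache = []    # volatile: lost on power failure
        self.platter = []  # durable

    def write(self, block):
        self.cache.append(block)   # acknowledged immediately

    def flush(self):               # what a write barrier triggers
        self.platter.extend(self.cache)
        self.cache.clear()

    def power_loss(self):
        self.cache.clear()         # everything still cached is gone

def journal_commit(disk, use_barrier):
    disk.write("journal entry")
    if use_barrier:
        disk.flush()  # journal is durable before metadata is issued
    disk.write("metadata")

safe = CachedDisk()
journal_commit(safe, use_barrier=True)
safe.power_loss()
# The journal survived; the metadata write was lost with the cache,
# but the journal lets the filesystem detect and recover from that.
print("journal entry" in safe.platter)   # True
print("metadata" in safe.platter)        # False

unsafe = CachedDisk()
journal_commit(unsafe, use_barrier=False)
unsafe.power_loss()
print(unsafe.platter)                    # [] -- both writes lost
```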

psusi
  • 3,247
  • 1
  • 16
  • 9
  • I think barriers protect you from journal corruption (if the filesystem requests so), but I'm not sure about the actual data on the files... Issuing a cache flush after every write would make the write cache useless, wouldn't it? – julianjm Dec 26 '12 at 09:15
  • @julianjm, of course... cached file data is always lost in the event of a crash, with or without NCQ or drive write caches. – psusi Dec 26 '12 at 15:00