
I'm trying to use Windows Server Backup to back up a RAID array on my new server. But when I do, I run into this error:

[screenshot: Windows Server Backup error dialog]

The server is running Windows Server 2012 R2 and the array in question is 20TB in size (with 18TB usable); less than 1TB is currently being used.

I know that in Windows Server 2008, you couldn't back up volumes larger than 2TB due to a limitation in the VHD format, but that Microsoft has since switched to VHDX, which allows volumes of up to 64TB to be backed up. I'm also aware that in order to take advantage of this, the drive in question must be GPT.
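As a rough sanity check on those numbers (my own back-of-the-envelope sketch; the VHD format's real ceiling is slightly lower in practice), the old 2TB cap falls out of 512-byte sectors addressed with 32-bit values:

```python
# Back-of-the-envelope arithmetic for the VHD vs. VHDX size limits.
# Assumption: VHD's ~2 TB ceiling comes from addressing 512-byte
# sectors with 32-bit values; VHDX's 64 TB figure is the documented
# format limit.
TIB = 1024 ** 4

vhd_limit = (2 ** 32) * 512        # 32-bit sector numbers x 512-byte sectors
vhdx_limit = 64 * TIB              # documented VHDX maximum

print(f"VHD ceiling  ~ {vhd_limit / TIB:.0f} TiB")
print(f"VHDX ceiling = {vhdx_limit / TIB:.0f} TiB")
```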

I have confirmed that my disk is, in fact, GPT.

[screenshot: Disk Management showing the disk is GPT]

When I run Windows Server Backup, I am using the "Backup Once" option and backing up to a network drive. I am also using what I believe to be standard settings. But, when I attempt to run the backup, I am presented with the error seen above.

I'm not sure why this is capping out at 16.7TB, since Windows Server Backup can back up volumes of up to 64TB. Can anyone give me some insight into why this may be happening or what I might be doing wrong?

Update: I've received new drives and created the array again but I'm still getting the same error. I can confirm that my cluster count is under 2^32.

[screenshot: fsutil output confirming the cluster count is under 2^32]

I read in this question that Windows backup apparently doesn't support backing up to or from disks that don't use either native 512-byte or 512e sectors. Looking at the fileshare I'm attempting to back up to, it uses 4K sectors. Could this be the underlying issue? If it helps, the share that I'm trying to back up to is hosted on a CentOS server.
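For context, the 512/512e distinction comes from the logical and physical sector sizes the disk reports. A rough classifier (my own illustrative sketch, not part of any Windows tooling):

```python
def sector_format(logical: int, physical: int) -> str:
    """Classify a disk by its reported logical/physical sector sizes.

    512n - native 512-byte sectors
    512e - 4K physical sectors emulating 512-byte logical sectors
    4Kn  - native 4K sectors (the case Windows backup reportedly rejects)
    """
    if logical == 512 and physical == 512:
        return "512n"
    if logical == 512 and physical == 4096:
        return "512e"
    if logical == 4096 and physical == 4096:
        return "4Kn"
    return "unknown"

print(sector_format(512, 4096))  # a 512e ("advanced format") drive
```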

Chris Powell
  • It is a 'protected' message, not a space message, per se. The 'standard setting' for a Windows server backup is to use DPM - Data Protection Manager. It appears there is a software limitation when using DPM. You might want to see if the settings will allow a byte-for-byte copy to take place, without so-called 'protection' enabled, assuming you have a way to restore a byte-for-byte copy if you need to. – Andrew S Feb 17 '15 at 19:58
  • 1
    @AndrewS No, that's a message from Windows Server Backup. "Protected" seems to be the new buzzword in backups these days. Even my Avamar (enterprise d2d backup product) dashboard tells me it has X TB of data "protected" for us. – HopelessN00b Feb 17 '15 at 20:21
  • 2
    That's an unfortunate misuse of the word 'backup'. The ITIL gods are getting angry, no doubt. But, as it turns out the FILE SIZE limit on NTFS is 16.7TB, so that is what the problem is - the backup (I am guessing) is one giant file and 16.7TB is the limit for that size. Microsoft and the other vendors can mangle it and call it a 'protection' or any other idiotic marketing slug they want, I'll still call it a 'backup'. – Andrew S Feb 17 '15 at 20:44
  • @AndrewS It's used as a measure of the original data size, before data deduplication and snapshotting and such. [And the file size limit for NTFS on Server 2012 is 256TiB, not 16 TiB](http://en.wikipedia.org/wiki/NTFS). – HopelessN00b Feb 17 '15 at 22:00
  • FWIW: same issue here. Server 2016, 20 and 63 TB drives, 16 KB per cluster on the volume, under 2^32 clusters per volume, physical disk with 512-byte sectors, and GPT. VSS shadows work without issue; backups get the same error as you. I'm about to give up and write a damn PowerShell script that takes a snapshot and runs a predetermined script per folder, and for files at root, which will be much more of a pain to manage... – Cookie Monster Sep 27 '19 at 14:03

2 Answers


OK, Windows Server Backup is failing because of the cluster size you're using on the volume. (I'll explain exactly why at the end, after the more important issue of your RAID array being a time bomb.)

But before addressing the backup issue, we need to address the issue with your RAID setup.

Don't use RAID5 with large disks, and don't use RAID5 in arrays with many members. With only one parity disk, you are virtually certain to run into an unrecoverable read error (URE) or another disk failure with that many large disks, so you have no real redundancy. If you have to use parity RAID, use RAID6, but even then, parity RAID comes with serious drawbacks, so think long and hard before you settle on it.

I would recommend breaking that 20 TB array down and recreating it in RAID 10. You'll get much better performance and real redundancy for your data. Since you're only using 1 TB anyway, you still have 9 TB left for future growth, and frankly, if you hit that, you need to be looking into a dedicated NAS device or storage server.

Once you get your RAID array into a reasonable state, you will solve this problem as well, because the volume will be smaller than the 16 TiB it's currently complaining about. But, if you want to know, it's not the size of the array it has a problem with; it's the number of clusters. You need fewer than 2^32 clusters in the volume you're backing up. Change your cluster size from 4 KB to 8 KB and you should be good to go.
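The arithmetic behind that limit is straightforward (a quick sketch; the exact NTFS cap is 2^32 - 1 clusters per volume):

```python
# Max addressable NTFS volume size = cluster size * (2**32 - 1) clusters.
def max_ntfs_volume_bytes(cluster_size: int) -> int:
    """Largest volume, in bytes, addressable at the given cluster size."""
    return cluster_size * (2 ** 32 - 1)

TIB = 1024 ** 4
for kb in (4, 8, 16, 64):
    limit = max_ntfs_volume_bytes(kb * 1024)
    print(f"{kb:>2} KB clusters -> ~{limit / TIB:.0f} TiB max volume")
```

With the default 4 KB clusters, that works out to roughly 16 TiB, which is where the number in the error message comes from; 8 KB clusters double it to roughly 32 TiB.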

To check your cluster size, use:

fsutil fsinfo ntfsinfo F:

And you should get something like the screenshot below.

[screenshot: fsutil fsinfo ntfsinfo output]

If you're curious where that 16 TiB number comes from, this MSDN blog post should clear it up for you.

HopelessN00b
  • Thank you for your concern about RAID. I tried to convince my boss to let me use RAID6 on it, but was unsuccessful. It's actually in an array of 5TB disks, not 2TB disks (sorry about that, I should have specified). The reason that so little data is being used on it is because we haven't put it into production yet. But, it will eventually be our new NAS. And we also perform backups very often so we can easily recover from a degraded array. So, does that mean that if I recreated the array with a larger stripe size that I would not have this issue? – Chris Powell Feb 17 '15 at 21:10
  • 1
    @ChrisPowell Sorry, I misspoke (mistyped). I meant to say cluster, not stripe. You need to reformat the array, except this time, select 8 KB (or more, if you want) for your cluster size. – HopelessN00b Feb 17 '15 at 23:13
  • 2
    @ChrisPowell Thanks for putting the effort into asking a good question... and one I could answer as well, bonus. :) – HopelessN00b Feb 18 '15 at 02:42
  • 1
    Just an update; you'll be happy to know that I talked to my boss again and I convinced him to let me switch the NAS to RAID6 and upgrade the drives to 6TB. Thank you again for your help. – Chris Powell Mar 05 '15 at 18:05
  • Another update: I just got the drives in, I set up the array and formatted with an 8KB cluster size and I'm still getting this error. Any advice? I checked my total clusters and it's well under 2^32. – Chris Powell Mar 27 '15 at 18:03
  • @ChrisPowell Same number in the error message, too? Could you edit the updated information into your question (`fsutil` output, for example), and let me know? – HopelessN00b Mar 27 '15 at 21:04
  • Unfortunately yes. Same error message. I've added updated info to the question. – Chris Powell Mar 30 '15 at 15:52
  • This might be _a_ constraint, but it's definitely not the only one. I'm running into the same issue with 16 KB per cluster and fewer than 2^32 clusters. – Cookie Monster Sep 27 '19 at 14:21

16.7 TB is the file size limit for the NTFS file system. The file size limit of NTFS5 is 16 exabytes. Since this is a shared storage drive, it might well be formatted as NTFS, not NTFS5. You will need to check. All of the downvotes I'm getting are from people who assume you are writing to an NTFS5 file system.

Andrew S
  • Minus all you want - this answer is correct – Andrew S Feb 17 '15 at 20:48
  • 1
    WSB won't write a 16 TiB file for ~1 TiB of data to backup, so it's not that. [The actual source of the problem is the NTFS implementational limit of 2^32 -1 clusters, combined with the a 4KB cluster size](http://blogs.msdn.com/b/brendangrant/archive/2009/02/13/the-myth-of-the-16tb-limit.aspx), which has been the default for a very long time. – HopelessN00b Feb 17 '15 at 22:55