7

I'm setting up a new file server with Windows Server 2016 on a machine with 16 GB RAM and ~20 TB of disk. The server is going to be handling files for 15 people, mainly large files used by graphic designers.

This is the first Windows server in the organisation (i.e. there is no existing AD domain to join it to).

There will be two of these servers, each at a different site, replicating files via DFS-R.

Should I set up a DC and File Server running together on the bare metal, or should I use the virtualisation rights that come with Windows Server 2016 Standard (which cover two VM operating system environments) to run just Windows Server + Hyper-V on the bare metal, with a separate VM for each of the Domain Controller and the File Server?

I'm aware that 16 GB of RAM is not a huge amount, and there's a fair bit of overhead in running 3 copies of Windows vs just one - more RAM is fairly easily sourced, though, if that turns out to be the only limitation. I would reserve 2-4 GB for the Hyper-V host, 2-4 GB for the DC and 8-12 GB for the File Server.
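
As a quick sanity check on that split (just the figures above, in plain Python):

```python
# Rough RAM budget for the proposed split, using the ranges above (in GB).
total_ram = 16

budget = {
    "Hyper-V host (management OS)": (2, 4),
    "Domain Controller VM": (2, 4),
    "File Server VM": (8, 12),
}

low = sum(lo for lo, hi in budget.values())
high = sum(hi for lo, hi in budget.values())

print(f"Total allocation: {low}-{high} GB against {total_ram} GB installed")
print(f"Low end fits:  {low <= total_ram}")    # True  (12 GB)
print(f"High end fits: {high <= total_ram}")   # False (20 GB)
```

So the low end of those allocations fits in 16 GB, but the high end already assumes a RAM upgrade.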

There is a pair of 1 TB disks mirrored for the boot drive. Were I to go the virtualisation route, I would create another partition on the boot mirror, formatted as ReFS, to hold the C: drives for each of the VMs.

There are then 6x 3 TB disks in RAID 5 - again, if I were virtualising things, this array would also be formatted as ReFS, with one great big virtual disk created on it for file storage.
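
For reference, the usable capacity those two arrays work out to (plain arithmetic, assuming standard RAID 1 / RAID 5 overhead and decimal-TB drive sizes):

```python
# Usable capacity implied by the arrays described above.
TB = 1e12      # bytes in a decimal terabyte (how drives are marketed)
TIB = 2**40    # bytes in a binary tebibyte (how Windows reports sizes)

boot_mirror = 1 * TB             # 2x 1 TB in RAID 1 -> 1 TB usable
data_raid5 = (6 - 1) * 3 * TB    # 6x 3 TB in RAID 5 -> (n - 1) x 3 TB usable

print(f"Boot mirror: {boot_mirror / TB:.0f} TB ({boot_mirror / TIB:.2f} TiB)")
print(f"Data array:  {data_raid5 / TB:.0f} TB ({data_raid5 / TIB:.2f} TiB)")
# Data array: 15 TB, roughly 13.6 TiB once formatted -- so the ~20 TB above
# is raw disk, not what will actually be available for file storage.
```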

Kai Howells

5 Answers

7

Yes, you should virtualize even considering the overhead. There is no point in running bare-metal server installations nowadays (the only exception being legacy operating systems).

Reconsider using RAID5 on 3 TB hard drives: there is a real chance the array won't survive another long rebuild. Today, RAID5 only really makes sense with SSDs, where rebuilds are fast and unrecoverable read errors are far less likely.
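
To put a rough number on that rebuild risk, here is the usual back-of-the-envelope calculation; the URE rate is an assumption (1 error per 1e14 bits read, the common consumer/nearline spec), so check it against the data sheet for the actual drives:

```python
import math

# Odds of hitting at least one unrecoverable read error (URE) while
# rebuilding a 6x 3 TB RAID 5: the 5 surviving drives must be read in full.
ure_rate = 1e-14               # assumed errors per bit read (consumer/nearline spec)
bits_to_read = 5 * 3e12 * 8    # 5 surviving drives x 3 TB x 8 bits per byte

# P(no URE) = (1 - rate)^bits, well approximated by exp(-rate * bits)
p_ure = 1 - math.exp(-ure_rate * bits_to_read)

print(f"Bits read during a full rebuild: {bits_to_read:.1e}")
print(f"Chance of at least one URE:      {p_ure:.0%}")
# Roughly 70% with these assumptions; drives rated at 1 error per 1e15 bits
# bring it down to about 11%. Either way, a single-parity rebuild on big
# spinning disks is a gamble.
```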

Do not use DFS-R. Its inability to replicate open files and its awful switch-over logic (DFS-R doesn't know which server has the latest consistent data) can lead to very bad results, especially in virtualized environments.

Use Storage Replica or StarWind vSAN Free for replication.

Here's an example of using Storage Replica for deploying HA File Server in Stretched Cluster configuration: https://docs.microsoft.com/en-us/windows-server/storage/storage-replica/stretch-cluster-replication-using-shared-storage

And here's an example of how to build active-active HA File Server with StarWind vSAN: https://www.starwindsoftware.com/technical_papers/Microsoft-Hyper-V-2012-R2-Dedicated-SAN-scenario-Basic-2-node-Setup.pdf

Hope it helps.

B.J.Goodman
    Actually the other exception is database servers. If you run a non-trivial database server, even today, you may want the best IO you can get and a lot of RAM, in which case virtualization is counterproductive. – TomTom Jul 31 '17 at 15:06
    I should mention that the two sites are on different continents, with something like 20 Mbps connectivity between them - I don't feel that a SAN would be appropriate for this deployment. They're also in different time zones, so as Site A finishes for the day, Site B comes online, and the likelihood of two people editing the same file at the same time is exceedingly low. – Kai Howells Aug 01 '17 at 00:00
    @KaiHowells if you still consider DFS-R, read the supported and unsupported scenarios on TechNet carefully. It is not a good replication choice for a geo-distributed file share. Consider the scenario where an Excel file is left open overnight, and small stuff like that. There is little consistency to be found with DFS-R. – Grigory Sergeev Aug 03 '17 at 11:15
    @KaiHowells okay, considering the bad WAN connection, you should still be able to use asynchronous replication, which is present in both of the above solutions. However, keep in mind that the initial full-replica seeding may take a really long time depending on the amount of data you'd like to replicate. I still can't recommend DFS-R here. – B.J.Goodman Aug 03 '17 at 15:34
5

1) Virtualize everything. There's no point in running anything bare metal (OK, there are some very niche cases, but yours isn't one of them for sure).

2) You can use Hyper-V as a file server, but make sure you have it properly licensed; just using free Hyper-V Server will still require you to at least buy CALs. I'd talk to your Microsoft sales rep with the EULA in hand.

BaronSamedi1958
4

Actually, you could create a free SMB3 file server on Hyper-V Server 2016. Hyper-V Server 2016 has been developed specifically for running virtual machines, and according to the Microsoft EULA it is not recommended to repeat the steps in the linked article, because doing so is a violation of the license agreement. The reason an SMB file share can be created on Hyper-V Server 2016 at all is simple: all Windows servers require SMB 1/2/3 to work, and Hyper-V Server 2016 is no exception. That does not mean, however, that you should run unsupported services on a GUI-less Hyper-V Server 2016.

Source: https://www.starwindsoftware.com/blog/free-smb3-file-server-on-hyper-v-2016

Net Runner
    This may be useful in a personal situation or a test environment; however, I cannot legally deploy solutions for my clients that are in violation of the Microsoft EULA. – Kai Howells Aug 03 '17 at 22:14
    You don't violate any EULA as long as you buy the proper number of CALs. I'd suggest talking to your Microsoft sales rep and discussing licensing with them. – BaronSamedi1958 Aug 04 '17 at 08:32
2

I would lean towards virtualizing it, since it affords you more flexibility. If the hardware gets marginal, or if there's an issue the manufacturer can't/won't resolve, you can just do an online migration to another Hyper-V server.

Really, the only downsides of virtualizing it are:

  • ~4 GB less usable RAM due to overhead
  • Potential to use bad things like snapshots for DFS-R (Don't ever revert to a snapshot while using DFS-R. In fact, forget they exist.)
  • Thanks, this was pretty much my thinking. Good point re: snapshots and dfs-r. I don't believe that snapshots are a good idea on a domain controller either, so I won't be touching them at all... – Kai Howells Jul 28 '17 at 21:25
  • Is that really so bad? I am asking because DFS-R is used on domain controllers for the system volume, and domain controllers have explicitly supported rollback for some time. – TomTom Jul 31 '17 at 15:05
  • DFS-R on 2012+ DCs supports VMGenID and will sync non-authoritatively once it detects a VMGenID rollback, but I haven't seen anything to suggest that it works for anything but the DFS-R replication of the System Volume on a DC. I believe general-purpose DFS-R may just stop replicating if it detects a rollback, and you will have to do a resync, which is a nightmare scenario for file servers with large file counts. – Genericname12 Jul 31 '17 at 19:28
-2

In your case, not using virtualization seems like a waste of resources and also limits how much you can accomplish without needing to purchase additional hardware/servers. Also, DFS Replication requires Active Directory Domain Services, so you'll need to create an AD domain.

joeqwerty
    You'll note that I mentioned setting up a domain controller, either on the same machine as the file server were I to go bare metal, or in its own VM were I to virtualise. – Kai Howells Aug 03 '17 at 22:14