I'm planning on building my first NAS box and currently I'm considering FreeNAS and ZFS for it. I read up on ZFS and its feature set sounds interesting, although I will probably only use a fraction of it.
Most guides say that the recommended rule of thumb is that you need 1 GB of (ECC) RAM for every TB of disk space in your pool. So my question is: what is the actual (expected) impact of ignoring this rule?
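Applied to the setup below, the rule of thumb is simple arithmetic (the disk count and sizes are the ones from this question; this is just the 1 GB/TB guideline spelled out):

```shell
# Rule of thumb: 1 GB of RAM per TB of pool capacity
pool_tb=$((4 * 5))   # 4 disks x 5 TB = 20 TB of raw capacity
echo "${pool_tb} GB RAM recommended"   # prints "20 GB RAM recommended"
```

So by the rule, a 20 TB pool would call for 20 GB of RAM, which is already more than the 16 GB the planned hardware supports.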
Here is a setup of someone who built a 71 TiB NAS with ZFS and 16 GB of RAM. According to him it runs like a charm. He uses Linux, however (in case that makes a difference).
So apparently you don't actually need 96 or even 64 gigs of RAM to run such a large pool. But the rule must be there for a reason. So what happens if you do not have the recommended amount of RAM? Is it just a bit slower, or do you run the risk of losing data or of only being able to access your data at a snail's pace?
I realize that this also has a lot to do with which features will be used, so here are the parameters I'm considering:
- It's a home system
- 16GB ECC RAM (the maximum supported by the setup I have in mind)
- No deduplication, no dedicated ZIL device (SLOG), no L2ARC
- Probably with compression enabled
- Will store mostly media files of various sizes
- Will probably run bit torrent or similar services (frequent smaller reads/writes)
- 4 disks, probably 5 TB each
- Actual pool setup will probably be part of another question, but I think no RAIDZ (although I would be interested to know whether it actually makes a difference in this context); probably two pools with two disks each (for 10 TB of net storage), one acting as backup
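For reference, the two-pools-with-two-disks layout described above could be sketched roughly like this (pool and device names are made up for illustration; a striped pair of 5 TB disks gives about 10 TB with no redundancy, which is why the second pool serves as the backup):

```shell
# Main pool: two 5 TB disks striped together (~10 TB usable, no redundancy)
zpool create tank /dev/ada0 /dev/ada1

# Second pool on the other two disks, used purely as a backup target
zpool create backup /dev/ada2 /dev/ada3

# Transparent compression; lz4 is cheap on CPU and safe to leave on
zfs set compression=lz4 tank
zfs set compression=lz4 backup
```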
Of course you don't need that much memory, unless you're using dedup; that'll seriously bite you in the butt. Of course, performance might not be optimal. – Daniel B – 2015-10-28T15:36:37.150
It's a recommendation. There are very few hardware configurations that would even support 96 GB of memory; in most cases that requires a multi-processor configuration to achieve memory density that large. Even if it were required, your system, by your own specifications, does not support 20 GB of memory. The current 6th-generation Intel processors only support 64 GB of DDR4. I realize there are systems with several TBs' worth of memory, but we are talking about consumer hardware here, not huge servers. – Ramhound – 2015-10-28T15:44:47.773
Before somebody says I am wrong: keep the context of this question and the scope of Super User in mind. – Ramhound – 2015-10-28T15:47:25.800
@Ramhound The 16 GB limit is the reason I asked the question. I haven't bought it yet, so switching to a machine that can support 32 GB would be possible, but it would make the entire thing more expensive. If it were just a matter of another stick of memory I wouldn't mind, but I don't want to invest the extra money unless it's absolutely necessary. – Sebastian_H – 2015-10-28T21:41:15.987
@DanielB "Performance might not be optimal" is exactly the part I'm interested in for an answer. I realize that insufficient memory may cost the system performance, but what kind of scale are we talking about? A "can't always saturate a 1-gigabit Ethernet connection" or a "a 64k modem is faster than your system" performance loss? – Sebastian_H – 2015-10-29T06:47:23.460
Considering how a single disk can more or less max out a 1 Gbps connection... ;) I can only relay my experience: 6×3 TB RAIDZ2 runs fine with 8 GiB of RAM, even when other programs are running. – Daniel B – 2015-10-29T07:37:20.387
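On ZFS on Linux, the memory question can also be managed explicitly, since the ARC (the RAM cache that the 1 GB/TB rule is mostly about) has a tunable size cap. A sketch, assuming ZFS on Linux paths and an arbitrarily chosen 4 GiB limit:

```shell
# Inspect the current ARC size and its ceiling (values are in bytes)
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 4 GiB until the next reboot
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
```

On FreeBSD/FreeNAS the equivalent tunable is `vfs.zfs.arc_max`.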
This article might be a good read on considerations for ZFS Deduplication and how to calculate some things. Also this one. – code_dredd – 2019-09-23T18:00:07.577