
I'm looking at deploying a server for high bit rate video streams of up to 250 Mbps. There are 8 clients on the network, any of which may be playing any given stream at any time, so the worst case is 8 × 250 Mbps, or 2000 Mbps (250 MB/s). How would I go about sizing the RAID array correctly, given that I would opt for RAID5? Can I treat this as a strictly sequential I/O workload, so that basic calculations suffice? E.g. a RAID5 array of 8 disks with 150 MB/s throughput per disk would yield around 1200 MB/s of sequential read throughput, which would be theoretically sufficient. Thanks for your insights.
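The arithmetic in the question can be sanity-checked with a quick sketch. All figures are the ones quoted above (8 clients, 250 Mbps per stream, 8 disks at 150 MB/s each); note that real arrays lose some of the theoretical ceiling to controller overhead and to seek interleaving when many streams are read concurrently:

```python
# Back-of-the-envelope throughput sizing using the numbers from the question.
CLIENTS = 8
STREAM_MBPS = 250        # megabits per second, per stream
DISKS = 8
DISK_MB_S = 150          # sequential throughput per disk, megabytes per second

worst_case_mbps = CLIENTS * STREAM_MBPS    # total demand in megabits/s
worst_case_mb_s = worst_case_mbps / 8      # same demand in megabytes/s

# RAID5 sequential reads stripe across all member disks, so the
# optimistic read ceiling is simply disks * per-disk throughput.
raid5_read_mb_s = DISKS * DISK_MB_S

print(f"demand: {worst_case_mb_s} MB/s, RAID5 read ceiling: {raid5_read_mb_s} MB/s")
```

Even with a generous safety margin for overhead, the theoretical ceiling is several times the worst-case demand, which is why the discussion below turns to reliability (RAID level choice) rather than raw throughput.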

  • a) 250 Mbps isn't high bit rate; we do 36 Gbps for our UHD streams. b) DON'T use R5; it's essentially dangerous and to be avoided. c) The number of streams is irrelevant at this scale as it'll get cached (make sure you don't skimp on memory). It IS sequential, but I'd suggest you just buy two decent-sized SSD/NVMe drives and R1 them; this will easily do what you need. – Chopper3 Jul 27 '16 at 08:53
  • Thanks for that. So, low bit rate video then :) it's nowhere near your use cases. I was thinking RAID5 as the number of drives is limited, but you would suggest RAID10 as a better option then? Or would RAID6 work, as it would survive a double disk fault? The issue with SSDs is capacity; we'd be looking at 16 TB+. As for RAM, I had opted for 64 to 128 GB. – siddharta Jul 27 '16 at 10:00
  • Pros only really use R6/60 or R1/10 these days. Perhaps one way forward would be if your disk controller allowed for SSD caching; that way you could put a single SSD in front of a R6 array and it'll cache the most frequently used content on SSD and leave the lesser-used stuff on disk. Just a thought. – Chopper3 Jul 27 '16 at 10:04
  • I'd be looking at RAID6 then, thanks. Having an SSD cache is a good idea and would also help to buffer the impact of the occasional sequential write operation (basically adding new video files). – siddharta Jul 27 '16 at 10:38
  • I've been designing VoD systems for over a decade now. If you can afford flash storage it just fixes so many issues right away, but I know that's not for everyone (yet :) ). Your load seems pretty light, though, and I'm sure a R6/10 system may do what you need on its own (remember that extra memory for caching, ok), but if your controller does allow for caching then I'd jump at that; even if the cache-to-main-storage ratio isn't that much it'll still help. Best of luck with this project Siddharta, come back to me if you need help ok. – Chopper3 Jul 27 '16 at 10:44
  • Thanks! It's a project somewhat outside of my usual area and I'm quite excited. Really appreciate the helpful comments! Am I correct that CPU requirements would be low and that a single Xeon E5 or even E3 would suffice, paired with 128 GB RAM? Based on your feedback I'm looking at a MegaRAID SAS 9271-8iCC HBA with a single 1 TB SSD as cache. What bothers me a bit about RAID6 is the rebuild time for a 4 TB disk and its impact on operational performance, so perhaps RAID10 is the better option here. – siddharta Jul 27 '16 at 11:05
  • The only reason you MIGHT need a second CPU is to add extra PCIe lanes if those off a single CPU are saturated; we see this with multiple 10/40 Gbps NICs, but you're unlikely to see or need this. If you're going with a single CPU then by all means go for an E3, they're great but do limit you to one CPU; it depends on the cost of the servers you're buying really. That controller looks good and a 1:16 SSD-to-HDD ratio will fly; you could go for half that size SSD and still be good. I work almost exclusively with Indian chaps (TCS and Infosys) and love you guys, more than happy to help :) – Chopper3 Jul 27 '16 at 11:20

0 Answers