
I've been trying to figure out something.

I have a stack of SAS shelves (attached to a NetApp, if that's relevant). Each device within the shelf is 6G SAS. According to the vendor, the sustained transfer rate is somewhere around 200-250MB/sec.

So: I have 10 shelves of 24 drives, attached 'top and bottom' to separate controllers on my filer head.

What's the fastest rate I could transfer data from my drives?

6G SAS implies 600MB/sec. Two controllers therefore give 1200MB/sec (in optimal circumstances) - or about 6 drives' worth, out of my 240 spindles. This seems oddly low - am I missing something? Do the SAS controllers have some sort of multiplicative factor?

Or am I really honestly in a position where I'll never get anywhere near 'max throughput' of the drives in this stack? Certainly it looks like my historic peak has been around 2000MB/sec on 3 controllers. (So ~650MB/sec each).
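For reference, here's my back-of-envelope arithmetic (assuming one 6G lane per controller, which may well be the wrong assumption):

```python
# Naive assumption: one 6G SAS lane (600 MB/s) per controller.
LANE_MB_S = 600
CONTROLLERS = 2

naive_limit = LANE_MB_S * CONTROLLERS      # 1200 MB/s total
observed_peak = 2000                       # MB/s, historic peak across 3 controllers
per_controller = observed_peak / 3         # ~667 MB/s each

print(naive_limit, round(per_controller))  # 1200 667
```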

Still, I suppose it makes the drive utilisation look low....

Sobrique

2 Answers


The theoretical max throughput of a NetApp stack is 9600 MB/sec. NetApp supports 240 HDDs or 96 SSDs per stack, but that's not a system limit: you can have several stacks in one system, depending on the controller model.

One SAS 6G lane gives 600 MB/sec, but one SAS 6G port utilises 4 lanes. With NetApp you use 2 ports on each controller for one stack, which with dual controllers gives 4 ports in total.

So theoretical throughput = one lane rate * 4 lanes per port * 4 ports per stack = 600 MB/sec * 4 * 4 = 9600 MB/sec.
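A quick sketch of that arithmetic (all figures are the ones quoted above):

```python
# Theoretical stack throughput from lane rate, lanes per port, and port count.
LANE_MB_S = 600        # one SAS 6G lane
LANES_PER_PORT = 4     # a 6G SAS wide port bundles 4 lanes
PORTS_PER_STACK = 4    # 2 ports per controller x 2 controllers

stack_mb_s = LANE_MB_S * LANES_PER_PORT * PORTS_PER_STACK
print(stack_mb_s)  # 9600
```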

When you are talking about performance and utilisation, you need to understand that there are two performance metrics: MB/s and IOPS (actually three; latency also matters).

Different disk types have different IOPS and MB/s performance. NetApp uses these values in documentation:

SAS  10K    140 IOPS  198 MB/sec
SATA 7.2K   75  IOPS  134 MB/sec 

Looking at transfer rate alone, it seems just 48 SAS HDDs would saturate the stack's SAS ports: 48 * 198 = 9504 MB/sec. But the majority of enterprise applications work with small block sizes (4KB, 8KB), and those are more sensitive to IOPS and latency. One SAS 10K HDD delivers ≈140 IOPS; with an 8KB block that is just 8 * 140 = 1120 KB/s of throughput. So 240 fully utilised disks will deliver only 240 * 1120 KB/s ≈ 262.5 MB/s.
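The small-block case can be sketched the same way (IOPS and block size are the figures quoted above):

```python
# Throughput of an IOPS-bound small-block workload across the whole stack.
IOPS_PER_SAS_10K = 140   # per-disk IOPS for a 10K SAS HDD
BLOCK_KB = 8             # typical enterprise application block size
DISKS = 240

per_disk_kb_s = IOPS_PER_SAS_10K * BLOCK_KB    # 1120 KB/s per disk
total_mb_s = DISKS * per_disk_kb_s / 1024      # aggregate, in MB/s

print(round(total_mb_s, 1))  # 262.5
```

A tiny fraction of the 9600 MB/sec port ceiling - which is why disk utilisation looks low on bandwidth graphs.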

There are some high-throughput workloads (for example video surveillance, streaming, or data analytics), and for those it may be better to have fewer disk shelves per stack.

What kind of workload do you have?

Smasher
  • We're not doing anything excessive enough to warrant it. It was more a theoretical point - it seemed wrong that I have a quad 10G trunk on the front, but with a single stack - so two SAS ports. If they were "just" 6G each, at any rate - which is why I thought I must be missing something. 4 lanes per controller was what I was missing. (I assume your 'theoretical max' is assuming dual filer heads?) – Sobrique Mar 23 '15 at 09:55
  • 1
    4 lanes (6G each) per port, 2 ports per head for stack, 4 ports for dual heads. Network or SAN ports rarely become bottlenecks. – Smasher Mar 23 '15 at 12:59

6G SAS does not imply just 600MB/sec.

SAS uses 4 channels (lanes) per cable, and a disk can be connected to 2 cables at the same time.

That gives you 4.8 gigabytes/second out of a disk.
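A quick sketch of that, using the 4-lanes-per-cable, 2-cables-per-disk figures above:

```python
# Dual-path bandwidth: 4 lanes per cable, 2 cables per disk.
LANE_MB_S = 600       # one SAS 6G lane
LANES_PER_CABLE = 4
CABLES = 2            # dual-ported connection

gb_s = LANE_MB_S * LANES_PER_CABLE * CABLES / 1000
print(gb_s)  # 4.8
```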

TomTom
  • So would I be right in saying that 6Gbit x 4 (in this scenario) is the per-controller limit? That seems a more sensible number, given a sustained 200MB/sec transfer from the spindle. – Sobrique Mar 20 '15 at 10:27
  • 1
    If each controller uses one of the cables - then yes. But controllers often have multiple links going out, and each cabinet can then have its own separate limit. I for example use Adaptec controllers with 2-4 cables going out of each. So the limit may be per shelf ;) – TomTom Mar 20 '15 at 10:39