Note: This is a real-world question, but to analyse it I've started from a "theoretical" baseline of device and bus capability, which I acknowledge will not usually be at all representative of in-use bandwidth utilisation.
I have an array of 18 x SAS3 enterprise drives (mixed 8TB and 10TB), configured as 6 sets of 3-way mirrors under ZFS (FreeBSD). Currently they all hang off a single 24-port HBA (9305-24i).
It's hard to know how many drives are active at peak simultaneously, but assuming they were all reading at once, I get the following worst-case calculation (which may not be realistic):
SAS3 simplex bandwidth per lane: (12 Gbit/sec) x (8b/10b encoding) = 1.2 GB/sec usable data max
=> 18 x SAS3 lanes at peak: (1.2 x 18) = 21.6 GB/sec
But PCI-E 3.0 x8 simplex bandwidth: about 7.9 GB/sec (8 GT/sec x 8 lanes, 128b/130b encoding)
So at first glance it seems the array could be throttled quite badly under demand, because the PCI-E link limits array I/O from 21.6 GB/sec down to 7.9 GB/sec each way: a loss of around 64% of theoretical HDD I/O capability.
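To make the arithmetic explicit, here is the same back-of-envelope calculation as a small Python sketch (theoretical link maxima only, not measured throughput; the constants are just the figures above):

```python
# Theoretical link maxima only -- not measured throughput.
SAS3_LANE_GB_S = 12 * (8 / 10) / 8       # 12 Gbit/sec line rate, 8b/10b encoding -> 1.2 GB/sec usable
N_DRIVES = 18

sas_aggregate = N_DRIVES * SAS3_LANE_GB_S    # 21.6 GB/sec if every lane were saturated
pcie3_x8 = 8 * 8 * (128 / 130) / 8           # 8 GT/sec x 8 lanes, 128b/130b encoding -> ~7.88 GB/sec

print(f"SAS3 aggregate (18 lanes): {sas_aggregate:.1f} GB/sec")
print(f"PCI-E 3.0 x8 simplex:      {pcie3_x8:.2f} GB/sec")
print(f"Theoretical shortfall:     {1 - pcie3_x8 / sas_aggregate:.0%}")
```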
On the other hand, the file server effectively has 2 end users: the server itself, which needs to read and write at the highest possible speed as part of its own file handling, and the other devices, which are connected over 10 GbE and hence can't consume more than about 2 GB/sec simplex even with 2-link aggregation. So potentially the clients can't use more than a fraction of the PCI-E link speed in any event.
(Even if I do some file management on the server itself via SSH, 2 GB/sec is still quite a good speed, and I might not complain.)
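The client-side ceiling, again as a rough sketch (ignoring TCP/IP and protocol overhead, which would only push the real figure lower; ~2 GB/sec is the round number I'm using):

```python
# Client-side ceiling over 2 x 10 GbE with link aggregation, ignoring protocol overhead.
GBE10_GB_S = 10 / 8                        # 10 Gbit/sec -> 1.25 GB/sec per link
LAGG_LINKS = 2

client_ceiling = LAGG_LINKS * GBE10_GB_S   # 2.5 GB/sec best case; ~2 GB/sec is a realistic round figure
print(f"Client ceiling: ~{client_ceiling:.2f} GB/sec simplex (before protocol overhead)")
```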
Also, whatever SAS3 might deliver in theory (12 Gbit = 1.2 GB/sec), it seems unlikely that an enterprise HDD can use that bandwidth even when reading at maximum speed from its internal cache. SSDs, yes, but HDDs? Less likely: maximum sustained read is usually quoted as around 200 - 300 MB/sec in datasheets.
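Taking a mid-range datasheet figure of ~250 MB/sec as a per-drive ceiling (an optimistic assumption, since that's the sequential maximum and real mixed workloads will be lower), the realistic aggregate looks like this:

```python
# Aggregate if every drive sustained its datasheet sequential maximum simultaneously --
# an optimistic assumption for anything other than large sequential reads.
PER_DRIVE_GB_S = 0.25      # ~250 MB/sec sustained read, mid-range of the datasheet figures
N_DRIVES = 18

hdd_aggregate = N_DRIVES * PER_DRIVE_GB_S   # ~4.5 GB/sec
print(f"Realistic HDD aggregate: ~{hdd_aggregate:.1f} GB/sec vs ~7.9 GB/sec for PCI-E 3.0 x8")
```

On that basis the 18 spindles together would peak at roughly 4.5 GB/sec, which would sit inside the ~7.9 GB/sec PCI-E 3.0 x8 limit.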
My question is therefore: given that the HBA can provide almost 8 GB/sec of bandwidth across PCI-E, and the end users can consume at most about 2 GB/sec, will there in fact be a throttling effect?
Put another way: does it matter that the disk array is, in theory, throttled from ~22 GB/sec down to ~8 GB/sec at the PCI-E slot, given that the end users have a 2 GB/sec aggregated connection? Or will the PCI-E slot limitation still be an issue because the local system at times needs faster I/O than the end-device bandwidth would suggest?
If there is a limitation, I can split the disks across 2 HBAs, but I'd like some idea of how to assess whether there's a real issue before sacrificing a second PCI-E slot just to raise the ceiling on raw disk I/O.