I'm not sure if this is the correct forum to ask, but here goes...
My employer decided to install a SAN. For various irrelevant reasons, this never actually happened. But I did some research to find out what a "SAN" is, and... I'm baffled. It looks like such an absurdly bad idea that I can only conclude that I've misunderstood how it works.
As best as I can tell, SAN stands for Storage Area Network, and it basically means that you connect your servers to your disks using an ordinary IP network. I am utterly stunned that any sane person would think this is a good idea.
So I have my server connected to its disks with an Ultra-320 SCSI link. That's 320 MB/s of bandwidth shared between all the disks on that server. And then I rip them off the SCSI link and plug them into a 100 Mbit/s Ethernet network with its piffling 12.5 MB/s of theoretical bandwidth. That's before you take into account any routing delays, IP overhead, and perhaps packet collisions. (The latter can usually be avoided.)
320 MB/s versus 12.5 MB/s. That's, let me see, roughly 25x slower. On paper. Before we add IP overhead. (Presumably SCSI has its own command overheads, but I'm guessing a SAN probably just tunnels SCSI commands over IP, rather than implementing a completely new disk protocol over IP.)
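Just to show my working, here's the back-of-the-envelope arithmetic, using the figures above (Ultra-320 SCSI's 320 MB/s peak versus 100 Mbit/s Fast Ethernet, ignoring all protocol overhead):

```python
# Raw bandwidth comparison, using the figures from the text.
# These are theoretical peaks; real throughput would be lower on both sides.

scsi_mb_s = 320          # Ultra-320 SCSI peak bandwidth, MB/s
ethernet_mb_s = 100 / 8  # 100 Mbit/s Fast Ethernet = 12.5 MB/s

ratio = scsi_mb_s / ethernet_mb_s
print(f"SCSI has {ratio:.1f}x the raw bandwidth")  # 25.6x
```

So "roughly 25x" is actually being generous, before any IP overhead is counted.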
Now, with each server having a dedicated SCSI link, that means every time I add another server, I'm adding more bandwidth (and more disks). But with a shared SAN, every time I add a server I'm taking bandwidth away from the existing servers. The thing now gets slower as I add more hardware, not faster.
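The scaling argument above, sketched in numbers. (This assumes the worst case I'm describing, where every server contends for one shared link; I gather real SAN fabrics may be switched, which is part of what I'm asking about.)

```python
# Per-server bandwidth: dedicated SCSI bus vs. one shared 100 Mbit/s link.
# Figures are the theoretical peaks used in the text, not measurements.

shared_link_mb_s = 12.5  # one 100 Mbit/s Ethernet link shared by all servers
dedicated_mb_s = 320     # each server keeps its own Ultra-320 SCSI bus

for n_servers in (1, 2, 4, 8):
    shared_share = shared_link_mb_s / n_servers  # naive equal split
    print(f"{n_servers} server(s): dedicated {dedicated_mb_s} MB/s each, "
          f"shared {shared_share:.2f} MB/s each")
```

Every server I add keeps its full 320 MB/s in the dedicated case, but shrinks everyone's share in the shared case.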
Additionally, SAN technology is apparently extremely expensive. And it seems reasonable to presume that setting up an entire IP network is vastly more complicated than just plugging a few drives into a cable.
So, these are the drawbacks of using a SAN - massively increased cost, massively decreased performance, loss of scaling, increased complexity, and more potential points of failure. So what are the advantages?
The one I keep hearing is that it makes it easier to add disks, or to move a disk from one server to another. Which sounds logical enough - presumably with a SAN you just push a few buttons and the drive now belongs to a different server. That's a heck of a lot simpler than physically moving the drive (depending on exactly how your drives are connected).
On the other hand, in 10 years of working here, I have needed to change disks... let me count... twice. So it's an event that happens roughly once every 5 years. So you're telling me that once every 5 years, the SAN is going to save me 5 minutes of work? And every second of every day it's going to make stuff 25x slower? And this is a good tradeoff?
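For what it's worth, here's that tradeoff as a number, using my own (admittedly rough) assumptions of 5 minutes saved once every 5 years:

```python
# Fraction of total time saved by easier disk moves, per the text's estimate.
# Both figures are the post's assumptions, not measurements.

seconds_saved = 5 * 60                     # ~5 minutes of admin work saved
seconds_per_period = 5 * 365 * 24 * 3600   # once every 5 years

fraction = seconds_saved / seconds_per_period
print(f"time saved: {fraction:.2e} of total time")  # ~1.90e-06
```

Two parts in a million of time saved, in exchange for a permanent 25x bandwidth cut, is the tradeoff I can't make sense of.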
I guess if I was in charge of some huge datacenter with thousands of servers, keeping track of that much disk might be difficult, and having a SAN might make sense. Heck, if the servers are all virtualised, they'll all be as slow as hell anyway, so maybe the SAN won't even matter.
However, this does not match my situation at all. I have two servers with three disks each. It's not as if managing all that stuff is "difficult".
In short, no matter which way I look at this, it looks extremely stupid. In fact, it looks so obviously stupid that nobody would spend R&D effort building it. As I said above, this can only mean that I'm misunderstanding something somewhere - because nobody would do something this dumb.
Can anyone explain to me what I'm not seeing here?