I am trying to determine my 'best choice' for a filesystem to use for a shared storage device that will be mounted via iSCSI across an indeterminate number of servers.
Setup:
- 27TB Synology RS2212+ array, iSCSI LUN/target that allows multiple sessions
- 10-20 CentOS-based Linux boxes, primarily webservers
- Shared storage will host static web content (media, primarily images)
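For context, each box attaches the LUN with open-iscsi along these lines (the portal IP and IQN below are placeholders, not our real ones):

    # Discover targets exposed by the Synology
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50

    # Log in to the target; with multiple sessions enabled on the LUN,
    # every webserver can hold its own session concurrently
    iscsiadm -m node -T iqn.2000-01.com.synology:rs2212.media -p 192.168.1.50 --login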
Essentially, I need to be able to mount this large shared volume across many webservers, and the number will hopefully continue to grow over time. We have been using NFS, but performance issues are forcing us to look into other methods (read: NFS tuning feels like black magic sometimes, particularly when dealing with millions of small images).
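For the curious, this is the flavor of tuning we've been experimenting with on the webservers; the values are our guesses, not recommendations (hostname and paths are made up):

    # /etc/fstab -- read-only NFS mount tuned for a mostly-static, small-file workload
    nas:/volume1/media  /var/www/media  nfs  ro,noatime,nocto,actimeo=600,lookupcache=all,rsize=32768,wsize=32768  0 0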
Typically there shouldn't be write collisions on the device, since only a few central machines have the ability to change the content. But I know that if we're all mounting the same block device, locking a file while someone works on it isn't enough by itself: a conventional filesystem mounted read-write on multiple hosts will corrupt itself, because each host caches metadata independently. In the past, we relied on NFS to handle all of this. So now I am looking at cluster-aware filesystems (unless I'm missing something, hence this post).
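To be clear about the locking piece: among our own writers we can already serialize updates with flock(1) (paths here are hypothetical), but that only coordinates processes that agree to use the lock, and it only works across hosts if the filesystem propagates locks (NFS and the cluster filesystems do; a plain local filesystem on a shared LUN does not):

    # Serialize content pushes among the few writer machines.
    # NOTE: this does NOT make a non-cluster filesystem safe to mount on many hosts.
    flock /mnt/media/.update.lock -c "rsync -a /staging/images/ /mnt/media/images/"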
So far I've found 2 main choices for this, but I'm not certain they are a great fit:
RHEL Clustering and GFS2 -- seems like the natural fit for my environment, but it makes me a bit wary to feel 'locked into' a distro this way; it would force me to come up with other options down the line if I need to add servers of a different flavor. Not a show-stopper, but it's on my mind. The bigger concern is reading repeatedly in the RHEL docs that their cluster stack only supports 16 nodes. If that's the case, it definitely won't scale well enough for me. Is this accurate, or am I reading it wrong?
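For reference, here's roughly what the GFS2 setup would look like if it does fit (cluster, filesystem, and device names are placeholders); note that the journal count caps how many nodes can mount the filesystem at once:

    # One journal per node that will ever mount the fs (-j);
    # the cluster name before the colon must match the RHEL cluster config
    mkfs.gfs2 -p lock_dlm -t webcluster:media -j 20 /dev/mapper/media-lun

    # On each node, once the cluster is up and quorate
    mount -t gfs2 /dev/mapper/media-lun /var/www/media

(mkfs will happily write 20 journals, but that doesn't help me if Red Hat only supports 16 nodes in the cluster itself, hence my question.)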
OCFS2 -- Oracle's cluster filesystem also gets a lot of attention when I google, but I don't know much about it. The most troublesome aspect is that, as far as I can tell, I would have to run their Unbreakable Enterprise Kernel (the stock CentOS kernel doesn't ship the ocfs2 modules), which would mean a lot of disruption in migrating all of my servers. Again, not a show-stopper, but I'd need compelling evidence to go down that path, particularly when we're just trying out this methodology.
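For comparison, the OCFS2 side looks similarly simple on paper (label and device are placeholders); node slots play the same role as GFS2 journals and can be grown later with tunefs.ocfs2:

    # -N sets node slots, i.e. the max number of simultaneous mounters
    mkfs.ocfs2 -L media -N 20 /dev/mapper/media-lun

    # Each node needs the o2cb stack configured (/etc/ocfs2/cluster.conf) first
    mount -t ocfs2 /dev/mapper/media-lun /var/www/media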
Am I missing something? Is there a better approach I should be using? I've even considered changing the architecture completely: let a few "front-end" servers mount the iSCSI volume directly and re-export it over NFS as needed, and/or put an nginx reverse proxy on them to hand the media out to the webservers (rough sketch below).
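A sketch of that last idea, just to frame it (hostnames and paths are invented): the front-end "media heads" mount the LUN, and each webserver runs an nginx proxy-cache pointed at them, so most image hits never touch the shared storage at all:

    # nginx on each webserver (http{} context): cache media fetched from a front-end head
    proxy_cache_path /var/cache/nginx/media levels=1:2 keys_zone=media:50m max_size=10g inactive=7d;

    server {
        listen 80;
        location /media/ {
            proxy_pass   http://media-head.internal;
            proxy_cache  media;
            proxy_cache_valid 200 7d;
        }
    }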
Got any clever ideas that YOU would trust using in this scenario?