We are currently evaluating hardware and topology solutions for a new environment using GFS+iSCSI and would like some suggestions/tips. We have deployed a similar solution in the past, but in that setup the only hosts accessing the GFS filesystem were the GFS nodes themselves. The new topology would separate the GFS nodes from the clients accessing them.
A basic diagram would look like:
GFS_client <-> gigE <-> GFS nodes <-> gigE <-> iSCSI SAN device
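To make the GFS-node side of the diagram concrete: each GFS node would log into the iSCSI target over the back-end gigE and mount the shared GFS filesystem, roughly like the sketch below (open-iscsi syntax; the portal IP, target IQN, device, and mount point are just placeholders, and this assumes cman/fencing is already up on the node):

    # discover and log into the iSCSI target on the back-end gigE network
    iscsiadm -m discovery -t sendtargets -p 192.168.20.1
    iscsiadm -m node -T iqn.2009-01.com.example:san.gfs0 -p 192.168.20.1 --login

    # mount the shared GFS filesystem on each GFS node
    # (the GFS clients would then reach this data over the front-end gigE)
    mount -t gfs /dev/sdb1 /export/gfs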
- Is this an optimal way to set up GFS+iSCSI?
- Do you have suggestions on hardware for the GFS nodes themselves (i.e., CPU- or memory-heavy)?
- Do you have suggestions on tweaks/config settings to increase performance of the GFS nodes?
- Currently we are using 3-4 gigE connections per host for performance and redundancy. At this point, does 10GbE or fiber become more attractive in terms of cost and scaling?
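For reference, the kind of multi-gigE config we mean is along these lines (a rough sketch assuming RHEL-style ifcfg files and 802.3ad bonding; interface names and addresses are only examples, and dm-multipath over separate interfaces would be the obvious alternative on the iSCSI side):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.20.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise eth1, eth2, ...)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none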