
Our lab has a cluster with

  • 70 compute nodes
  • 4 IO nodes
  • InfiniBand QDR interconnect
  • 12 TB disk array accessed via the IB SRP protocol

The major application is debugging and running MPI-based parallel scientific programs. The clients/compute nodes write several gigabytes of data (in total) simultaneously every few minutes.
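To make the workload concrete, here is a minimal sketch of the file-per-process write pattern such MPI applications often use (each rank writes its own output file, so no two writers contend for locks on the same file). This is plain Python with hypothetical file names, not the actual application, which is MPI-based:

```python
import os
import tempfile

def write_rank_output(outdir, rank, data):
    # Each compute node / MPI rank writes its own file, so writers
    # never contend for byte-range locks on a shared file.
    path = os.path.join(outdir, "step0000.rank%04d.dat" % rank)
    with open(path, "wb") as f:
        f.write(data)
    return path

# Simulate 4 ranks each dumping 1 KiB of data in one output step.
outdir = tempfile.mkdtemp()
paths = [write_rank_output(outdir, r, b"x" * 1024) for r in range(4)]
```

The alternative pattern, all ranks writing into one shared file (e.g. via MPI-IO collective writes), stresses a network filesystem's locking far more.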

The filesystem used to be Lustre, chosen for its similarity to mainstream supercomputer centers. But the installation was too complicated and the maintainability was awful.

So, is there an easy-to-use 'small-scale' distributed network file system? Or is NFS OK for this scenario?

Francium
  • Not 100% in agreement with regards to the closure of this question, as I think you're looking for a broader solution than just 'buy this'. Anyway, NFS is perfectly acceptable if each node is writing into separate files, whereas it's not ideal at that scale when every node is writing into the same file(s) - i.e. lots of locking going on. So have a look at your system to see how this behaviour works out. – Chopper3 Apr 28 '16 at 12:09
    I don't think it should have been closed. It may be a narrow fit, but surely there's someone who knows a bit of this information. – ewwhite Apr 28 '16 at 13:06
  • It's an interesting question, but it's still a shopping question (and has even attracted a spam answer, as these things are wont to do). – womble Jul 15 '17 at 06:01
