I work part time for a small private school. The 24-node computer lab kept having hardware failures (mostly drives and cooling fans), so I turned it into a Linux-based thin client network. Although the workstations now boot from the network, most still have working hard drives, and they use only a fraction of their computing power running an X server.
I'm looking for ways to put these computing resources to good use. Each workstation has a 40 GB hard drive, a Pentium 4 processor, and 256 MB of RAM.
I've considered:
- Installing a fault-tolerant distributed file system across the workstations. This would put both the drive space and the computing resources of each machine to work, while further hardware failures would have minimal impact (see the rough capacity sketch after this list).
- Removing the hard drives and putting them in a couple of file servers, then running a distributed computing client on the workstations to soak up the free CPU cycles. I'm sure I could find a place for a few more file servers, but I'll admit I don't have any particular application in mind for a distributed processing environment (a minimal job fan-out sketch follows below).
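
For idea 1, here's a rough back-of-the-envelope calculation in Python. The 24 nodes and 40 GB drives are the real numbers from the lab; the 10 GB per-node reservation for the local OS and the replication factors are just assumptions for illustration.

```python
# Rough usable capacity of a replicated distributed filesystem built
# from the lab's 24 workstations, each with a 40 GB drive.
NODES = 24
DRIVE_GB = 40
RESERVED_GB = 10   # assumed space kept back per node for local OS/swap

raw_gb = NODES * (DRIVE_GB - RESERVED_GB)

for replicas in (2, 3):
    usable_gb = raw_gb / replicas
    print(f"{replicas}-way replication: ~{usable_gb:.0f} GB usable, "
          f"survives the loss of {replicas - 1} node(s) holding any given file")
```

Even with 2-way replication that works out to a few hundred gigabytes of redundant storage, which is why the idea appeals to me.

For idea 2, a minimal sketch of what "using the free CPU cycles" could look like without any dedicated framework: fan a job out to every workstation over SSH. The hostnames (ws01–ws24), the placeholder command, and passwordless SSH keys are all assumptions; a real deployment would more likely use an existing batch or volunteer-computing client.

```python
# Minimal job fan-out over SSH; hostnames and the job itself are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = [f"ws{n:02d}" for n in range(1, 25)]   # hypothetical workstation names
JOB = "nice -n 19 md5sum /etc/hostname"        # stand-in for real work

def run_on(host):
    # BatchMode makes ssh fail fast instead of prompting for a password
    proc = subprocess.run(["ssh", "-o", "BatchMode=yes", host, JOB],
                          capture_output=True, text=True, timeout=120)
    return host, proc.returncode, proc.stdout.strip()

with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    for host, rc, out in pool.map(run_on, HOSTS):
        print(f"{host}: rc={rc} {out}")
```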
If you think the first idea has merit, I'd be interested in any information you can give on the distributed file systems available. I did a bit of searching but couldn't find one that really fit the situation: I'm looking for redundancy and fault tolerance, but it also needs to support user- and group-level access restrictions.
Any other suggestions would be appreciated as well.