The speed degradation is to be expected as the number of files being accessed simultaneously increases. Hard disk drives handle parallel access poorly: every time the read/write head has to seek to a different cylinder you lose several milliseconds, and even when two files sit on the same cylinder, or even the same track, you may still have to wait up to a full rotation to move from one to the other. If you measure drive throughput in megabits per second, expect it to drop sharply as parallel access increases.
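To put rough, illustrative numbers on that (assuming a typical 7200 RPM drive with about a 9 ms average seek; your drive's figures will differ):

    # One rotation at 7200 RPM takes 60000/7200 ≈ 8.3 ms, so the average
    # rotational latency is roughly half that; add an average seek and a single
    # out-of-place access costs on the order of 13 ms before any data moves.
    awk 'BEGIN { rpm = 7200; seek_ms = 9
                 printf "average access time = %.1f ms\n", seek_ms + (60000 / rpm) / 2 }'

At around 13 ms per access that is fewer than a hundred random reads per second, no matter how quickly the drive can stream data sequentially.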
fsck will not help with this: it only checks and repairs filesystem inconsistencies (metadata damage); it does not perform any kind of performance optimization.
The ideal solution would be switching to solid-state storage since that does not have any of the physical limitations of spinning platters. But that's probably cost-prohibitive.
The next best option would be a RAID array configured for parallel access: striping across several spindles lets concurrent reads land on different disks. Keep in mind that RAID hardware and drivers can be set up for many different performance profiles, so you will need to take some time to learn the settings of whatever you end up using.
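As a rough sketch only (Linux software RAID with mdadm; the RAID level, device names, and mount point are placeholders, not a recommendation for your hardware):

    # Stripe plus mirror across four disks (RAID 10) so concurrent reads can be
    # served from different spindles. This destroys any existing data on them.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0            # any filesystem; ext4 is just an example
    mount /dev/md0 /srv/data      # then move the heavily-accessed files onto /srv/data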
You may be able to reduce the problem with aggressive filesystem caching. If your system has sufficient RAM, Linux should already be doing this fairly well. Run a program like top to see how much RAM is free or being used as cache. But if the most commonly used files do not fit in RAM (or in any amount of RAM you are likely to acquire), this will not really help.
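For example (free and /proc/meminfo are standard on Linux; the exact field names can vary slightly between kernel versions):

    # "buff/cache" is memory the kernel is already using for the page cache;
    # "available" is what could still be claimed for caching without swapping.
    free -h
    grep -E 'MemTotal|MemAvailable|^Cached' /proc/meminfo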
A poor man's workaround would be to split your files across several different physical hard drives (not just different partitions on the same drive). That is not a scalable long-term solution and would end up costing you more than a decent RAID, but it might be a quick fix if you have spare drives lying around.
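A sketch of what that workaround looks like (all device names and paths here are made up for illustration):

    # Give each extra drive its own mount point, move part of the data over,
    # and leave a symlink behind so existing paths keep working.
    mkdir -p /data/disk2
    mount /dev/sdb1 /data/disk2
    mv /data/projects/hotset /data/disk2/hotset
    ln -s /data/disk2/hotset /data/projects/hotset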
For any solution involving hard disk drives, make sure they have a fast rotation speed and low seek latency.
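If a drive is already installed, you can usually check its spindle speed (hdparm must be installed; /dev/sda is a placeholder):

    # "Nominal Media Rotation Rate" reports the spindle speed,
    # or "Solid State Device" for an SSD.
    hdparm -I /dev/sda | grep -i rotation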
I have written an article with some general background on hard-drive performance here: UNIX Tips - Filesystems