Summary of My Need
We put a large number of files on a filesystem for analysis at a later time. We can't control how many files there will be, and this one box needs access to all of them.
Unchangeable Limitations
- I can't change the inode limit. It's ext4, and the limit is the default of roughly 4 billion (the 2^32 ext4 maximum, fixed at mkfs time).
- There will always be a lot of files. The question isn't how to reduce the number of files; it's how to circumvent the 4Bn inode limit.
- I can't use network storage. This box lives in a data center, and given the staggering amount of existing data throughput, network storage is not an option.
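For context, inode capacity and usage can be checked with `df -i`; the command below uses `/` as a stand-in for the actual data mount point:

```shell
# Show total, used, and free inode counts for the filesystem.
# Substitute the real data mount point for "/".
df -i /
```

On ext4, `tune2fs -l <device>` (run as root) also reports the fixed `Inode count` chosen at mkfs time.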
My Ideas
- I could mount a file as a loopback device in the location where we're placing these files; the image would carry its own filesystem and therefore its own separate inode pool.
- Pro: Simple to implement
- Con: Another layer of complexity, though a fairly thin one.
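A minimal sketch of the loopback approach (all paths are hypothetical, xfsprogs is assumed installed, and the mount step needs root). The point is that each backing file is its own filesystem with its own inodes:

```shell
# Create a sparse 1 TiB backing file; it consumes disk space only as data is written.
truncate -s 1T /srv/images/analysis0.img

# Format the image with XFS so the image itself has no fixed inode ceiling
# (ext4 with a tuned -i/-N ratio would also work, up to its own 2^32 cap).
mkfs.xfs /srv/images/analysis0.img

# Attach and mount it at the analysis location (requires root).
mkdir -p /data/analysis0
mount -o loop /srv/images/analysis0.img /data/analysis0

# Verify the new, independent inode pool.
df -i /data/analysis0
```

More images can be added the same way as the file count grows, at the cost of managing several mounts.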
- XFS. It allocates inodes dynamically, so there's no fixed inode limit.
- Pro: This obviously just erases the problem.
- Con: Not sure how much flexibility I'll have in making this change to a production system.
My Question
What are some other strategies for circumventing this hard limitation? Are there other benefits/drawbacks to the approaches I've mentioned?