I am going to be testing 'xfs_repair' on some large file systems (around 50 TB), since in the past its memory usage has been high. While I could test the program only on file systems that are intact, it would be better to test it on a corrupt one.
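For the test file systems themselves I don't have 50 TB of spare disk, so I'm planning to build them on sparse image files. This is just a sketch, and the paths are placeholders:

    # Create a sparse 50 TB image file; the host file system must
    # support files this large (XFS itself does).
    truncate -s 50T /scratch/test.img

    # Make an XFS file system on it (-f overwrites any old signature).
    mkfs.xfs -f /scratch/test.img

    # Mount through a loop device to populate it with test data.
    mkdir -p /mnt/test
    mount -o loop /scratch/test.img /mnt/test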
So what would be the best way to corrupt a file system? Extra credit if the method reliably produces the same corruption every time.
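The crudest idea I have so far is to overwrite metadata at fixed offsets with 'dd', which should at least be deterministic. I've also seen mention of the 'blocktrash' command in 'xfs_db', which takes a random seed and so should be repeatable, though I believe it may only be available in debug builds of 'xfs_db'. A rough sketch (unmount first; the offsets, counts, and seed below are arbitrary placeholders):

    umount /mnt/test

    # Zero 64 KiB at a fixed offset past the primary superblock;
    # the same offsets give the same damage on every run.
    dd if=/dev/zero of=/scratch/test.img bs=4096 seek=8192 count=16 conv=notrunc

    # Alternatively, trash metadata blocks with a fixed seed so the
    # "random" damage is reproducible (blocktrash requires blockget
    # to run first, and may only exist in debug builds of xfs_db).
    xfs_db -x -c 'blockget' -c 'blocktrash -s 1 -n 32' /scratch/test.img

    # Then see what xfs_repair makes of it.
    xfs_repair /scratch/test.img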
To give people an idea of what I mean by high memory usage, here is a quote from around 2006:
"To successfully check or run repair on a multi-terabyte filesystem, you need:
- a 64bit machine
- a 64bit xfs_repair/xfs_check binary
- ~2GB RAM per terabyte of filesystem
- 100-200MB of RAM per million inodes in the filesystem.
xfs_repair will usually use less memory than this, but these numbers give you a ballpark figure for what a large filesystem that is > 80% full can require to repair.
FWIW, last time this came up internally, the 29TB filesystem in question took ~75GB of RAM+swap to repair."
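Applying those figures to my case: 50 TB at ~2 GB per TB comes to roughly 100 GB of RAM+swap before counting the 100-200 MB per million inodes, which is in line with the 29 TB / ~75 GB example above.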