I have large numbers of small log files that are essentially write-only, unless I have to look at them for some reason. Right now, they accumulate in day-specific subdirectories in a logging folder (e.g. `2018-12-29` for yesterday, `2018-12-30` for today, etc.) and I end up `tar`/`bzip2`'ing them later into a single file per day.
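For concreteness, the end-of-day step currently looks roughly like this (the paths and date are just placeholders):

```sh
# Hypothetical example of the current end-of-day compression step;
# the log directory and date are illustrative.
cd /var/log/myapp
tar -cjf 2018-12-29.tar.bz2 2018-12-29/   # bundle and bzip2-compress the day's logs
rm -rf 2018-12-29/                        # drop the now-redundant directory
```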
That's not terribly convenient for me, and I was thinking that if I could create a compressed filesystem for each day, I could write directly to those filesystems, use less disk space, and not have to "go back" and compress each directory into a tarball. It would also make inspecting individual files later easier, because I could mount the filesystem and use it however I like -- `grep`, `find`, `less`, etc. -- rather than trying to use `tar` to stream the data through some command pipeline.
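To illustrate what I mean by streaming through a pipeline: inspecting an archived log today means something along these lines (archive and log names are made up), whereas with a mounted filesystem I could just `grep` the file in place.

```sh
# Searching inside a compressed tarball without extracting it to disk;
# the archive and log file names are hypothetical.
tar -xjOf 2018-12-29.tar.bz2 2018-12-29/app.log | grep -i 'error'

# versus, with a mountable compressed image:
# grep -i 'error' /mnt/logs-2018-12-29/app.log
```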
I know I can create a loopback device of arbitrary size, but I have to know that size in advance. If I guess "too high", I end up wasting disk space on unused space; if I choose "too low", I'll run out of disk space and my software will fail (or at the very least complain very loudly).
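This is the fixed-size approach I mean -- something like the following sketch, where the 512 MiB size and the paths are only examples and have to be guessed up front:

```sh
# Fixed-size loopback image: the size must be chosen in advance.
dd if=/dev/zero of=/var/log/images/2018-12-30.img bs=1M count=512
mkfs.ext4 -F /var/log/images/2018-12-30.img   # -F: don't complain that this is a regular file
mount -o loop /var/log/images/2018-12-30.img /var/log/2018-12-30
```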
I know I can create a sparse file, but I'm not exactly sure how that will interact with a filesystem such as ext2/3/4 or the other filesystems available on Linux; it may end up taking far more space than necessary due to backup superblocks and stuff like that.
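The sparse-file variant would be something like this (again, paths and sizes are only illustrative); my worry is how much of the nominal size the filesystem's own metadata will actually materialize on disk:

```sh
# Sparse image: nominally 10 GiB, but initially allocating (almost) no blocks.
truncate -s 10G /var/log/images/2018-12-30.img
mkfs.ext4 -F /var/log/images/2018-12-30.img
du -h --apparent-size /var/log/images/2018-12-30.img   # nominal size (10G)
du -h /var/log/images/2018-12-30.img                    # blocks actually allocated
```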
Is there a way to create a loop-device that can take up a minimal amount of physical space on the disk?