
I have large static content that I have to deliver via a Linux-based webserver. It is a set of over one million small gzipped files. 90% of the files are under 1K and the remaining files are at most 50K. In the future, this could grow to over 10 million gzipped files.

Should I put this content in a file structure or should I consider putting all this content in a database? If it is in a file structure, can I use large directories or should I consider smaller directories?

I was told a file structure would be faster for delivery, but on the other hand, I know the files will take up a lot of disk space, since the filesystem blocks will be larger than 1K.

What is the best strategy regarding delivery performance?

UPDATE

For the record, I performed a test under Windows 7 with half a million files:

(screenshot of the test results)

Jérôme Verstrynge

4 Answers


I would guess that a filesystem structure would be faster, but you will need a good directory layout to avoid directories with a very large number of files (for example, a hashed two-level layout like the sketch below).
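As a minimal sketch of that idea (not from the original answer; the root path and the two-character fan-out are illustrative assumptions), one can derive a short hash from each file name and use its leading characters as two directory levels:

```python
# Sketch: map a logical file name to a two-level hashed directory layout,
# so no single directory ends up with millions of entries.
import hashlib
import os

ROOT = "/var/www/static"  # hypothetical document root

def shard_path(name: str) -> str:
    """Return e.g. /var/www/static/ab/cd/<name>, based on an MD5 prefix."""
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return os.path.join(ROOT, digest[:2], digest[2:4], name)

def store(name: str, data: bytes) -> None:
    """Write the (already gzipped) bytes to their sharded location."""
    path = shard_path(name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
```

With two hex characters per level, that gives 256 × 256 = 65,536 leaf directories, so one million files average roughly 15 per directory, and even 10 million stay around 150.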

I wouldn't worry too much about lost disk space. As an example, with a 16K block size you will lose 15GB of space in the worst case: every sub-1K file still occupies a full block, wasting roughly 15K, which adds up to about 15GB across a million files. With today's disk sizes that's nothing, and you can adapt the parameters of your file system to your specific needs.

Sven

If you choose the file structure option, one thing you can do to improve disk I/O performance, at least to some degree, is to mount the partition with noatime and nodiratime, unless you actually need access times. They are not really important here, so I recommend doing that. You might also consider a solid-state drive.
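For instance, an /etc/fstab entry along these lines would disable access-time updates for both files and directories (the device, mount point, and filesystem type are placeholders for your own setup):

```
# /etc/fstab — example only; substitute your own device and mount point
/dev/sdb1  /var/www/static  ext4  defaults,noatime,nodiratime  0  2
```

An already-mounted filesystem can pick up the options without a reboot via `mount -o remount,noatime,nodiratime /var/www/static`.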

öde

I think the correct answer here depends on how the files will be indexed... what determines when a given file is selected for delivery.

If you're already making a database query to determine the file name, you may find that you're better off keeping the file right there in the database record. Alternatively, you may get the best results by tweaking some paging settings in your database of choice (for example, larger pages to accommodate all the blob records) and then storing the files in the database, or you may find that you're still better off using the file system.

The database option has a slightly better chance of working out because, with a million records, it's unlikely that every file is equally likely to be queried. If you're in a situation where one file may be queried several times in a row, or nearly in a row, the database can act as a de facto cache for recently retrieved files, in which case you'll often have the file already loaded in memory. You may need to carefully tune the internals of your database engine to get the behavior you want.
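As a rough illustration of the "keep the blob in the database" idea (SQLite and the schema here are purely hypothetical examples; any database with blob support would do), the already-gzipped bytes can be stored and served back directly:

```python
# Sketch: store pre-gzipped content as blobs and fetch them by name.
import sqlite3

conn = sqlite3.connect("static_content.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS files (name TEXT PRIMARY KEY, body BLOB)"
)

def put(name, gzipped_bytes):
    """Insert or overwrite one file's compressed bytes."""
    conn.execute(
        "INSERT OR REPLACE INTO files (name, body) VALUES (?, ?)",
        (name, gzipped_bytes),
    )
    conn.commit()

def get(name):
    """Return the compressed bytes, or None if the name is unknown."""
    row = conn.execute(
        "SELECT body FROM files WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None
```

In SQLite specifically, the page size mentioned above can be adjusted with `PRAGMA page_size` before the database is populated; other engines expose similar knobs under different names.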

But the main thing to take away from my answer is that you don't really know what will work best until you try it with some representative test data and measure the results.

Joel Coel

With modern filesystems this shouldn't be much of a problem. I've tested XFS with 1 billion files in the same directory, and I'm pretty sure ext4 will do fine too (as long as the filesystem itself is not too big). Have enough memory to cache the directory entries; a bigger processor cache will help a lot too.
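If you want to check how your own filesystem copes before committing, a quick and admittedly crude test along these lines creates a pile of tiny files in one directory and times random reads (the directory, file count, and payload size are placeholders to adjust for your setup):

```python
# Crude benchmark sketch: create N tiny files in a single directory,
# then time random lookups through the directory.
import os
import random
import time

TARGET = "/tmp/fs_test"   # test directory (placeholder)
N = 100_000               # number of files; raise this for a more realistic test

os.makedirs(TARGET, exist_ok=True)
payload = b"x" * 512      # ~0.5K per file, similar to the 90% case

for i in range(N):
    with open(os.path.join(TARGET, f"f{i:07d}.gz"), "wb") as f:
        f.write(payload)

start = time.time()
for _ in range(10_000):
    name = os.path.join(TARGET, f"f{random.randrange(N):07d}.gz")
    with open(name, "rb") as f:
        f.read()
print(f"10,000 random reads took {time.time() - start:.2f}s")
```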

wazoox
  • EXT filesystems do not cope very well with a high file count in the same directory, especially not with the default dir_index settings. I didn't test XFS with such a high file count in one directory, but I'm pretty certain EXT will not work with anything remotely close to 1 billion files in the same directory. – Hrvoje Špoljar Mar 04 '12 at 22:10
  • I heard reiserfs is good for small files, but then I also heard that the guy who maintains the software is in prison(!), so the near future of reiserfs is pretty uncertain. I'd personally go for EXT4, with XFS as a second choice. Isn't XFS best for large files? – öde Mar 05 '12 at 17:01
  • It used to be, but if you're running a fresh kernel (3.0 and higher) it works fine for small files too. – wazoox Mar 05 '12 at 21:53