20

I'm curious if anyone out there has done any experimentation or comparisons between file systems and database performance. On Linux, I'm wondering what the optimal file system is for a Postgres database. Also, what settings (inode, etc.) are ideal for it? Is this something that may drastically differ based on the data in the database?

If you are looking for a question relating to general filesystem / database performance, this post has some good information.

However, I would like to get as much advice on insert performance, as opposed to read performance, as possible. Thanks for all of the great answers!

Elijah
  • 7
    The best filesystem would be more memory? ;) – Oskar Duveborn May 29 '09 at 09:08
  • 2
    +1 for Oskar. We just went from a server config where RAM was ~33% of the total size of the DB to a new machine where total RAM was greater than the size of the DB. Now we can cache the entire DB in memory. Our slowest SQL query is now 2 orders of magnitude faster. – KevinRae Jul 08 '09 at 23:34

7 Answers

15

Buy a copy of "PostgreSQL High Performance" by Greg Smith. It is a great book, and two or more chapters are about disk hardware and filesystems. You will learn a lot.

In short: there is no short answer.

But I will try to summarize:

  • don't use ext2 unless you know what you are doing.
  • with ext3, beware of checkpoint spikes caused by fsync calls (see pages 79, 82, and 113)
  • use ext4 or xfs
  • there are other options

But as you are really asking yourself which FS to use, you should read the book!
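
In the meantime, if you want to know what you are running on right now, here is a minimal sketch (the PGDATA path is an assumption, adjust it for your installation) that looks up the filesystem type and mount options of the data directory in /proc/mounts:

    # Minimal sketch: report which filesystem and mount options the PostgreSQL
    # data directory lives on, by matching the longest mount point from /proc/mounts.
    import os

    PGDATA = "/var/lib/postgresql/data"   # assumed data directory location

    data_dir = os.path.realpath(PGDATA)
    best = None
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options = line.split()[:4]
            if data_dir.startswith(mountpoint) and \
               (best is None or len(mountpoint) > len(best[1])):
                best = (device, mountpoint, fstype, options)

    if best:
        print("device=%s mountpoint=%s fstype=%s options=%s" % best)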

Janning
  • 1,191
  • 1
  • 19
  • 35
  • 4
    Agreed, this is the sort of topic Greg covers very well. There is a sample chapter at https://www.packtpub.com/sites/default/files/0301OS-Chapter-2-Database-Hardware.pdf if you would like to evaluate before borrowing or buying the book. – sciurus Apr 30 '11 at 01:37
  • 1
    Funny, when I was having this problem, the book didn't exist. Now I'm really grateful for the effort Greg put into that book. – Elijah May 31 '11 at 16:57
  • 1
    I bought another copy just to honour this great work :-) – Janning Nov 24 '11 at 07:40
7

First of all, you want a reliable filesystem, and a fast one second. Which rules out some options...

Performance testing shows that XFS often gives the best performance. It has had some stability issues in disk-very-close-to-full scenarios, but as long as you monitor to make sure that doesn't happen, it will give you slightly better performance.

In theory you don't need a journaling filesystem for the pg_xlog directory, but the difference in speed is usually so small it's just not worth it. For the data directory, you really should always have a metadata journaling filesystem.

Magnus Hagander
  • 4
    You might want to /not/ use XFS to store a database, namely because it will (when needed) zero out blocks that it can't recover. – Avery Payne Jun 26 '09 at 21:02
4

Database management systems implement their own journalling through the database logs, so installing such a DBMS on a journalled file system degrades performance through three mechanisms:

  1. Redundant journalling increases the amount of disk activity

  2. Physical disk layout can be fragmented (although some journalling file systems do have mechanisms to clean this up).

  3. Lots of disk activity can fill up the journal, causing spurious 'disk full' conditions.

I saw an instance some years ago where this was done with the LFS file system for a Baan installation on an HP-UX box. The system had persistent performance and data corruption issues that went undiagnosed until someone worked out that the file systems were formatted with LFS.

Volumes holding database files will normally have a small number of large files. DBMS servers will normally have a setting that configures how many blocks are read in a single I/O. Smaller numbers would be appropriate for high-volume transaction processing systems, as they would minimise caching of redundant data. Larger numbers would be appropriate for systems such as data warehouses that do a lot of sequential reads. If possible, tune your file system allocation block size to be the same size as the multi-block read that the DBMS is set to.

Some database management systems can work off raw disk partitions. This gives varying degrees of performance gain, typically less so on a modern system with lots of memory. On older systems with less space to cache file system metadata the savings on disk I/O were quite significant. Raw partitions make the system harder to manage, but provide the best performance available.

RAID-5 volumes incur more write overhead than RAID-10 volumes, so a busy database with lots of write traffic will perform better (often much better) on RAID-10. Logs should be put on disk volumes physically separate from the data. If your database is large and mostly read-only (e.g. a data warehouse) there may be a case for putting it on RAID-5 volumes, if this does not unduly slow down the load process.

Write-back caching on a controller can give you a performance win at the expense of creating some (reasonably unlikely but possible) failure modes where data could be corrupted. The biggest performance win for this is on highly random access loads. If you want to do this, consider putting the logs on a separate controller and disabling write-back caching on the log volumes. The logs will then have better data integrity and a single failure cannot take out both the log and data volumes. This allows you to restore from a backup and roll forward from the logs.

  • 1
    Journalling *data* degrades performance; journalling metadata should have at worst minimal impact, and most likely, almost none. Not journalling metadata is unadvisable. – niXar May 29 '09 at 09:40
  • I think you misunderstood the article. Any filesystem at all has filesystem metadata and any disk traffic will involve reading or writing this. Modern computers usually have enough RAM to easily cache this file system metadata but older machines didn't. This meant that disk accesses incurred significant additional I/O overhead (the oft-quoted figure for Oracle was a 30% performance hit over raw partitions) for reading or updating the file system's metadata. On a modern system with more RAM the file system metadata is more likely to be cached, so the overhead is lower. – ConcernedOfTunbridgeWells May 29 '09 at 10:56
  • This contains some good general advice, but I downvoted because it also contains information that is irrelevant or incorrect for postgresql and modern journaled filesystems. – sciurus Apr 30 '11 at 01:49
4

I wrote such a detailed report, but it is only in French. If you read French or are happy with automatic translation tools, you can reuse the methodology and run it for yourself.

Executive summary: I used pgbench. The Linux I/O scheduler has very little effect on performance, and the filesystem only a little. So, if you are in a hurry, just choose the default. I chose JFS.
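
For reference, here is a rough sketch of the kind of pgbench run the comparison was based on; the database name, scale factor and duration are placeholders, not the exact values from my report:

    # Rough sketch: initialize pgbench tables and run a timed test, then repeat
    # after switching the I/O scheduler or remounting on another filesystem.
    import subprocess

    DB = "bench"   # assumed test database, created beforehand with createdb

    subprocess.check_call(["pgbench", "-i", "-s", "10", DB])          # initialize at scale factor 10
    result = subprocess.run(["pgbench", "-c", "8", "-T", "300", DB],  # 8 clients, 5 minutes
                            capture_output=True, text=True)
    print(result.stdout)   # look for the "tps = ..." lines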

bortzmeyer
3

The filesystem is only part of the problem. You can get a significant performance boost by changing your I/O scheduler. Fortunately this is fairly easy to test, as you can change the I/O scheduler on the fly. I'd suggest trying each one for a couple of days under a typical load and seeing which gives the best performance.
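
For example, here is a minimal sketch of switching the scheduler through sysfs; the device name is an assumption, it needs root, and the change does not survive a reboot:

    # Minimal sketch: show and change the I/O scheduler of one block device.
    SCHED = "/sys/block/sda/queue/scheduler"   # assumed device name

    with open(SCHED) as f:
        print("available/current:", f.read().strip())   # current scheduler shown in [brackets]

    with open(SCHED, "w") as f:
        f.write("deadline")   # switch to the deadline scheduler on the fly

Once you have picked a winner, you can make it permanent with the elevator= kernel boot parameter.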

David Pashley
2

I did some testing a few months ago:

I had a small test program that created 50 threads, where every thread inserted 1000 (or maybe it was 10000) rows into the same table.
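
(Below is a rough reconstruction of that kind of test, not the original program; the connection string and table are my assumptions, and it commits every insert so each row forces a WAL flush.)

    # Rough reconstruction: 50 threads, each inserting 1000 rows into one table.
    # Assumes psycopg2 and an existing table: CREATE TABLE insert_test(id int, payload text);
    import threading, time
    import psycopg2

    DSN = "dbname=test user=postgres"   # assumed connection string
    THREADS, ROWS = 50, 1000

    def worker(n):
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        for i in range(ROWS):
            cur.execute("INSERT INTO insert_test (id, payload) VALUES (%s, %s)",
                        (n * ROWS + i, "x" * 100))
            conn.commit()   # commit per row so every insert hits pg_xlog
        conn.close()

    start = time.time()
    threads = [threading.Thread(target=worker, args=(n,)) for n in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("elapsed: %.1f seconds" % (time.time() - start))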

  • With the database on EXT3 and a 4 disk RAID5 it took 50 seconds.
  • With the table on a ramdisk (using a tablespace) it still took 50 seconds. The reason it wasn't faster is that everything is logged in the pg_xlog directory, which was still on the same RAID 5.
  • I moved the pg_xlog to a 4 disk RAID0 (stripe) and the same program ran in 40 seconds.
  • For testing purposes I moved the pg_xlog to the ramdisk and had everything else on the EXT3 4 disk RAID. The program finished in less than 5 seconds.

But having the pg_xlog on a software ramdisk is not an option: if you lose the contents of the pg_xlog directory, postgres will not start. (But there exist hardware ramdisks with battery backup that might be of interest.)

IMHO: Use the filesystem you are most comfortable with for the database files. Move the pg_xlog (with a symlink, see the documentation) to the fastest possible device you have.
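
A minimal sketch of that move, assuming the paths shown below; stop PostgreSQL first and check the documentation before trying this on a real cluster:

    # Minimal sketch: relocate pg_xlog to a faster device and symlink it back.
    # Both paths are assumptions -- adjust them for your installation.
    import os, shutil

    pgdata = "/var/lib/postgresql/data"   # assumed data directory
    fast = "/mnt/fast_disk/pg_xlog"       # assumed target on the fast device

    old = os.path.join(pgdata, "pg_xlog")
    shutil.move(old, fast)                # move the WAL onto the fast device
    os.symlink(fast, old)                 # leave a symlink at the old location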

some
0

I seem to remember that a tweaked FreeBSD will give you a bit more performance compared to other OSes, though I am sure that this info is outdated and probably a myth in the first place. But you can try it out nevertheless; see this guide for the kernel settings: http://developer.postgresql.org/pgdocs/postgres/kernel-resources.html