Database management systems implement their own journalling through the database logs, so installing such a DBMS on a journalled file system degrades performance through several mechanisms:

- Redundant journalling increases the amount of disk activity.
- The physical disk layout can become fragmented (although some journalling file systems have mechanisms to clean this up).
- Heavy disk activity can fill up the journal, causing spurious 'disk full' conditions.
I saw an instance some years ago where this was done with an LFS file system on a Baan installation on an HP/UX box. The system had persistent performance and data corruption issues that went undiagnosed until someone worked out that the file systems had been formatted with LFS.
Volumes holding database files will normally have a small number of large files. DBMS servers will normally have a setting that configures how many blocks are read in a single I/O. Smaller numbers are appropriate for high-volume transaction processing systems, as they minimise caching of redundant data. Larger numbers are appropriate for systems such as data warehouses that do a lot of sequential reads. If possible, tune your file system allocation block size to match the multi-block read size that the DBMS is set to.
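As a rough illustration of the arithmetic (the block size and read count below are assumptions for the sketch, not values from any particular DBMS):

```python
# Back-of-envelope sketch using hypothetical numbers: work out how big one
# multi-block read is so the file system allocation unit can be set to match.
db_block_size = 8 * 1024          # assumed DBMS block size: 8 KiB
multiblock_read_count = 16        # assumed blocks fetched per I/O

io_size = db_block_size * multiblock_read_count
print(f"each multi-block read transfers {io_size // 1024} KiB")
# With these numbers, a 128 KiB file system allocation unit lets each
# multi-block read map onto a single contiguous extent.
```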
Some database management systems can work off raw disk partitions. This gives varying degrees of performance gain, typically less on a modern system with lots of memory; on older systems with less memory to cache file system metadata, the savings in disk I/O were quite significant. Raw partitions make the system harder to manage, but they provide the best performance available.
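For illustration only, here is a minimal Linux-specific sketch of the underlying idea using direct I/O (the device path is hypothetical, and O_DIRECT on a file system is not the same thing as a true raw partition, but it shows the kind of cache-bypassing access a DBMS configured for raw devices performs):

```python
import mmap
import os

# Minimal sketch, Linux-specific, hypothetical device path: open a device with
# O_DIRECT so reads bypass the file system page cache, which is the same
# effect a DBMS gets from working off a raw partition.
BLOCK = 4096                                  # logical block size (assumed)

fd = os.open("/dev/sdb1", os.O_RDONLY | os.O_DIRECT)
try:
    # O_DIRECT requires block-aligned buffers; an anonymous mmap is
    # page-aligned, which satisfies that requirement here.
    buf = mmap.mmap(-1, BLOCK)
    nread = os.readv(fd, [buf])               # read straight from the device
    print(f"read {nread} bytes without touching the page cache")
finally:
    os.close(fd)
```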
RAID-5 volumes incur more write overhead than RAID-10 volumes, so a busy database with lots of write traffic will perform better (often much better) on RAID-10. Logs should be put on disk volumes physically separate from the data. If your database is large and mostly read-only (e.g. a data warehouse), there may be a case for putting it on RAID-5 volumes if this does not unduly slow down the load process.
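The reasoning is the classic write-penalty arithmetic: RAID-10 turns each logical write into 2 disk writes, while RAID-5 turns it into 4 disk I/Os (read data, read parity, write data, write parity). A back-of-envelope sketch with assumed workload numbers:

```python
# RAID write-penalty arithmetic with illustrative numbers (per-disk IOPS,
# spindle count and write fraction are assumptions, not from the text).
disk_iops = 150                   # assumed IOPS per spindle
disks = 8                         # assumed spindles in the array
write_fraction = 0.4              # assumed share of writes in the workload

raw_iops = disk_iops * disks
for name, write_penalty in (("RAID-10", 2), ("RAID-5", 4)):
    # each logical write expands into 'write_penalty' disk I/Os
    effective = raw_iops / ((1 - write_fraction) + write_fraction * write_penalty)
    print(f"{name}: ~{effective:.0f} logical IOPS")
```

With this assumed 40% write mix, the RAID-10 volume sustains roughly half again as many logical IOPS as the RAID-5 one, which is why write-heavy databases favour RAID-10.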
Write-back caching on a controller can give you a performance win at the expense of creating some (reasonably unlikely but possible) failure modes where data could be corrupted. The biggest performance win for this is on highly random access loads. If you want to do this, consider putting the logs on a separate controller and disabling write-back caching on the log volumes. The logs will then have better data integrity and a single failure cannot take out both the log and data volumes. This allows you to restore from a backup and roll forward from the logs.