The operations you describe give some key hints as to what the ideal file-system needs to be able to do:
- Massively random r/w accesses during the build process.
- Many, many files getting updated in short order, so fast meta-data operations are critical.
- Efficient handling of many small files, on filesystems that may hold a very large number of files overall.
- Mature enough not to risk data-loss in infrequent and obscure edge-cases.
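One way to get a feel for how a candidate filesystem handles the first three points is a quick micro-benchmark that creates, touches, and deletes a large batch of small files, which is roughly the meta-data churn a build generates. A rough sketch (the file count and the use of a temp directory are arbitrary choices, not part of any standard tool):

```python
import os
import tempfile
import time

def metadata_benchmark(count=5000):
    """Time create/update/delete of many small files in a temp dir.

    Returns elapsed seconds -- a crude proxy for meta-data op speed
    on whatever filesystem backs the temporary directory.
    """
    with tempfile.TemporaryDirectory() as workdir:
        start = time.perf_counter()
        # Create many small files (lots of inode/dirent churn).
        for i in range(count):
            with open(os.path.join(workdir, f"obj{i}.o"), "w") as fh:
                fh.write("x")
        # Touch them again (pure meta-data updates: mtime/atime).
        for i in range(count):
            os.utime(os.path.join(workdir, f"obj{i}.o"))
        # Delete them all (more meta-data churn).
        for i in range(count):
            os.remove(os.path.join(workdir, f"obj{i}.o"))
        return time.perf_counter() - start

if __name__ == "__main__":
    print(f"{metadata_benchmark():.2f}s for one create/touch/delete pass")
```

Point the temp directory at the partition you're comparing (e.g. via the `TMPDIR` environment variable) and run it a few times; the spread between filesystems on the same disk is usually obvious.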
Btrfs and Ext4 each satisfy the first three of those; the fourth is where they differ. Ext4 is probably mature enough, but btrfs isn't done baking yet. Mounting with noatime
helps make the meta-data operations more efficient, but when you're creating a bunch of new files, you still need meta-data ops to be screamingly fast.
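noatime is set per mount point, typically in /etc/fstab. A sketch of what that entry might look like (the device name and mount point here are placeholders for your own build partition):

```
# /etc/fstab -- dedicated build partition, example paths
/dev/sdb1   /build   ext4   noatime   0   2
```

You can also apply it to an already-mounted filesystem without a reboot via `mount -o remount,noatime /build`.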
That's when the underlying storage starts becoming a factor. XFS tends to concentrate its meta-data operations in a few blocks, which can turn them into a bottleneck under heavy churn. The Ext-style filesystems are better about keeping the meta-data close to the data it describes. However, if your storage is sufficiently abstracted (you're running in a VPS, or attached to a SAN), the difference doesn't matter significantly.
Each filesystem has little tweaks that can be done to eke out a few more percentage points, but how much gain you'll actually see depends heavily on the performance of the underlying storage.
In storage parlance, if your storage has enough I/O-operations headroom, filesystem inefficiencies stop mattering so much. If you use an SSD for your build partition, filesystem choice is less important than picking whatever you're more comfortable working with.