Okay, let's take the easy question here. SSDs are particularly excellent at random reads. So long as you are reading a full block, it makes no practical difference whether that block sits immediately after the last one you read or 'halfway across the disk'. In fact, you can't even tell. Your operating system may think it is storing data in sequential locations on the SSD, but the SSD itself may well map them to opposite sides of the 'storage'. So, on SSDs, you practically don't need to worry about defragmentation of files. Except. I'll get back to this.
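If you want to see that idea in miniature, here's a toy sketch of a flash translation layer. Everything here is made up for illustration (real FTLs also juggle wear levelling, garbage collection, and so on), but it shows why 'sequential' logical blocks can land anywhere physically:

```python
import random

# Toy model of an SSD's flash translation layer (FTL).
# Logical block addresses (what the OS sees) map to arbitrary
# physical pages; "sequential" writes can land anywhere.
class ToyFTL:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))
        random.shuffle(self.free_pages)  # wear levelling picks scattered pages
        self.mapping = {}                # logical block -> physical page

    def write(self, logical_block):
        self.mapping[logical_block] = self.free_pages.pop()

    def read(self, logical_block):
        return self.mapping[logical_block]

ftl = ToyFTL(num_pages=1024)
for lba in range(4):          # the OS writes "sequential" blocks 0..3
    ftl.write(lba)
for lba in range(4):
    print(lba, "->", ftl.read(lba))   # the physical pages are scattered
```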
Okay, now let's turn to regular mechanical disks. These are much faster at sequential reads than they are at random reads. Much, much faster.
Now, if I'm playing Half Life 3 and it has to load data files, my computer is going to have a much easier time if those data files are defragmented and stored in close proximity. Roughly (very roughly), defragmenting converts random reads to sequential reads.
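If you want to convince yourself, here's a rough benchmark sketch. The file path is hypothetical, and you'd want a test file larger than RAM (or freshly dropped caches) so the OS cache doesn't hide the seeks. On an SSD you'll see little difference; on a mechanical disk the random pattern should be dramatically slower:

```python
import os
import random
import time

# Rough sketch: compare sequential vs random reads of the same file.
PATH = "testfile.bin"   # hypothetical large test file
CHUNK = 4096

size = os.path.getsize(PATH)
offsets = list(range(0, size - CHUNK, CHUNK))

def read_at(f, offs):
    start = time.perf_counter()
    for off in offs:
        f.seek(off)
        f.read(CHUNK)
    return time.perf_counter() - start

with open(PATH, "rb") as f:
    seq = read_at(f, offsets)      # offsets in order: sequential reads
    random.shuffle(offsets)
    rnd = read_at(f, offsets)      # same offsets shuffled: random reads

print(f"sequential: {seq:.2f}s  random: {rnd:.2f}s")
```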
You are granting that it's obviously faster to have the map data file and the character data file each defragmented and stored right next to each other than to have both files scattered all over the disk.
But... you posit a rather strange scenario. You are suggesting that Half Life 3 needs to load only PART of the map data file and PART of the character data file: say, the first 10% of the map data file and the middle 10% of the character data file.
In that case, the optimum layout would be to place the first 10% of the map data on disk immediately followed by the middle 10% of the character data. As you aren't loading anything else (in this contrived example), it doesn't matter where anything else is.
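For concreteness, here's what that access pattern looks like at the file API level (the file names and fractions are hypothetical); the drive only has to visit the regions you actually read:

```python
import os

# Sketch of the contrived access pattern: read only the first 10% of
# one file and the middle 10% of another. File names are hypothetical.
def read_fraction(path, start_frac, frac=0.10):
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(int(size * start_frac))
        return f.read(int(size * frac))

map_chunk  = read_fraction("map.dat",   start_frac=0.00)  # first 10%
char_chunk = read_fraction("chars.dat", start_frac=0.45)  # middle 10%
```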
So yes. In this specific case, it'd be helpful if the data were fragmented.
Now, returning to the SSD. It turns out that SSDs have to read and write a page at a time. The exact page size depends on the SSD, but may be 2 KB, 4 KB, 8 KB, 16 KB, or some other size. My point here is that if the SSD page size is 16 KB, and that's the minimum unit you can load, then a situation where the 10% of the map data and the 10% of the character data that you need sit in the same page is going to be faster. It's faster to load one page than to load two.
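To put numbers on it, here's a quick sketch assuming a 16 KB page size (the size varies by drive, as noted). Packing the two needed pieces back to back halves the page reads:

```python
PAGE = 16 * 1024   # assume a 16 KB page size

def pages_touched(offset, length, page=PAGE):
    """Number of pages a read of `length` bytes at `offset` touches."""
    first = offset // page
    last = (offset + length - 1) // page
    return last - first + 1

# Two 6 KB pieces stored in separate pages: two page reads.
print(pages_touched(0, 6144) + pages_touched(PAGE, 6144))  # -> 2

# The same two pieces packed back to back: they share one page,
# so the device only has to load it once.
print(pages_touched(0, 6144 + 6144))                       # -> 1
```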
So. Yes, there are some circumstances where purposely fragmenting the data speeds up your access. But it's hard to imagine why you'd ever try to optimise for this case. Indeed, most of the time you want to load all of a file, not just the first 10%. And modern operating systems cache files anyway, so there's a decent chance that when you move from one map location to another in Half Life 3, the map data is already in the file system cache and you don't have to load anything from disk at all.
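You can see the cache effect for yourself with something like this sketch (the file name is hypothetical, and the first read is only genuinely 'cold' if the file isn't already cached; on Linux, a root user can first drop the caches with `echo 3 > /proc/sys/vm/drop_caches`):

```python
import time

PATH = "map.dat"   # hypothetical file, ideally one not read recently

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    return time.perf_counter() - start

cold = timed_read(PATH)   # may actually hit the disk
warm = timed_read(PATH)   # almost certainly served from the OS file cache
print(f"cold: {cold:.4f}s  warm: {warm:.4f}s")
```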
One interesting option is the SSHD, the hybrid drive. These are (practically) combinations of mechanical drives with an SSD cache. How is that relevant here? Well, roughly speaking, hybrid drives move frequently-accessed content into a faster storage area, migrating the data from the spinning platters to the SSD part. If you always loaded the first 10% of the map data and the middle 10% of the character data and never loaded anything else, and if the hybrid drive's algorithm was good, that data would end up on the faster SSD part. To some extent, then, this accomplishes what you are setting out to do. Note that the regular file system cache accomplishes the same thing, except that its effect is probably faster but lasts only until you reboot.
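For the flavour of it, here's a toy sketch of that promotion idea. Real SSHD firmware is proprietary and far more sophisticated, so treat every name and policy here as an assumption:

```python
from collections import Counter

# Toy sketch of the promotion idea behind a hybrid drive: count how
# often each block is read and pin the hottest ones in fast storage.
class ToyHybrid:
    def __init__(self, ssd_slots=2):
        self.hits = Counter()
        self.ssd_slots = ssd_slots
        self.ssd = set()

    def read(self, block):
        self.hits[block] += 1
        # Promote the most frequently read blocks to the SSD tier.
        self.ssd = {b for b, _ in self.hits.most_common(self.ssd_slots)}
        return "ssd" if block in self.ssd else "hdd"

drive = ToyHybrid()
pattern = ["map_first_10pct", "char_middle_10pct"] * 5 + ["other"]
for block in pattern:
    drive.read(block)
print(drive.ssd)   # the two frequently-read regions end up on the SSD tier
```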
TL;DR: Yeah. But seriously, it's all but guaranteed to be a worthless optimisation.
No; purposefully fragmenting the data would make it slower, not faster. Why don't you just remove the fragmentation on the used files? Most software designed to run a defragmentation routine allows you to select which files it will be run on. – Ramhound – 2015-10-23T21:08:46.113
This is an incredibly rare case where I disagree with @Ramhound. As I write my answer, I face the possibility that I may be about to look stupid. – ChrisInEdmonton – 2015-10-23T21:17:02.167
Wouldn't disk cache solve the problem quite easily? – some user – 2015-10-23T21:24:29.273
@Ramhound I strongly suspect that defragmentation is the go-to way to solve this only because nobody has cared to measure precisely which data is used most often and optimize it the way I describe. And that's understandable, because it may vary between systems and of course usage, in addition to being a pretty tricky algo. Still, I'd like to know if there would be any additional overheads I might not have thought of. – user1306322 – 2015-10-23T21:24:51.307
@user1306322 - If you remove the fragmentation from those files that are not used often and are not updated that often, then they will remain in that state; only files that are updated or changed become fragmented. – Ramhound – 2015-10-23T21:39:02.533
Turns out, my 'disagreement' with @Ramhound is purely pedantic. Practically speaking, we are both taking the same position I think. :) – ChrisInEdmonton – 2015-10-23T21:43:21.317