In the absence of specific data about how you use your system, the best partitioning scheme to adopt is a single partition. For best performance, create the partition at the outer edge of the disk, make it no larger than needed for your files, and leave the rest of the disk unused.
If you make multiple partitions you risk decreasing performance, as the disk is forced to make head movements between groups of files at the start of each partition and to maintain multiple sets of filesystem metadata.
In theory you can optimise the placement of frequently accessed files, but partitioning is an extremely crude way to achieve this, and done without careful gathering of statistics it is unlikely to achieve any benefit. For example, on my PC I suspect the most used files are the registry and Chrome's cache directory. Constructing a partitioning scheme around that would be difficult: the most-used files may be scattered across disparate folders.
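As a sketch of what such statistics-gathering might look like (a hypothetical example, not part of the original answer), the script below walks a directory tree and ranks files by most recent access time. Note the caveat: many systems mount filesystems with relatime or noatime, which makes st_atime a coarse or stale indicator of real usage.

```python
import heapq
import os


def most_recently_accessed(root, limit=10):
    """Walk a directory tree and return the `limit` files with the
    newest access times, as (st_atime, path) tuples, newest first.
    On filesystems mounted noatime/relatime this is only a rough
    indicator of which files are actually used most."""
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                atime = os.stat(path).st_atime
            except OSError:
                continue  # file vanished or unreadable; skip it
            entries.append((atime, path))
    return heapq.nlargest(limit, entries)


if __name__ == "__main__":
    # Example: rank the ten most recently accessed files under $HOME.
    for atime, path in most_recently_accessed(os.path.expanduser("~"), 10):
        print(f"{atime:.0f}  {path}")
```

Even a crude survey like this tends to show the hot files spread across many directories, which is exactly why partition-level placement is such a blunt instrument.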
Update
As MSalters commented, the designers of filesystems like NTFS and ext4 go to considerable lengths to optimise their performance. Of course, they also place a high value on reliability and resilience, which means making trade-offs that affect performance.
Opinion: as with so many things, it is therefore often counter-productive for end-users to try to second-guess the decisions made by operating-system developers. For most of us it is best to configure systems the way the OS designers expect most people to: set things up in the simplest and most straightforward way, and accept most of the defaults suggested by the installer. Only if your use-case is very unusual and performance-critical might it be worthwhile tuning the installation manually. For example, if I were asked to build a commercial cluster of dedicated Oracle DBMS servers, rather than worry about raw vs. cooked filesystems I'd probably just use Oracle's Linux distro and expect it to do the right thing; if serious money were involved I would pay an Oracle consultant to make sure the right configuration options were selected. For the average desktop PC this should be completely unnecessary.
1 – Not great, and not always the same. The way data is stored physically on sniping drives does not relate to how partitions are made. On a 100 GB HDD, if you make a 0–90 GB partition and a 90–100 GB partition, you might think the 90–100 GB partition is faster because it's on the outside ring of the platter, but that is not always the case. Besides, at today's speeds, is an extra 5 MB/s going to load your game twice as fast? Maybe 0.5 seconds faster. You want speed? Store it once on an SSD and load from there. Now that's 100X faster! – Piotr Kula – 2012-01-06T10:43:36.647
By 'sniping drives' ppumkin means 'standard' platter-based hard drives, great term! :) – HaydnWVN – 2012-01-06T11:08:50.610
1 – I would argue against the claim that there was ever a performance gain "back in the day". As you don't even mention a timeframe (or provide a source), I am going to file your statement as an old greybeard tale (similar to an old wives' tale). – Ramhound – 2012-01-06T13:08:54.497
@Ramhound – yes, it was meant as half humor. It could be that nothing has ever changed and that any performance gain was always pure fiction. Or perhaps, when computers were much slower, any performance gain was worth it and there was once some truth to this... It's hard to judge looking back, and I was winking at the readers :) – Jonathan – 2012-01-07T07:42:59.737