Nearly two decades ago, the landscape would have been dominated by Windows 98 through XP, with NT4 and 2000 on the workstation/server side.
All hard drives would also have been PATA- or SCSI-cabled magnetic storage, as SSDs cost more than the computer and SATA did not yet exist.
As WooShell's answer says, the lower logical sectors on the drive (which sit on the outside of the platter) tend to be the fastest. My 1TB WDC Velociraptor drives start out at 215MB/s but drop to 125MB/s at the innermost sectors, a roughly 40% drop. And that is a 2.5" platter drive, so most 3.5" drives generally see an even larger drop in performance, greater than 50%. This is the primary reason for keeping the main partition small, but it only applies where the partition is small relative to the size of the drive.
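To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The 215/125 MB/s figures are the ones from my drive above, and the linear fall-off is an assumption for illustration only; real drives step down in zones:

    # Sequential throughput on a spinning disk falls from the outermost
    # (lowest-LBA) tracks to the innermost ones. The figures below are the
    # VelociRaptor measurements quoted above; yours will differ.
    outer_mb_s = 215.0   # throughput near the start of the drive (outer tracks)
    inner_mb_s = 125.0   # throughput near the end of the drive (inner tracks)

    print(f"drop across the drive: {1 - inner_mb_s / outer_mb_s:.0%}")   # ~42%

    # If the OS partition occupies only the first 10% of the drive, it stays on
    # the fast end of that curve. Assuming a linear fall-off (a simplification):
    def avg_throughput(start_frac, end_frac):
        mid = (start_frac + end_frac) / 2
        return outer_mb_s + (inner_mb_s - outer_mb_s) * mid

    print(f"average over first 10% of the drive: {avg_throughput(0.0, 0.1):.0f} MB/s")
    print(f"average over the whole drive:        {avg_throughput(0.0, 1.0):.0f} MB/s")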
The other main reason to keep the partition small was if you were using FAT32 as the file system, which Windows would not format at sizes larger than 32GB. If you were using NTFS, partitions up to 2TB were supported prior to Windows 2000, and up to 256TB after that.
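If you are curious where those ceilings come from, the arithmetic is simple. This is a sketch only: the exact limits depend on the OS version and on MBR vs. GPT partitioning, and the 32GB figure is a limit of Windows' format tool rather than of FAT32 itself:

    SECTOR = 512            # bytes, the classic sector size
    KB, TB = 1024, 1024**4

    # A 32-bit sector count (FAT32 volumes, MBR partitions) caps a volume at:
    print(f"2^32 sectors  * 512 B = {2**32 * SECTOR / TB:.0f} TB")        # 2 TB

    # In practice Windows limits NTFS to 2^32 clusters, so the ceiling scales
    # with the cluster size:
    for cluster in (4 * KB, 64 * KB):
        print(f"2^32 clusters * {cluster // KB:2d} KB = {2**32 * cluster / TB:.0f} TB")
    # 4 KB clusters  -> 16 TB
    # 64 KB clusters -> 256 TB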
If your partition was too small relative to the amount of data that would be written, it became fragmented more easily and was more difficult to defragment. Or you could simply run out of space, as happened to you. If you had too many files relative to the partition and cluster sizes, managing the file table could be problematic, and it could affect performance. If you are using dynamic volumes for redundancy, keeping the redundant volumes as small as necessary will save space on the other disks.
Today things are different. Client storage is dominated by flash SSDs or flash-accelerated magnetic drives. Storage is generally plentiful, and it is easy to add more to a workstation, whereas in the PATA days you might have had only a single unused drive connection for additional storage devices.
So is this still a good idea, or does it have any benefit? That depends on the data you keep and how you manage it. My workstation C: is only 80GB, but the computer itself has well over 12TB of storage, spread across multiple drives. Each partition only contains a certain type of data, and the cluster size is matched to both the data type and the partition size, which keeps fragmentation near 0, and keeps the MFT from being unreasonably large.
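As a rough illustration of the MFT point: NTFS keeps one file record (1KB by default) per file or directory, so the MFT grows with the number of files, not their size. The file counts below are made up for illustration, not measurements from my machine:

    MFT_RECORD = 1024        # bytes per NTFS file record (the default)
    GB = 1024**3

    for n_files in (50_000, 500_000, 5_000_000):
        print(f"{n_files:>9,} files -> MFT of roughly {n_files * MFT_RECORD / GB:.2f} GB")

    # A partition holding a few thousand large video files keeps a tiny MFT;
    # one holding millions of small files needs gigabytes just for metadata,
    # which is one reason to separate the two.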
The downside is that there is unused space, but the performance increase more than compensates, and if I want more storage I add more drives. C: contains the operating system and frequently used applications. P: contains less commonly used applications, and is a 128GB SSD with a lower write-durability rating than C:. T: is on a smaller SLC SSD and contains user and operating-system temporary files, including the browser cache. Video and audio files go on magnetic storage, as do virtual machine images, backups, and archived data; these generally use 16KB or larger cluster sizes, and reads/writes are dominated by sequential access. I run defrag only once a year on partitions with high write volume, and it takes about 10 minutes to do the whole system.
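The cluster-size trade-off is easy to estimate, too: on average each file wastes about half a cluster in slack, so large clusters only make sense where the files themselves are large. The file counts here are again invented purely for illustration:

    KB, MB, GB = 1024, 1024**2, 1024**3

    def slack(n_files, cluster_bytes):
        return n_files * cluster_bytes / 2      # expected wasted space

    # 100,000 small documents:
    for cluster in (4 * KB, 16 * KB, 64 * KB):
        print(f"100k small files, {cluster // KB:2d} KB clusters -> "
              f"~{slack(100_000, cluster) / GB:.1f} GB of slack")

    # 2,000 large video files: the slack is negligible, while the bigger
    # clusters mean fewer extents to track and mostly sequential I/O.
    for cluster in (4 * KB, 16 * KB, 64 * KB):
        print(f"2k video files,   {cluster // KB:2d} KB clusters -> "
              f"~{slack(2_000, cluster) / MB:.0f} MB of slack")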
My laptop has only a single 128GB SSD and a different use case, so I cannot do the same thing, but I still separate it into 3 partitions: C: (80GB OS and programs), T: (8GB temp), and F: (24GB user files). That does a good job of controlling fragmentation without wasting space, and the laptop will be replaced long before I run out of space. It also makes backups much easier, as F: contains the only important data that changes regularly.
What do you mean by "size of main partition small (like 100 GB)"? 100GB is not "small" by most measures. Do you mean "smaller than the whole drive"? And by "main partition", do you mean the partition that houses the filesystem for Windows drive C:? And what is the screenshot you posted supposed to explain? It just shows a full drive... Please edit to clarify. – sleske – 2018-07-18T07:02:31.050
100 GB may not seem small, but these days big software fills this up quite quickly, as in my case. Main partition = Primary Partition = Boot Partition. The screenshot shows my primary partition (drive C:); see that only 24 MB is left. My D: has 90 GB free and E: has 183 GB free. – hk_ – 2018-07-18T07:05:56.497
I took the liberty of editing your question to clarify. In the future, it's best if you directly edit your question to add information - comments may be removed, and not everyone will read them all. – sleske – 2018-07-18T07:29:15.247
Related question: Will you install software on the same partition as Windows system? – sleske – 2018-07-18T07:30:53.517
I would argue these 'experts' were wrong then and are wrong now. I'm sure there are cases where a small C: drive might be/have been useful, but for the majority of normal users (including developers), the default of a single C: partition as large as possible is the best thing to do, precisely because of the problem you have. – Neil – 2018-07-18T11:55:44.280
Keep in mind that you're not always able to change the drive a program is installed to (e.g. MS Office) and you might need more space than initially planned for. – Nijin22 – 2018-07-18T12:58:11.743
"Even if I install software in D: drive, part of it is always copied to C:" - This is certainly not universal - it's mostly an issue when installing shared components like .Net Framework. But those are shared, and therefore copied only when the first installer needs it. – MSalters – 2018-07-18T14:45:58.997
In some servers the primary partition is best kept small, but mostly because data is usually best stored elsewhere, and space used for the OS is space you cannot use for data. But on user computers, adding secondary partitions and trying to force user data to stay on those secondary partitions is unnecessary and introduces too much complexity with too little derived benefit. – music2myear – 2018-07-18T18:59:05.967
The one benefit you mentioned is not true today. Today's hard drives never develop bad sectors (that they tell you about); they reallocate sectors without telling you. A vast majority of drive failures will be complete - not just one partition. To mitigate that concern, make backups, not partitions. As a 3.5-decade hard-drive user, I too have lots of OLD-pain. Old-pain rarely applies to new tech - you must let it go. Let it go. Don't hold back anymore. Let it go. – DanO – 2018-07-19T16:14:29.217
For me, it's just that C: can be reinstalled, where everything on D: needs to be backed up – Mawg says reinstate Monica – 2018-07-20T07:18:23.583
@hk_: We have some computers with 100 GB drives that do very little, but merely having Office 2013 on them has basically filled those drives due to all the updates that have been released. Upgrading to Office 2016 freed up several GB of space ... for now. Bloat is the norm now. Size accordingly. – GuitarPicker – 2018-07-20T16:11:36.380