Is there a reason to keep Windows' primary partition / drive C: small?

71

7

In my jobs almost two decades ago, IT experts would keep the size of Windows' main partition (the C: drive) extremely small compared to the other partitions. They argued this kept the PC running at optimum speed without slowing down.

But the downside is that the C: drive easily fills up if kept small, and soon you can't install new software because it runs out of space. Even if I install software on the D: drive, part of it is always copied to C:, which fills it up.

My question is: is this practice still good? Why is it done, and what are its main advantages, if any? One obvious one is that if the primary partition crashes, your data is safe on the secondary one.

The reason I am asking is that I am trying to update Visual Studio and can't, because I have only 24 MB left on the primary partition.

hk_

Posted 2018-07-18T06:53:48.823

Reputation: 1 878

What do you mean by "size of main partition small (like 100 GB)"? 100GB is not "small" by most measures? Do you mean "smaller than the whole drive"? And by "main partition" do you mean the partition that houses the filesystem for Windows drive C:? And what is the screen shot you posted supposed to explain? It just shows a full drive... Please edit to clarify. – sleske – 2018-07-18T07:02:31.050

100 GB may not seem small, but these days big software fills this up quite quickly, as in my case. Main partition = Primary Partition = Boot Partition. The screenshot shows my primary partition (drive C:); see that only 24 MB is left. My D: has 90 GB free and E: has 183 GB free. – hk_ – 2018-07-18T07:05:56.497

I took the liberty of editing your question to clarify. In the future, it's best if you directly edit your question to add information - comments may be removed, and not everyone will read them all. – sleske – 2018-07-18T07:29:15.247

Related question: Will you install software on the same partition as Windows system?

– sleske – 2018-07-18T07:30:53.517

I would argue these 'experts' were wrong then and are wrong now. I'm sure there are cases where a small C: drive might be/have been useful, but for the majority of normal users (including developers), the default of having a single C: partition of as large as possible is the best thing to do precisely because of the problem you have. – Neil – 2018-07-18T11:55:44.280

Keep in mind that you're not always able to change the drive a program is installed to (e.g. MS Office) and you might need more space than initially planned for. – Nijin22 – 2018-07-18T12:58:11.743

"Even if I install software in D: drive, part of it is always copied to C:" - This is certainly not universal - it's mostly an issue when installing shared components like .Net Framework. But those are shared, and therefore copied only when the first installer needs it. – MSalters – 2018-07-18T14:45:58.997

In some servers the primary partition is best kept small, but mostly because data is usually best stored elsewhere, and space used for the OS is space you cannot use for data. But on user computers adding secondary partitions and trying to force user data to stay on those secondary partitions is unnecessary and introduces too much complexity with too little derived benefit. – music2myear – 2018-07-18T18:59:05.967

The one benefit you mentioned is not true today. Today's hard drives never develop bad sectors (that it tells you about) it reallocates sectors without telling you. A vast majority of drive failures will be complete - not just one partition. To mitigate that concern - make backups not partitions. As 3.5 decade hard-drive user, I too have lots of OLD-pain. Old-pain rarely applies to new tech - you must let it go. Let it go. Don't hold-back anymore. Let it go. – DanO – 2018-07-19T16:14:29.217

For me, it's just that C: can be reinstalled, where everything on D: needs to be backed up – Mawg says reinstate Monica – 2018-07-20T07:18:23.583

@hk_: We have some computers with 100 GB drives that do very little, but merely having Office 2013 on them has basically filled those drives due to all the updates that have been released. Upgrading to Office 2016 freed up several GB of space ... for now. Bloat is the norm now. Size accordingly. – GuitarPicker – 2018-07-20T16:11:36.380

Answers

90

In my jobs almost two decades ago, IT experts would keep the size of Windows' main partition (C drive) extremely small compared to the other partitions. They would argued this runs PC at optimum speed without slowing down. [...] My question is this practice still good?

In general: No.

In older Windows versions, there were performance problems with large drives (more accurately: with large filesystems), mainly because the FAT filesystem used by Windows did not support large filesystems well. However, all modern Windows installations use NTFS instead, which solved these problems. See for example Does NTFS performance degrade significantly in volumes larger than five or six TB?, which explains that even terabyte-sized partitions are not usually a problem.

Nowadays, there is generally no reason not to use a single, large C: partition. Microsoft's own installer defaults to creating a single, large C: drive. If there were good reasons to create a separate data partition, the installer would offer it - why should Microsoft let you install Windows in a way that creates problems?

The main reason against multiple drives is that it increases complexity - which is always bad in IT. It creates new problems, such as:

  • you need to decide which files to put onto which drive (and change settings appropriately, click stuff in installers etc.)
  • some (badly written) software may not like being put onto a drive other than C:
  • you can end up with too little free space on one partition, while the other still has free space, which can be difficult to fix
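
To illustrate that last point, here is a rough Python sketch (standard library only) that lists free space per drive letter, which is how the "C: is full while D: has plenty left" situation typically shows up:

    # Rough sketch: print total/free space per mounted drive letter, to spot
    # the "one partition full, another one mostly empty" situation.
    import shutil
    import string

    for letter in string.ascii_uppercase:
        root = f"{letter}:\\"
        try:
            usage = shutil.disk_usage(root)   # raises OSError if the drive doesn't exist
        except OSError:
            continue
        free_gb = usage.free / 1024**3
        total_gb = usage.total / 1024**3
        print(f"{root} {free_gb:7.1f} GB free of {total_gb:7.1f} GB")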

There are some special cases where multiple partitions still make sense:

  • If you want to dual-boot, you (usually) need separate partitions for each OS install (but still only one partition per install).
  • If you have more than one drive (particularly drives with different characteristics, such as SSD & HD), you may want to pick and choose what goes where - in that case it can make sense to e.g. put drive C: on the SSD and D: on the HD.

To address some arguments often raised in favor of small/separate partitions:

  • small partitions are easier to backup

You should really back up all your data anyway, so splitting it across partitions does not really help. Also, if you really need to do it, all backup software I know of lets you selectively back up a part of a partition.

  • if one partition is damaged, the other partition may still be ok

While this is theoretically true, there is no guarantee damage will nicely limit itself to one partition (and it's even harder to check and make sure of this in case of problems), so this provides only a limited guarantee. Plus, if you have good, redundant backups, the added safety is usually too small to be worth the bother. And if you don't have backups, you have much bigger problems...

  • if you put all user data on a data partition, you can wipe and reinstall / not backup the OS partition because there is no user data there

While this may be true in theory, in practice many programs will write settings and other important data to drive C: (because they are unfortunately hardcoded to do that, or because you accidentally forgot to change their settings). Therefore IMHO it is very risky to rely on this. Plus, you need good backups anyway (see above), so after reinstallation you can restore the backups, which will give you the same result (just more safely). Modern Windows versions already keep user data in a separate directory (user profile directory), so selectively restoring is possible.
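
As a minimal sketch of that idea (Python, standard library; the folder list is an assumption -- adjust it to wherever your data actually lives, and note that real backup software handles locked files, links, etc. far more robustly):

    # Selectively copy the data folders from the user profile to a backup
    # location, instead of imaging the whole OS partition.
    import os
    import shutil

    BACKUP_ROOT = r"D:\Backup\Profile"                            # hypothetical destination
    FOLDERS = ["Documents", "Pictures", "Desktop", "Downloads"]   # assumed data folders

    profile = os.environ["USERPROFILE"]                           # e.g. C:\Users\yourname
    for name in FOLDERS:
        src = os.path.join(profile, name)
        if os.path.isdir(src):
            # dirs_exist_ok needs Python 3.8+; copy2 preserves timestamps
            shutil.copytree(src, os.path.join(BACKUP_ROOT, name),
                            copy_function=shutil.copy2, dirs_exist_ok=True)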


See also Will you install software on the same partition as Windows system? for more information.

sleske

Posted 2018-07-18T06:53:48.823

Reputation: 19 887

I think I would stick with this answer because 1) my other Windows 10 laptop with an SSD (one drive) is super fast and is not slowing down a bit. 2) Adding partitions increases complexity, which becomes a problem down the road. – hk_ – 2018-07-18T08:08:54.790

multiple partitions also make sense if you want to move user data off of C:\, uncoupling it from (m)any issues with the OS. You could then, say, install another OS in its place and your user data remains safe (as long as the user partition isn't deleted) – Baldrickk – 2018-07-18T09:27:21.673

@Baldrickk: I don't think this is realistic; thanks for pointing it out, updated my answer. – sleske – 2018-07-18T10:01:57.660

You may want to perform an image backup for your system partition while a simple data backup for all other partitions is sufficient. – Thomas – 2018-07-18T12:59:02.737

The rest of your answer is good, but "the installer lets you do it" is not a valid way to check if something is good or not. Why would they do it? Forgetfulness, stupidity, laziness, ignorance, technical limitations, middle management... There are a million reasons the installer might do the wrong thing by default, let alone just letting you do the wrong thing. That applies doubly to a system as complex as Windows. – Fund Monica's Lawsuit – 2018-07-18T16:24:46.843

A very minor benefit to splitting up a drive into multiple partitions can arise if the other partition is only used for "write-once" data (installed programs outside of the OS, downloaded media, etc.), while the C drive holds all your caches, log files, repeatedly written user profile data, etc. The write-once drive doesn't require defragmentation as often, while the "constant writes" C drive has less unchanging data that it might have to move out of the way when defragmenting. – ShadowRanger – 2018-07-18T16:37:00.570

In theory, all that write-once stuff could eventually be in blocks that never require eviction during defrag, but I'm not confident the defragger knows the difference between frequently written and never written files; if it defragments such that there is an unchanging 1 GB file, then a 10 MB log file, then another 1 GB file (with no gaps between them), when the log expands it has to either move elsewhere (and the defragger then has to move down the 1 GB file during free space consolidation) or the 1 GB file must be moved out of the way to make room for the rest of the log. – ShadowRanger – 2018-07-18T16:39:26.783

Again, this is a very minor consideration; defragmentation time is hardly a major concern. Since it regularly occurs in the background, the user rarely notices, fragmentation rarely gets bad enough to affect performance before the next scheduled run, etc. The D drive of big static stuff optimizes for a case that mostly doesn't matter, but it is an optimization of sorts. – ShadowRanger – 2018-07-18T16:42:17.513

I remember WinXP and Win 7 allowing the user to create multiple partitions and set the size of partitions during installation. I don't remember if this is the case anymore, though. MS is known for making bad decisions, seemingly on a whim, and supporting them for years before making another bad decision to "fix" the (non)issue. Also, technical/art users that need to manipulate large files locally often need more drives to handle the work. And: "increases complexity - which is always bad in IT", is only half right, in general. My reasoning is off topic, so I'll leave it be. – computercarguy – 2018-07-18T18:38:06.327

I'd probably point out that "older Windows versions" in this case means OSes from 20 years ago, and superseded 15 years ago. Those older versions won't even run on modern hardware, let alone perform well. – Bob – 2018-07-19T06:06:34.340

@computercarguy That's always been the case (even on Windows 95, while DOS came with the tool to do it), and still is. As for needing more drives - that's very different from creating multiple partitions on the same drive. And also comes with its own set of pros and cons. – Luaan – 2018-07-19T06:37:24.257

I'd add that keeping partitions under a TB helps keep full backups within a reasonable time frame (when the backup time goes over 6 hours, life starts to get hard) – Tensibai – 2018-07-19T08:10:58.357

@NicHartley: I didn't write that "the installer lets you do it", but that it is the default. Absent more detailed information, you can usually assume that the default is a sensible choice for most typical situations. Yes, there may be other reasons for a default (saving money, marketing), but it is an indication. – sleske – 2018-07-19T08:57:12.497

If you have multiple OS it makes sense to make a data partition that is used to transfer files between/make files accessible from the different OS – Jungkook – 2018-07-19T12:05:38.550

2"why should Microsoft let you install Windows in a way that creates problems?" Because you assume Microsoft always let you, by default, use the best and optimized option? – Jonathan Drapeau – 2018-07-19T13:10:51.717

1@sleske "why should Microsoft let you install Windows in a way that creates problems?" -- And, again, the exact same logic applies to defaults, as I said in my original comment. Windows is an incredibly complex piece of software with a long, complicated history; there's no guarantee that somewhere along the way someone messed something up and now it's technically infeasible to implement whatever restrictions would be necessary. And, frankly, look at it -- there are so many corner cases and odd leftovers in Windows that it's silly to assume there are none in the installer. – Fund Monica's Lawsuit – 2018-07-19T16:14:11.557

I have used multiple partitions, including a small dedicated Windows partition, out of habit for years, and I can confirm: recent versions of Windows have been decidedly antagonistic towards that choice. – KRyan – 2018-07-19T17:04:50.443


"if one partition is damaged, the other partition may still be ok": back in the day, the main 'damage' was the del *.* user, not disk fault damage. Obviously backups, but again, 'back in the day'. :-)

– mcalex – 2018-07-19T20:40:30.517

"...If there were good reasons to create a separate data partition, the installer would offer it..." Keeping a separate data partition for the main user folders would be preferred, making backups & restoring more efficient. The main data folders from %UserProfile% can be moved to a separate partition via the Folder Properties Location tab or via regedit: HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders. It wouldn't be recommended, and would never be advised, for any other folders (%ProgramData%, %ProgramFiles%, etc), which could be moved via symbolic links. – JW0914 – 2018-07-20T15:15:01.873

"why should Microsoft let you install Windows in a way that creates problems?" -- why shouldn't it? As long as it's your problems and not theirs... – ivan_pozdeev – 2018-08-02T07:14:34.833

24

The historical reason for this practice is most likely rooted in the performance properties of rotating magnetic HDDs. The area on spinning disks with the highest sequential access speed are the outermost sectors (near the start of the drive).

If you use the whole drive for your operating system, sooner or later (through updates etc) your OS files would be spread out all over the disk surface. So, to make sure that the OS files physically stay in the fastest disk area, you would create a small system partition at the beginning of the drive, and spread the rest of the drive in as many data partitions as you like.

Seek latency also partly depends on how far the heads have to move, so keeping all the small files somewhat near each other also has an advantage on rotational drives.
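
If you want to see whether this still matters on your own rotational drive, here is a rough benchmark sketch (Python on Windows, run as Administrator; the device path and disk size are placeholders, reads from a raw device must stay sector-aligned, and OS caching can skew short runs):

    # Compare sequential read throughput near the start (outer tracks) vs. near
    # the end (inner tracks) of a rotational disk.
    import time

    DEVICE = r"\\.\PhysicalDrive0"     # hypothetical drive - double-check which disk this is!
    DISK_SIZE = 1_000_000_000_000      # assumed ~1 TB; set this to your drive's real size
    CHUNK = 1024 * 1024                # 1 MiB per read, keeps offsets sector-aligned
    TOTAL = 256 * CHUNK                # read 256 MiB per region

    def throughput_mb_s(offset: int) -> float:
        with open(DEVICE, "rb", buffering=0) as disk:
            disk.seek(offset)
            start = time.perf_counter()
            done = 0
            while done < TOTAL:
                data = disk.read(CHUNK)
                if not data:           # reached the end of the device
                    break
                done += len(data)
            return done / (time.perf_counter() - start) / 1e6

    end_offset = (DISK_SIZE - TOTAL) // CHUNK * CHUNK
    print(f"start of disk: {throughput_mb_s(0):7.1f} MB/s")
    print(f"end of disk  : {throughput_mb_s(end_offset):7.1f} MB/s")

On an SSD the two numbers should come out essentially identical, which is the point of the next paragraph.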

This practice has lost all its reason with the advent of SSD storage.

WooShell

Posted 2018-07-18T06:53:48.823

Reputation: 429

Besides losing traction due to SSDs, newer SATA and SAS drives spin much faster (7200 and +10k rpm vs 5400 rpm) and have faster seek times than old drives. While there's still minimal truth to the "speed issue", not many people using today's hardware would actually notice a speed increase using a small partition without benchmarking software. – computercarguy – 2018-07-18T18:04:30.420

6"the highest access speed are the innermost sectors.", "So, to make sure that the OS files physically stay in the fastest disk area, you would create a small system partition at the beginning of the drive" -- No, you have it backwards, just like this question: https://superuser.com/questions/643013/are-partitions-to-the-inner-outer-edge-significantly-faster. Also your claim of "historical reason" is dubious. Until ATAPI drives started using zone bit recording, all prior HDDs used constant angular recording, so there was no speed difference to speak of between cylinders. – sawdust – 2018-07-18T19:34:47.670

@sawdust, the historical reason was not read latency, but seek latency. The FAT was stored at the lowest track address and so the disk head would have to return there at least every time a file was created or extended. With 1980s disks, the performance difference was measurable. One of the design features of HPFS (ancestor of NTFS) was to put the MFT in the middle of the partition to reduce average seek times. Of course, once you put multiple partitions on a disk, you have multiple FAT / MFT zones so seek is no longer optimized. – grahamj42 – 2018-07-18T21:19:55.763

@grahamj42 -- No you're still wrong. The seek time is purely a (non-linear) function of the seek distance, which is measured in the number of cylinders. Seek time is completely unrelated to inner versus outer cylinders as you claim. Not sure what you mean by "read latency". FYI I have first-hand experience with disk drive internals, disk controllers, and disk drive firmware, i.e. this knowledge was not learned from just reading. And I was professionally active before and after this "history" you refer to was being made. – sawdust – 2018-07-19T01:10:31.013

@sawdust: I think graham's point was that making a small OS partition kept most of the data that's frequently seeked to in better proximity, for lower average seek distance exactly like you're saying. (I edited this answer to fix the issues raised in comments, instead of just complaining about it along with everyone else :P) – Peter Cordes – 2018-07-19T01:11:37.760

@PeterCordes -- Your edits do remove some of the misconceptions. But there's still a bogus claim that there's a special " fastest disk area". With constant angular-speed recording, the HDD has no such "area". – sawdust – 2018-07-19T01:20:45.993

@sawdust: Ok, so that part of the reasoning doesn't apply for ancient drives like that, but it applies to all rotational HDDs now. This answer is talking about reasons it might still apply, not just why this advice originally existed (seek latency, and later in the Win9x days: VFAT sucks with large filesystems where it has to use a large cluster size). The question is basically asking if the advice still applies today, so discussing modern HDDs makes sense to me. (Perhaps you're objecting to the phrase "classic HDDs". I think they meant modern rotational HDDs; fixed that too.) – Peter Cordes – 2018-07-19T01:31:04.097

@sawdust - I'm sorry that you misunderstood me and the size of a comment restricted what I could write. On a FAT partition, the head must reposition to the FAT every time it's updated, or if the FAT sector to be read is no longer cached, therefore the probability of finding the head in an outer cylinder is much higher than finding it in an inner cylinder, which means the average seek time is less. I am talking about the stepper-motor drives of the early 80s (the ST-412 had 85ms average seek time but 3ms for one cylinder). A modern drive is about 10x faster. – grahamj42 – 2018-07-19T14:18:06.440

@sawdust You have more data per track on the outside and the least on the innermost tracks. The statistical "average seek distance" is not in number of tracks, but number of gigabytes, which means for outside tracks the average seek distance is lower. – gnasher729 – 2018-07-21T17:30:49.457

5

Is there a reason to keep Windows' primary partition / drive C: small?

Here are a few reasons to do that:

  1. All system files and the OS itself are on the primary partition. It is better to keep those files separated from other software, personal data and files, simply because constantly meddling in the bootable partition and mixing your files there might occasionally lead to mistakes, like deleting system files or folders by accident. Organization is important. This is why the size of the primary partition is low -- to discourage users from dumping all their data in there.
  2. Backups - it's a lot easier, faster, and more effective to back up and recover a smaller partition than a bigger one, depending on the purpose of the system. As noted by @computercarguy in the comments, it is better to back up specific folders and files than to back up a whole partition, unless needed.
  3. It could improve performance, however, in a hardly noticeable manner. On NTFS filesystems, there is a so-called Master File Table on each partition, which contains metadata about all the files on the partition:

    Describes all files on the volume, including file names, timestamps, stream names, and lists of cluster numbers where data streams reside, indexes, security identifiers, and file attributes like "read only", "compressed", "encrypted", etc.

This might introduce an advantage, though an unnoticeable one, so it can be ignored, as it really doesn't make a difference. @WooShell's answer is more related to the performance issue, even though it is still negligible.

Another thing to note is that if you have an SSD + HDD, it is way better to store your OS on the SSD and all your personal files/data on the HDD. You most likely won't need the performance boost of an SSD for most of your personal files, and consumer-grade solid-state drives usually do not have much space on them, so you'd rather not fill them up with personal files.

Can someone explain why this practice is done and is it still valid?

I have described some of the reasons why it is done. And yes, it is still valid, though it no longer seems to be good practice. The most notable downsides are that end users have to keep track of where applications suggest installing their files and change that location (possible during almost any software installation, especially if an expert/advanced install is an option) so the bootable partition doesn't fill up, since the OS does need to update at times. Another downside is that when copying files from one partition to another, the data actually has to be copied, whereas within the same partition only the MFT and the metadata are updated, without writing the whole files again.
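
To make the copy-vs-move point concrete, a small Python sketch (the paths are made up -- substitute files that actually exist on your machine):

    # Within one volume, "moving" a file is just a metadata (MFT) update and is
    # effectively instant. Across volumes, every byte has to be rewritten.
    import os
    import shutil

    src = r"C:\Temp\bigfile.bin"             # hypothetical existing file
    same_volume = r"C:\Temp\moved.bin"       # target on the same volume
    other_volume = r"D:\Data\bigfile.bin"    # target on a different volume

    os.rename(src, same_volume)              # instant, regardless of file size

    try:
        os.rename(same_volume, other_volume)     # fails: rename cannot cross volumes
    except OSError:
        shutil.move(same_volume, other_volume)   # falls back to copy + delete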

Some of these unfortunately can introduce more problems:

  1. It does increase the complexity of the structure, which makes it harder and more time-consuming to manage.
  2. Some applications still write files/metadata to the system partition (file associations, context menus, etc.), even if installed on another partition, which makes it harder to back up and might introduce failures in syncing between partitions (thanks to @Bob's comment).

To avoid the problem you're having, you need to:

  1. Always try to install applications on the other partitions (change the default installation location).
  2. Make sure to install only important software in your bootable partition. Other not-so-needed and unimportant software should be kept outside of it.

I am also not saying that having multiple partitions with a small primary one is the best idea. It all depends on the purpose of the system, and although it introduces a better way to organize your files, it comes with its downsides, which on today's Windows systems outweigh the pros.

Note: And as you've mentioned yourself, it does keep the data that is on separate partitions safe in case a failure of the bootable partition occurs.

Fanatique

Posted 2018-07-18T06:53:48.823

Reputation: 3 475

1

"Therefore, a smaller partition with a smaller Master File Table will perform faster lookups and will improve the performance of a hard drive." Citation required - https://serverfault.com/questions/222110/does-ntfs-performance-degrade-significantly-in-volumes-larger-than-five-or-six-t for example indicates that even huge volumes are no problem nowadays.

– sleske – 2018-07-18T08:05:53.673

Added a reference and also reworded to make it fully correct. I do not see how that is an example, so I'll stick to my reference. – Fanatique – 2018-07-18T08:12:48.923

1"Of course, a lot of software developers have the bad practice to suggest end-users to install their application in the primary partition. But generally, it is not a good idea." - citation needed – gronostaj – 2018-07-18T08:30:10.797

5"As Master File Tables store information about all files on a partition, when performing any actions on any of the files on that partition, it looks through the whole table." Oh, it absolutely does not. Such a statement can only be made in near-complete ignorance of how file systems work and how disk storage organization is done in general. If this were true then the performance of any operation involving the MFT would degrade linearly (or worse!) with the size of the MFT, and that flatly does not happen. – Jamie Hanrahan – 2018-07-18T10:14:26.330

@Fanatique I have upvoted your answer; at least it is the traditional answer which hits all the points, but I wonder if those points are true anymore by today's standards. Thx – hk_ – 2018-07-18T10:51:58.590

This answer is a mix of historical and current reasons to use the small partitions, and most of them aren't valid. Backups should be done only on user data, not the whole HD, and most (if not all) current backup software will allow you to specify which folders to backup, even which file types to backup. Organization is definitely key, but most OSs automatically separate user files from OS files by supplying a User folder. On a home based system, I'd say have multiple partitions, but not on a business system. Too much overhead for the IT staff, which is probably overworked anyway. – computercarguy – 2018-07-18T18:13:00.357

1"since the OS does not perform as many read/write operations as other software". If your OS is Windows, it's performing a lot of R/W operations on the install disk. Lots of them. If you're using IE or Edge, it's installed as part of Windows (don't think you can specify where it's installed, though I may be wrong), and that's caching a ton of things as I'm writing this comment. – FreeMan – 2018-07-18T19:09:19.100

@FreeMan Browser caches are always user-specific and go inside the user profile. The user profile does default to the system partition but can be easily moved if desired. That said, there's hardly ever a good reason to do so. – Bob – 2018-07-19T06:11:31.823

2"It is better to keep those [OS] files seperated from other software" — No. The problem with this approach is a lot of software maintains installation status (and uninstallers) with the OS. A lot of software also registers themselves with various parts of the OS - file associations, context menu entries, preview renderers, etc.. Installing the software itself in a separate partition does not prevent these; it merely makes them spread across two partitions. Which is, if anything, worse than keeping them together — you've now added the risk of the two partitions becoming out of sync in backups. – Bob – 2018-07-19T06:17:17.050

@Bob And worse, applications that try to avoid that problem and re-register everything on startup keep the "everything runs as administrator" approach alive. Installation and normal program operation are two separate things and should be treated that way. While a lot of software can keep operating if you reinstall the OS under their hands, it's also a great way to introduce subtle issues to the whole system. Installing on system drive by default is a default for a reason. As for moving user profiles - easy in theory, but every time I tried, it just introduced loads of issues. Not worth it. – Luaan – 2018-07-19T06:48:25.510

@computercarguy I agree with you, it is too much overhead for IT staff. But despite having a User directory, in my ~10 years of experience with managing/fixing other people's machines, I've never ever seen a Windows user put all their personal files and data in the User directory. But you're right that Windows provides a way to do that, and whether users really use it or not is beside the topic. – Fanatique – 2018-07-19T07:59:28.567

@Bob Thank you for the comment. It is true for me that after 19+ years of using Windows, I've never had my personal files and data on the system partition and always used multiple partitions. I have never come across the problem you're explaining. Those file associations, they don't care whether the app is installed or not. During installation, the software creates those file associations, and it does not matter anymore to the software whether they exist or not, they're taken care of by the OS. Nonetheless, it is possible, so I've referred to your comment in my answer. – Fanatique – 2018-07-19T08:04:54.190

@Fanatique - it is possible to have a C: partition with just boot files and Win 7 and/or Win 10 installed on other partitions, without them changing their partition letters also to C:, so Win 7 could be on D:, Win 10 on E:, ... . What I did was install Win 7 Pro 64 bit from XP X64, in which case it won't change any partition letters. For Win 10, I installed a second (purchased) copy of Win 7 in the same manner, and then "upgraded" to Win 10 (where again it won't change partition letter). Microsoft has a simpler method to accomplish this, but they seem unwilling to explain how. – rcgldr – 2018-07-19T08:52:29.987

@Fanatique - continuing, most apps will default the install partition based on the environment variable "homedrive" which will be the partition letter for Win 7 or Win 10 OS. I have yet to see any app that defaults to C: instead of "homedrive", but such an app could exist. Most, but not all apps will let you change which partition they will install into. Side note - Win XP or Win XP X64 install used a default partition lettering scheme (based on BIOS) and did not change their install partition letters to C. – rcgldr – 2018-07-19T08:55:10.273

I am surprised that nobody here mentioned using portable apps on a separate partition. That way, you can easily back up your programs and do not need to worry about registry settings, uninstallers, etc. The only real downside might be that there is no portable version of the program you want to use. – kristjan – 2018-07-19T21:21:57.167

4

I'm a software developer, but also have spent time doing "regular" / back-office IT work. I typically keep the OS and applications on drive C:, and my personal files on drive D:. These don't necessarily need to be separate physical drives, but currently I am using a relatively small SSD as my "system" drive (C:) and a "traditional" disk drive (i.e. with rotating magnetic platters) as my "home" drive (D:).

All filesystems are subject to fragmentation. With SSDs this is basically a non-issue, but it is still an issue with traditional disk drives.

I have found that fragmentation can significantly degrade system performance. For example, I've found that a full build of a large software project improved by over 50% after defragmenting my drive -- and the build in question took the better part of an hour, so this was not a trivial difference.

Keeping my personal files on a separate volume means, I have found, that:

  • the system volume doesn't get fragmented nearly as quickly (or severely);
  • it is much faster to defragment the two separate volumes than a single volume with everything on it -- each volume takes 20%-25% as long as the combined volume would.

I've observed this on several generations of PCs, with several versions of Windows.

(As a commenter pointed out, this also tends to facilitate making backups.)

I should note that the development tools I use tend to generate a large number of temporary files, which seem to be a significant contributor to the fragmentation issue. So the severity of this issue will vary according to the software you use; you may not notice a difference, or as much of one. (But there are other activities -- for example video / audio composition and editing -- which are I/O intensive, and depending on the software used, may generate large numbers of temporary / intermediate files. My point being, don't write this off as something that only affects one class of users.)

Caveat: with newer versions of Windows (from 8 onward), this has become much more difficult, because user folders on a volume other than C: are no longer officially supported. I can tell you that I was unable to perform an in-place upgrade from Windows 7 to Windows 10, but YMMV (there are a number of different ways to [re]locate a user folder, I don't know which are affected).

One additional note: if you maintain two separate volumes on a traditional drive, you may want to set up a page file on the D: volume. For the reasons described in WooShell's answer, this will reduce seek time when writing to the page file.
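
If you are unsure where your page file(s) currently live, here is a small Python sketch that reads the usual registry value (assuming the standard "PagingFiles" setting under Memory Management is in place):

    # Read the configured page file locations from the registry.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        paging_files, _value_type = winreg.QueryValueEx(key, "PagingFiles")

    # Each entry typically looks like "C:\pagefile.sys <min MB> <max MB>";
    # "0 0" usually means a system-managed size.
    for entry in paging_files:
        print(entry)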

David

Posted 2018-07-18T06:53:48.823

Reputation: 391

For me, it's just that C: can be reinstalled, where everything on D: needs to be backed up – Mawg says reinstate Monica – 2018-07-20T07:18:13.407

With SSDs, metadata fragmentation is still an issue 'cuz it means more read/write commands for the same job. Though the impact is still incomparably smaller than for rotational drives. – ivan_pozdeev – 2018-07-20T15:27:06.310

4

Short answer: Not any more.

In my experience (20+ years of IT adminship work), the primary reason for this practice (others are listed below) is that users basically didn't trust Windows with their data and hard drive space.

Windows has long been notoriously bad at staying stable over time, cleaning after itself, keeping the system partition healthy and providing convenient access to user data on it. So users preferred to reject the filesystem hierarchy that Windows provided and roll their own outside of it. The system partition also acted as a ghetto to deny Windows the means to wreak havoc outside of its confines.

  • There are lots of products, including those from Microsoft, that don't uninstall cleanly and/or cause compatibility and stability issues (the most prominent manifestation is leftover files and registry entries all around and DLL Hell in all of its incarnations). Many files created by the OS are not cleaned up afterwards (logs, Windows updates etc.), leading to the OS taking up more and more space as time goes on. In the Windows 95 and even XP era, advice went as far as suggesting a clean reinstall of the OS once in a while. Reinstalling the OS required an ability to guarantee wiping the OS and its partition (to also clean up any bogus data in the filesystem) -- impossible without multiple partitions. And splitting the drive without losing data is only possible with specialized programs (which may have their own nasty surprises like bailing out and leaving data in an unusable state upon encountering a bad sector). Various "clean up" programs alleviated the problem, but, their logic being based on reverse engineering and observed behaviour, they were even more likely to cause a major malfunction that would force a reinstall (e.g. the RegClean utility by MS itself was called off after the Office 2007 release broke assumptions about the registry that it was based on). The fact that many programs saved their data into arbitrary places made separating user and OS data even harder, making users install programs outside of the OS hierarchy as well.
    • Microsoft tried a number of ways to enhance stability, with varying degrees of success (shared DLLs, Windows File Protection and its successor TrustedInstaller, Side-by-side subsystem, a separate repository for .NET modules with storage structure that prevents version and vendor conflicts). The latest versions of Windows Installer even have rudimentary dependency checking (probably the last major package manager in general use to include that feature).
    • With regard to 3rd-party software compliance with best practices, they maneuvered between maintaining compatibility with sloppily-written but sufficiently used software (otherwise, its users would not upgrade to a new Windows version) -- which led to a mind-boggling amount of kludges and workarounds in the OS, including undocumented API behavior, live patching of 3rd-party programs to fix bugs in them and a few levels of registry and filesystem virtualization -- and forcing 3rd-party vendors into compliance with measures like a certification logo program and a driver signing program (made compulsory starting with Vista).
  • User data being buried under a long path under the user's profile made it inconvenient to browse for and specify paths to it. The paths also used long names, had spaces (a bane of command shells everywhere) and national characters (a major problem for programming languages except very recent ones that have comprehensive Unicode support) and were locale-specific (!) and unobtainable without winapi access (!!) (killing any internationalization efforts in scripts), all of which didn't help matters, either.
    So having your data in the root dir of a separate drive was seen as a more convenient data structure than what Windows provided.
    • This was only fixed in very recent Windows releases. Paths themselves were fixed in Vista, compacting long names, eliminating spaces and localized names. The browsing problem was fixed in Win7 that provided Start Menu entries for both the root of the user profile and most other directories under it and things like persistent "Favorite" folders in file selection dialogs, with sensible defaults like Downloads, to save the need to browse for them each time.
  • All in all, MS' efforts bore fruit in the end. Roughly since Win7, the OS, stock and 3rd-party software, including cleanup utilities, are stable and well-behaved enough, and HDDs large enough, for the OS to not require reinstallation for the entirety of a typical workstation's life. And the stock hierarchy is usable and accessible enough to actually accept and use it in day-to-day practice.

Secondary reasons are:

  • Early software (filesystem and partitioning support in BIOS and OSes) was lagging behind hard drives in supporting large volumes of data, necessitating splitting a hard drive into parts to be able to use its full capacity.
    • This was primarily an issue in DOS and Windows 95 times. With the advent of FAT32 (Windows 98) and NTFS (Windows NT 3.1), the problem was largely solved for the time being.
    • The 2TB barrier that emerged recently was fixed by the recent generation of filesystems (ext4 and recent versions of NTFS), GPT and 4k disks.
  • Various attempts to optimize performance. Rotational hard drives are slightly (about 1.5 times) faster at reading data from outer tracks (which map to the starting sectors) than the inner, suggesting locating frequently-accessed files like OS libraries and pagefile near the start of the disk.
    • Since user data is also accessed very often and head repositioning has an even larger impact on performance, outside of very specific workloads, the improvement in real-life use is marginal at best.
  • Multiple physical disks. This is a non-typical setup for a workstation since a modern HDD is often sufficiently large by itself and laptops don't even have space for a 2nd HDD. Most if not all stations I've seen with this setup are desktops that (re)use older HDDs that are still operational and add up to the necessary size -- otherwise, either a RAID should be used, or one of the drives should hold backups and not be in regular use.
    • This is probably the sole case where one gets a real gain from splitting system and data into separate volumes: since they are physically on different hardware, they can be accessed in parallel (unless it's two PATA drives on the same cable) and there's no performance hit on head repositioning when switching between them.
      • To reuse the Windows directory structure, I typically move C:\Users to the data drive. Moving just a single profile or even just Documents, Downloads and Desktop proved to be inferior 'cuz other parts of the profile and Public can also grow uncontrollably (see the "separate configuration and data" setup below).
    • Though the disks can be consolidated into a spanned volume, I don't use or recommend this because Dynamic Volumes are a proprietary technology that 3rd-party tools have trouble working with and because if any of the drives fails, the entire volume is lost.
  • An M.2 SSD + HDD.
    • In this case, I rather recommend using SSD solely as a cache: this way, you get the benefit of an SSD for your entire array of data rather than just some arbitrary part of it, and what is accelerated is determined automagically by what you actually access in practice.
    • In any case, this setup in a laptop is inferior to just a single SSD 'cuz HDDs are also intolerant to external shock and vibration which are very real occurrences for laptops.
  • Dual boot scenarios. Generally, two OSes can't coexist on a single partition. This is the only scenario that I know of that warrants multiple partitions on a workstation. And use cases for that are vanishingly rare nowadays anyway because every workstation is now powerful enough to run VMs.
  • On servers, there are a number of other valid scenarios -- but none of them applies to Super User's domain.
    • E.g. one can separate persistent data (programs and configuration) from changing data (app data and logs) to prevent a runaway app from breaking the entire system. There are also various special needs (e.g. in an embedded system, persistent data often resides on a EEPROM while work data on a RAM drive). Linux's Filesystem Hierarchy Standard lends itself nicely to tweaks of this kind.

ivan_pozdeev

Posted 2018-07-18T06:53:48.823

Reputation: 1 468

3

Nearly two decades ago, the landscape was dominated by the range of Windows 98 through XP, with NT4 and 2000 on the workstation/server side.

All hard drives would also be PATA or SCSI cabled magnetic storage, as SSDs cost more than the computer, and SATA did not exist.

As WooShell's answer says, the lower logical sectors on the drive (on the outside of the platter) tend to be the fastest. My 1 TB WDC VelociRaptor drives start out at 215 MB/s, but drop down to 125 MB/s at the inner sectors, a 40% drop. And this is a drive with 2.5" platters, so most 3.5" drives generally see an even larger drop in performance, greater than 50%. This is the primary reason for keeping the main partition small, but it only applies where the partition is small relative to the size of the drive.

The other main reason to keep the partition small was if you were using FAT32 as the file system, which Windows (from 2000 onward) will not format at more than 32 GB. If you were using NTFS, partitions up to 2 TB were supported prior to Windows 2000, then up to 256 TB.

If your partition was too small relative to the amount of data that would be written, it was easier to get fragmented, and more difficult to defragment. Or you could just straight up run out of space, like what happened to you. If you had too many files relative to the partition and cluster sizes, managing the file table could be problematic, and it could affect performance. If you are using dynamic volumes for redundancy, keeping the redundant volumes as small as necessary will save space on the other disks.

Today things are different, client storage is dominated by flash SSDs or flash accelerated magnetic drives. Storage is generally plentiful, and it is easy to add more to a workstation, whereas in the PATA days, you might have only had a single unused drive connection for additional storage devices.

So is this still a good idea, or does it have any benefit? That depends on the data you keep and how you manage it. My workstation C: is only 80GB, but the computer itself has well over 12TB of storage, spread across multiple drives. Each partition only contains a certain type of data, and the cluster size is matched to both the data type and the partition size, which keeps fragmentation near 0, and keeps the MFT from being unreasonably large.
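
For reference, the cluster size a volume was actually formatted with can be read via the Win32 GetDiskFreeSpaceW call; here is a quick Python/ctypes sketch (the drive letter is just an example):

    # Query a volume's cluster size (bytes per cluster).
    import ctypes

    def cluster_size(root: str) -> int:
        sectors_per_cluster = ctypes.c_ulong(0)
        bytes_per_sector = ctypes.c_ulong(0)
        free_clusters = ctypes.c_ulong(0)
        total_clusters = ctypes.c_ulong(0)
        ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
            ctypes.c_wchar_p(root),
            ctypes.byref(sectors_per_cluster),
            ctypes.byref(bytes_per_sector),
            ctypes.byref(free_clusters),
            ctypes.byref(total_clusters),
        )
        if not ok:
            raise ctypes.WinError()
        return sectors_per_cluster.value * bytes_per_sector.value

    print(cluster_size("C:\\"), "bytes per cluster")   # e.g. 4096 on a default NTFS format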

The downside is that there is unused space, but the performance increase more than compensates, and if I want more storage I add more drives. C: contains the operating system and frequently used applications. P: contains less commonly used applications, and is a 128 GB SSD with a lower write durability rating than C:. T: is on a smaller SLC SSD, and contains user and operating system temporary files, including the browser cache. Video and audio files go on magnetic storage, as do virtual machine images, backups, and archived data; these generally have 16 KB or larger cluster sizes, and read/writes are dominated by sequential access. I run defrag only once a year on partitions with high write volume, and it takes about 10 minutes to do the whole system.

My laptop only has a single 128 GB SSD and a different use case, so I cannot do the same thing, but I still separate it into 3 partitions, C: (80 GB OS and programs), T: (8 GB temp), and F: (24 GB user files), which does a good job of controlling fragmentation without wasting space, and the laptop will be replaced long before I run out of space. It also makes it much easier to back up, as F: contains the only important data that changes regularly.

Richie Frame

Posted 2018-07-18T06:53:48.823

Reputation: 1 555

However, as my question says, our IT experts "still" do it, and I am encountering the same problem right now. – hk_ – 2018-07-19T05:41:18.953

Note that since Vista, Windows automatically keeps the often accessed system files at the outer sectors (if possible) anyway. This is one of the main reasons why startup speeds increased so much - the whole boot sequence including startup applications usually reads sequentially from the drive at the place where it's the fastest. Unless you botched the partitions, of course :P SSDs still help immensely, though - even just for their higher throughput. – Luaan – 2018-07-19T06:55:20.100

@hk_ just because your IT expert is still doing things out of habit doesn't mean that the foundational truths for that habit from 20 or 15 (or even 5) years ago are still true today. If you were British and continued to drive on the left side of the road out of habit when you crossed to continental Europe, you would be in a world of hurt. i.e. "because I've always done it that way" isn't a good reason to continue to do something. – FreeMan – 2018-07-19T15:01:32.480

@FreeMan I am not supporting it. I had doubts hence my question. – hk_ – 2018-07-19T15:11:29.723

3

I used to do some IT work, and here is what I know and remember.

In the past, as others have said there was a real benefit to having a small C partition on the start of the disk. Even today in some lower end laptops this could still be true. Essentially by having a smaller partition, you have less fragmentation and by keeping it at the start of the disk you have better seek and thus read times. This is still valid today with laptops (usually) and slower "green" hard drives.

Another great benefit that I still use today is having "data" and "OS" on separate drives, or if I can't manage that, separate partitions. There is no real speed increase if using an SSD, or even faster magnetic drives, but there is a huge "easy fix" option when the OS eventually tanks. Just swap the drive or re-ghost that partition. The user's data is intact. When properly set up, between a D: drive and "Roaming profiles", reinstalling Windows is a 5-minute non-issue. It makes it a good step one for a level 1 tech.

coteyr

Posted 2018-07-18T06:53:48.823

Reputation: 150

I'm not sure "a good step one for a Level 1 Tech" would be to re-install Windows, regardless of how quick it is. I think re-installing the Operating System should almost always be an option of last-resort. So many weird little things might break if you remove the OS installation under all the installed applications and replace it with another (albeit very similar) one. For instance, what happens to the application-specific registry entries that aren't in HKCU? – Aaron M. Eshbach – 2018-07-20T18:27:10.260

They get wiped away too, meaning you also get to rule out config issues. – coteyr – 2018-07-20T22:59:03.020

2

I'm wondering if your decades old IT department was concerned about backup. Since C: is a boot/OS partition, it would be typical to use some type of image backup, but for a data / program partition, an incremental file + folder backup could be used. Reducing the space used in the C: partition would reduce the time and space needed to backup a system.
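
As a toy illustration of the file + folder approach (a real backup tool additionally handles locked files, hard links, VSS snapshots, deletions, etc.), here is a Python sketch that copies only files newer than their existing backup copy; the paths are placeholders:

    # Incremental-style copy: only files whose modification time is newer than
    # the copy already present in the backup target get copied.
    import os
    import shutil

    SOURCE = r"D:\Data"             # hypothetical data partition
    DEST = r"E:\Backups\Data"       # hypothetical backup target

    def incremental_backup(source: str, dest: str) -> None:
        for dirpath, _dirnames, filenames in os.walk(source):
            rel = os.path.relpath(dirpath, source)
            target_dir = os.path.join(dest, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                src_file = os.path.join(dirpath, name)
                dst_file = os.path.join(target_dir, name)
                if (not os.path.exists(dst_file)
                        or os.path.getmtime(src_file) > os.path.getmtime(dst_file)):
                    shutil.copy2(src_file, dst_file)   # copies data + timestamps

    incremental_backup(SOURCE, DEST)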


A comment on my personal usage of the C: partition. I have a multi-boot system including Win 7 and Win 10 and I don't have any OS on the C: partition, just the boot files. I use Windows system image backup for both Win 7 and Win 10, and Windows system image backup always includes the C: (boot) partition, in addition to the Win 7 or Win 10 partition, so this is another scenario where reducing the amount of data and programs on the C: partition reduces the time and space needed for a system image backup (or restore if needed).


I'm leaving this section in my answer because of the comments below.

Since my system is multi-boot, rebooting into a different OS makes backup of data / program partitions simpler since there's no activity on the partition(s) while they are being backed up. I wrote a simple backup program that does a folder + file copy along with security and reparse info, but it doesn't quite work for Win 7 or Win 10 OS partitions, so I'm using system image backup for the C:, Win 7, and Win 10 OS partitions.

rcgldr

Posted 2018-07-18T06:53:48.823

Reputation: 187

...what? There's no shortage of tools and methods to back up the system partition. Also, your home-grown backup system will likely fail for anything complex because it doesn't handle the common cases of hard-linked files (SxS relies heavily on this), junction points (again, core OS relies on this), partially-written files (VSS handles this), locked files (again, VSS), etc. – Bob – 2018-07-19T06:22:18.070

I can probably name half a dozen different software packages - Veeam, Macrium, Acronis, Symantec Ghost... that do this. Most of these use inbuilt mechanisms in the background too. – Journeyman Geek – 2018-07-19T06:54:06.407

I run Veeam, backed up to an external drive and run daily. There are a few more moving parts for backups, but you certainly don't need to manually defrag, or back up offline. – Journeyman Geek – 2018-07-19T07:42:15.637

As far as online full-partition backups, just about everything uses VSS to get a point-in-time snapshot (otherwise you run into issues when you back up, say, a database file that's partially written, then backup the database journal after the file is finished reading). Disk2vhd is the most trivial example; more full-featured suites like Acronis will additionally handle differential/incremental images.

– Bob – 2018-07-19T07:45:38.620

@Bob - I only run my program after rebooting into a different OS, so any partition being backed up is not active, avoiding the point-in-time snapshot issue of trying to back up an active partition. It's similar to booting from a CD-ROM to run a disk imaging utility to "clone" a hard drive (which I did a while back when I replaced all my hard drives). – rcgldr – 2018-07-19T07:58:48.540

Where I work, we limit partition sizes on most of our computers due to storage of backups. We backup almost everyone's computer to network storage for quick recovery, but if we give them the whole drive to play with, we inevitably get some iPhone backups and home videos stored on the drive which double or triple the space used. Now we set the C: drive at a reasonable size, and partition the rest off if needed for them to store what they want with the understanding that it doesn't get backed up. – GuitarPicker – 2018-07-20T16:17:35.713

2

Here is one reason, but I don't believe it is a valid reason for today's (modern) computers.

This goes back to Windows 95/98 and XP. It probably doesn't apply to Vista and later, but it was a hardware limitation, so running a newer OS on old hardware would still have to deal with the limitation.

I believe the limitation was 2 GB, but there could have been a 1 GB limitation (or perhaps others) at an earlier time.

The issue was (something like) this: the BOOT partition had to be within the first 2 GB (perhaps 1 GB earlier) of the physical space on the drive. It could have been that 1) the START of the BOOT partition had to be within the bounds of the limit, or, 2) the ENTIRE boot partition had to be within the bounds of the limit. It's possible that at various times, each of those cases applied, but if #2 applied, it was probably short-lived, so I'll assume it's #1.

So, with #1, the START of the BOOT partition had to be within the first 2 GB of physical space. This would not preclude making one big partition for the Boot/OS. But the issue was dual/multi boot. If it ever seemed possible that you would want to dual/multi-boot the drive, there had to be space available below the 2 GB mark to create other bootable partitions on the drive. Since it may not be known at install time whether the drive would ever need another boot partition, say Linux, or some bootable debug/troubleshoot/recovery partition, it was often recommended (and often without knowing why) to install on a "small" OS boot partition.

Kevin Fegan

Posted 2018-07-18T06:53:48.823

Reputation: 4 077

2

No, not with Windows and its major software suites insisting on ties to System: despite installing them to Programs:. (It's an institutionalized necessity the way most OSes are built.) A Data: volume makes sense, but a separate removable drive for your data (or NAS, or selective or incremental backups to such a removable drive) makes even more sense.

Partitioning for multi-OS systems also makes sense, but each partition forces you to select a hard upper storage limit. Generally it's better with separate drives even in this case.

And today, Virtual Machines and Cloud drives supplement many of these choices.

Henrik Erlandsson

Posted 2018-07-18T06:53:48.823

Reputation: 299

2

There is one particular reason — using volume snapshots.

A volume snapshot is a backup of the whole partition. When you restore from such kind of backup, you rewrite the whole partition, effectively rolling back the system to the previous state.

A system administrator might create such snapshots on a regular basis in preparation for any kind of software failures. They can even store them on another partition of the same drive. That's why you want the system partition to be relatively small.

When using this scheme, users are encouraged to store their data on the network drive. In case of any software problems, a system administrator can just roll back the system to the working state. That would be extremely time-efficient compared to manually investigating the cause of the problem and fixing it.

enkryptor

Posted 2018-07-18T06:53:48.823

Reputation: 655

0

I have been programming for nearly half a century. Another reply mentions historical reasons, and another long reply mentions multiple physical disks.

I want to emphasize that multiple physical disks is most likely what began the recommendation. More than half a century ago, back when there were no such things as partitions, it was extremely common to use a separate physical drive for the system. The primary reason for that is the physical movement of the heads and the spinning of the drives. Those advantages do not exist for partitions when the physical drive is used often for other things.

Also note that Unix separates the system and the data into separate partitions. There are many good reasons to do that, as explained in many other answers, but for performance, separate physical drives is the primary justification.

user34660

Posted 2018-07-18T06:53:48.823

Reputation: 99

-1

The reason we used to make 2 partitions was due to viruses. Some viruses used to overwrite the boot sector and the beginning of the disk.

Backups on users' computers used to boil down to a copy of the whole program onto a floppy disk (in actuality, a non-backup).

So when a virus "ate up" the beginning of the disk, usually only the system had to be reinstalled.

And if there were no backups, then the recovery of data was easier if the second partition was intact.

So if you have backups, this reason is not valid.

Robert Andrzejuk

Posted 2018-07-18T06:53:48.823

Reputation: 180