46

I have often wondered why there is such a passion for partitioning drives, especially on Unixy OSes (/usr, /var, et al.). This does not seem to be a common theme with Windows installations.

It seems that partitioning greatly increases the likelihood of filling one partition while others have a great deal of free space. Obviously this can be prevented by careful design and planning, but things can change. I've experienced this on machines many times, mostly on ones set up by others, or configured by the default install settings of the OS in question.

Another argument I've heard is that it simplifies backup. How does it simplify backup? I've also heard that it improves reliability. Again, how?

Almost all of the problems I have encountered with disk storage have been physical failures of the disk itself. Could it be argued that partitioning can potentially accelerate hardware failure, because of the thrashing a disk does when moving or copying data from one partition to another on the same disk?

I'm not trying to rock the boat too much; I would just like to see justification for an age-old admin practice.

  • In an Infrastructure-as-a-Service cloud, partitions are moot because drives (usually called volumes) can be attached so flexibly. – Skaperen Feb 16 '13 at 06:20

12 Answers

41
  • Faster fsck. Let's say your system fails for some reason and needs to run an fsck when it reboots. With a really large partition, that fsck can take forever, and nothing on the system will work until the fsck of the entire system is done. If you partition the system so the root partition is pretty small, then you may be able to get the system up and some of the basic services running while you wait for the fsck of the larger volumes to complete.
    • If your system has small drives, or only runs a single service, this may not really matter.
    • With journaled filesystems this may not matter most of the time, but occasionally, even with a journaled filesystem, you have to run a full fsck.
  • Improved security, because you can mount a filesystem read-only.
    • For example, nobody should need to write to /usr during normal usage, so why not just mount that filesystem read-only? Having filesystems read-only when they don't need to be written to will block some script-kiddie attacks, and may prevent you from destroying things when you don't mean to.
    • This may make maintaining the system more difficult, since you'll need to remount read-write when you apply updates (see the fstab sketch after this list).
  • Improved performance or functionality for a specific service/usage.
    • Some filesystems are more appropriate for specific services/applications, or they let you configure the filesystem so that it operates better in some cases. Maybe you have a filesystem with lots of small files and need more inodes. Or maybe you need to store a few large files, such as virtual disk images.
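
A minimal sketch of what the fsck and read-only points can look like in practice (the device names and filesystem choices here are made up for illustration):

# /etc/fstab - a small root plus a read-only /usr; the final field is the
# fsck pass number, so / is checked first and the big volume afterwards
/dev/sda1  /      ext4  defaults  1 1
/dev/sda2  /usr   ext4  ro        1 2
/dev/sda3  /data  ext4  defaults  1 2

# temporarily make /usr writable for updates, then lock it down again
mount -o remount,rw /usr
mount -o remount,ro /usr

# and for the many-small-files case: roughly one inode per 4KB of space
mkfs.ext4 -i 4096 /dev/sdc1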

I don't think setting up lots of partitions is something you should do for every system. Personally, on most of my Linux servers I just set up one big partition, since most of my systems have smallish drives and are single-purpose, serving some infrastructure role (DNS, DHCP, firewall, router, etc.). On my file servers I do set up partitions to separate the data from the system.

Could it be argued that partitioning can potentially accelerate hardware failure, because of the thrashing a disk does when moving or copying data from one partition to another on the same disk?

I highly doubt a well-partitioned system would have any increased likelihood of failure.

Zoredache
  • 128,755
  • 40
  • 271
  • 413
  • 9
    +1 Both security and disaster preparedness are done in layers. I like to think of partitions as similar to bulkheads in a ship. They are there so that if something catastrophic happens in one segment of the ship, the ship as a whole isn't necessarily at risk. So when filesystem corruption happens, the damage is somewhat contained. Likewise, from a security point of view, if /home is mounted noexec it can prevent certain types of attacks if a user's account is compromised by a weak password, for instance. – 3dinfluence Sep 01 '09 at 19:42
  • 1
    There are known exploits that work even on partitions mounted noexec or ro. Real security is not achieved by partitioning, but partitioning can help limit damage (cf. a log-flood attack, for example). – drAlberT Sep 02 '09 at 08:55
19

One reason to keep /home separate is that you can reinstall the operating system and never worry about losing user data. Beyond that, there's a lot of security to be had in mounting everything either read-only or noexec. If users can't run code anywhere they can write code, it's one less attack vector.

I'd only bother with that on a public machine though, as running out of disk space in one partition while having plenty in another is a serious annoyance. There are ways to work around this, like software RAID or ZFS, where you should be able to dynamically resize partitions easily, but I have no experience with them.
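
As an illustration of the noexec idea, a single fstab entry along these lines (the device name is hypothetical) keeps a user-writable filesystem from executing binaries, device nodes, or setuid programs:

# user-writable, but no execution, no device nodes, no setuid
/dev/sda5  /home  ext4  noexec,nodev,nosuid  1 2

As the comments on the previous answer point out, though, noexec is more of a speed bump than a hard barrier.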

semi
  • 726
  • 3
  • 7
  • 15
  • The separation of /home sounds like more of a desktop thing. On servers, data is frequently in some other location, like /var/www or /srv. I would argue your idea should be more about separating user data wherever it is stored, not just when it is in /home. – Zoredache Sep 01 '09 at 19:32
  • On machines used for scientific processing it's common for a significant number of people to have shell accounts. In this case a separate /home partition can be quite useful. – jay_dubya Sep 02 '09 at 00:17
  • If you've got a significant number of machines, it's frequent that /home is an NFS mount from a fileserver, which logically has /home on its own partition, because of the aforementioned issues, too. – Matt Simmons Sep 02 '09 at 01:08
  • Users will often use all available space and this can be catastrophic to the services provided by a server. Only allowing them write access to a separate partition protects the server from failing due to a full root partition. Putting your log files on a separate partition does the same. – Chris Nava Sep 02 '09 at 05:00
  • I think disk quotas and log rotations are a better solution to the space problem, but separate partitions do make a nice last-resort safety mechanism – semi Sep 02 '09 at 14:35
  • @Zoredache Separation of the OS is the issue. Separate it from all else. Yeah, for desktops separating it from /home might make sense. On a server, it depends on where the data is vs. the OS. In the cloud, launch images are generally the OS, and extra volumes/drives are attached (and usually sans partitions). – Skaperen Feb 16 '13 at 06:17
13
  • Simplifying backup

You can make backups (via dump or similar) of the things you want, not the things you don't. dump(1) is a better backup system than tar(1).
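
For instance, a minimal sketch of a per-filesystem backup with dump and restore (the file and directory names are made up):

# level-0 dump of just the /home filesystem, recording it in /etc/dumpdates
dump -0u -f /backup/home.0.dump /home

# later, rebuild it into the current directory
cd /mnt/newhome && restore -rf /backup/home.0.dump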

  • Filling up partitions

That's an argument for partitioning as well. Users filling up their homedirs doesn't wreck the server: it can't take the web server down, keep logs from being written, keep root from logging in, etc.

It also allows you to move a section of your data (say, /home) onto another disk more transparently: copy it over, then mount the new disk in its place. If you're using something that supports shadow copies/snapshots, you can even do that live.
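
A rough sketch of such a move, assuming the new disk shows up as /dev/sdb1 (the names are illustrative):

mkfs.ext4 /dev/sdb1                # format the new disk
mount /dev/sdb1 /mnt/newhome
rsync -aHAX /home/ /mnt/newhome/   # copy, preserving hard links, ACLs, xattrs
umount /mnt/newhome
mount /dev/sdb1 /home              # then update /etc/fstab to match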

Bill Weiss
  • 10,782
  • 3
  • 37
  • 65
9

I have always been taught to keep /var on a separate partition, so that an out-of-control log file clogs up a single partition, not the entire drive. If it's on the same partition as the rest of the system and you fill the entire disk 100%, the system can crash and make for a nasty restore.
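
In fstab terms, that separation might look something like this (devices and filesystems are made up for illustration):

# a runaway log can fill /var, but / stays writable and the box stays up
/dev/sda1  /     ext4  defaults  1 1
/dev/sda2  /var  ext4  defaults  1 2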

Skaughty
  • 733
  • 1
  • 5
  • 12
  • 1
    I can count on one hand over my 15-year (what? say it ain't so!) career the number of times I've lost a system because a run-away log file (or whatever) brought down a machine. On the other hand, I can count on one hand the number of times so far this year I've been stymied because someone decided /var would never need to be larger than 1GB and been wrong, which left me looking at '00s of free GB in /opt, all of which might as well have been on the moon for all the good it does me. (I'm opposed to partitioning. :) – David Mackintosh Sep 01 '09 at 23:10
  • I agree with David, having had exactly the same experiences with both Linux and Windows machines. Unless there is an overwhelming reason to do otherwise each disk (or raid array) gets one partition. – John Gardeniers Sep 02 '09 at 02:18
  • Well, this week it finally happened on a Windows system. McAfee put out a big update. We had about 5-10 systems that didn't like it. The antivirus would try to install, create a 35MB log file, and try again. 1,500 times later I had 0KB free and a system that couldn't be logged in to. – Skaughty Oct 06 '09 at 13:57
8

All of the arguments that Zoredache puts forward are valid; one might quibble with the details a bit (having a machine up faster so you can do other things while fsck'ing other filesystems doesn't do you much good if the system's reason for existing lives on those other filesystems); however, they are all a bit of after-the-fact justification.

In the really old-school days, you didn't have filesystems on separate partitions -- you had them on separate disks, because disks were really small. Think 10MB.(1) So you had a tiny / partition, a /var disk, a /usr disk, a /tmp disk, and a /home disk. If you needed more space, you bought another disk.

Then "big" 50MB disks started costing less than the moon program, and suddenly it became possible to put an entire system on one disk with a usable amount of user space.

Still, with disk sizes small compared to what the computer could generate, isolating /var and /opt and /home so that filling one didn't bring down the whole computer was still a good idea.

Today, in an enterprise situation, I don't partition the OS. Data gets partitioned off, especially if it is user-generated; but frequently that's because it's on high-speed and/or redundant disk arrays of some kind. However, /var and /usr live in the same partition as /.

In a home environment, same thing -- /home should probably be on a separate disk/array, so that one can install/upgrade/break/fix whatever OS flavors are desired.

The reason for this is that no matter how big you guess your /var or /usr or whatever tree might get, you'll either be hilariously wrong or you'll ridiculously over-commit. One of my old(er)-school colleagues swears by partitioning, and I always get grief from him when he ends up sitting through a 180-day fsck on a system I've created. But I can count on one hand, over my entire career, the number of times something's filled up / and brought down the system, while I can count on one hand the number of times so far this year that I've been staring at a system where someone decided /var would never need to be more than (say) 1GB and was wrong, leaving me staring at a full /var and '00s of free GB elsewhere on the system, all of which might as well have been on the moon for all the good they do me.

In today's world of big disks, I don't see that there's any real reason to partition the OS tree. User data, yes. But separate partitions for /var and /usr and /var/spool etc etc etc? No.


(1) And I know that just by picking that size, I'm going to get someone in the comments saying "10MB? Luxury! Why, our disks were merely..."

David Mackintosh
  • 14,223
  • 6
  • 46
  • 77
  • Actually 10 MB was the size of the disk the first time I ever heard someone "XXX FooBytes: more memory than I'll *ever* need!". Though I'm a youngster, so that was circa 1982. Other values have been 40 MB, 320 MB, 2.5 GB, 40 GB, ... the data expands to fill the available storage. – dmckee --- ex-moderator kitten Sep 02 '09 at 00:44
5

In reply to:

It seems that partitioning greatly increases the likelihood of filling one partition while others have a great deal of free space.

On a Linux machine, LVM (Logical Volume Manager) can be used to prevent this. Most filesystems allow resizing (some even online). I create different partitions for different uses and format them with different filesystems (e.g. XFS for large download files that I can quickly delete). Need more space? Mount a new drive, move the data to it, then mount it where the data used to be. It's completely seamless to users and applications.

With LVM, you can add disks or partitions to the volume group, then create logical volumes in that group. If you leave free space in the volume group, you can later grow volumes that are filling up. If the filesystem supports it (ext3, ext4, ReiserFS), you can also shrink a volume that you've over-allocated, though shrinking generally has to be done offline.

For example: make a boot partition on /dev/sda1, then make a second (unformatted) partition, /dev/sda2:

pvcreate /dev/sda2               # register the partition with LVM as a physical volume
vgcreate vg /dev/sda2            # create a volume group containing sda2
lvcreate -n root -L5G vg         # carve logical volumes out of the group
lvcreate -n home -L10G vg
lvcreate -n downloads -L100G vg

mkfs.ext3 /dev/vg/root           # format each volume with a suitable filesystem
mkfs.ext4 /dev/vg/home
mkfs.xfs /dev/vg/downloads       # XFS for the large, quickly-deleted files

mount /dev/vg/root /
mount /dev/vg/home /home
mount /dev/vg/downloads /downloads

When you need more space on /downloads (while the filesystem is mounted):

lvresize -L+50G /dev/vg/downloads    # grow the logical volume by 50GB
xfs_growfs /downloads                # grow XFS to fill it (takes the mount point)

And you now have a 150GB downloads volume. It's similar for home. In fact, I just resized an ext4 LVM "partition" today. On the other hand, logical volumes aren't really partitions, and what you say about partitions being the wrong size jibes with my personal experience (more trouble than they're worth).
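
For completeness, here is a sketch of the two operations mentioned but not shown above: adding a disk to the volume group, and shrinking an over-allocated ext4 volume (unlike the XFS grow, this has to be done offline; device names are illustrative):

# grow the pool: add a new disk's partition to the volume group
pvcreate /dev/sdb1
vgextend vg /dev/sdb1

# shrink an over-allocated ext4 volume - offline only
umount /home
e2fsck -f /dev/vg/home       # resize2fs requires a clean check first
resize2fs /dev/vg/home 8G    # shrink the filesystem first...
lvresize -L8G /dev/vg/home   # ...then the volume, to the same size
mount /dev/vg/home /home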

Swoogan
  • 2,007
  • 1
  • 13
  • 21
4

The traditional Unix partitioning scheme is definitely an old-school practice that isn't as useful as it once was. Back in the day, when Unix system uptime was measured in years and you had dozens or hundreds of users futzing around with shells, mounting /usr read-only was a useful way to protect the system. Now, remounting filesystems to patch them seems more labor-intensive and not so useful.

At my university back in the good old days, the Unix clusters had read-only filesystems with the standard unix tools, and add-on applications were in /usr/local, which was an NFS and later an AFS filesystem. Part of that was convenience... who wanted to recompile software on a dozen boxes in the cluster when you could run apps over a high-speed, 4Mb or 10Mb network? Today, with decent package managers and lots of cheap disk, it isn't that big of a deal.

I think my thinking started to change on Sun boxes with Veritas Volume Manager, back around 1999, which considerably reduced the pain of moving disks around.

Today, when I think partitioning, I'm thinking in terms of data protection and performance. Illustrative example:

  • Tier 1 SAN is very fast, very available (5 9's), replicated, and very expensive. Mission critical Databases or transaction logs live there.
  • Tier 2 SAN is fast, available (4 9's), expensive. Applications or lower priority data lives here.
  • Tier 3 SAN is available (4 9's), cheap. Stuff that isn't performance sensitive lives there.

These considerations apply to Windows as well. We have an SCCM server that manages around 40k clients. The database and logs are on mega-buck IBM DS8000 disk. The software packages are on an EMC Celerra with large, slow SATA disks that cost 60% less per GB.

duffbeer703
  • 20,077
  • 4
  • 30
  • 39
2

(Assuming a single large disk is available,) I put /home and /var on separate partitions to control the "out-of-control [user|log file] filling up all the space" problem, and to allow easy OS upgrades without touching /home, but leave the rest together.

On older hardware it was sometimes necessary to have a separate boot partition to ensure that the kernel image was accessible to the boot loader.

2

I understand that this question is not OS-specific, right?

Under Windows, I tend to give all my machines as few partitions as possible, but no less than two - SYSTEM and DATA. If the machine has two physical disks, then one (smaller) will be SYSTEM, the other DATA. If there is just one disk, I split it in two partitions.

The reason for that is just one: when I need to reinstall the machine (and there will be such a time), I don't have to worry about the contents of the SYSTEM partition; I just do a full format on it and a clean install. This of course means that my Documents (and preferably Desktop too) have to be mapped to a folder on DATA, but that's easy to do, especially on Vista and later.

I've also tried making more partitions (like GAMES, MUSIC, MOVIES, etc.), but that only resulted in some of them overflowing into others, creating more mess than order.

Vilx-
  • 791
  • 4
  • 13
  • 25
2

You mention one partition filling while another has free space -- that's one of the reasons I partition: I can ensure that certain partitions don't fill up. Although, the way quotas used to be managed, you'd have to assign all of the users a zero quota on the other partitions, just to make sure they didn't start hiding files away if they managed to find a directory they could write to.

As for simplifying backup -- if I know what the max size of each partition will be, I can make sure that it's a size that neatly fits onto a single tape, and can be completed in a fixed amount of time.

As for reliability, the only thing I can think of is monitoring -- I can more easily see when a given partition is growing more than it should, which gives me a reason to look into it.

... now, all of that being said, we're far from the days of each user being given their little 20MB quota on a shared machine. Some of the old habits don't make sense -- but when you have a process go crazy and fill /var, which in turn fills /, and things grind to a halt, it's not a bad protection to have on production machines.

For home, I have partitions, but it's just to make it easier to manage the installed OSes.

Joe H.
  • 1,897
  • 12
  • 12
1

Personally I only use partitioning on my kid's computers. I create a large partition for the OS and a small partition for an image of the OS partition so that when the machine gets gunked up I can quickly restore it from the image.
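
On Linux, a crude sketch of that image-and-restore approach with dd (the partition and path names are hypothetical, and the OS partition must not be mounted while you copy it, e.g. boot from a live USB):

# take an image of the OS partition into the spare partition
dd if=/dev/sda1 of=/mnt/images/os.img bs=4M status=progress

# restore it later the same way, in reverse
dd if=/mnt/images/os.img of=/dev/sda1 bs=4M status=progress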

In an enterprise environment I've never heard a compelling argument for partitioning.

joeqwerty
  • 108,377
  • 6
  • 80
  • 171
1

Everything in moderation is a good thing - partitioning can be a good tool to isolate problems when there is a fault, such as a disk filling up or filesystem corruption.

Don't mix it up with hardware failure - that should be handled by hardware redundancy (RAID).

Having said that, filesystems these days fail less often - some even do online integrity checks (like ZFS). So hopefully offline fsck will go away at some point...
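
ZFS's online check is a scrub, which runs against a live pool (the pool name "tank" is just a placeholder):

zpool scrub tank     # verify all checksums in the background, while in use
zpool status tank    # report scrub progress and any errors found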

On the other hand, overdoing this only means more work for you and your team in the end - just do it in moderation and when it makes sense...

Lester Cheung
  • 659
  • 4
  • 11