
In 2013, does it make sense to still have multiple mount points on a new Linux image, or does allocating all space to / make more sense?

I'd prefer to avoid the reboot required to increase the size of a mount point. I'd also prefer to monitor a single mount's space. I'd rather know the entire server is above 70% drive space usage, vs dealing with individual mount points.
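(For context, threshold-style monitoring works much the same with one mount or many; a minimal sketch, where the 70% figure matches the question and everything else is illustrative:)

```shell
# Illustrative only: print any mounted filesystem above a usage threshold.
# With multiple mount points, the same one-liner covers all of them.
THRESHOLD=70
df -P | awk -v t="$THRESHOLD" 'NR > 1 && $5+0 > t { print $6 " is at " $5 }'
```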

Jeremy Mullin
  • Why do you have to reboot to increase the size of a mount point? I think all the common filesystems support online expand at this point. – derobert Aug 19 '13 at 20:54

5 Answers


Sure it's still useful. You don't want a runaway process to fill a log and cause / to go full disk. Also, if you're using something like LVM you can do online expansion of volumes.
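For example, growing a logical volume without unmounting it is typically a two-step operation. A rough sketch only: the volume group name `vg0`, the logical volume name `var`, and the ext4 filesystem are all assumptions here, and both commands require root:

```shell
# Sketch: grow /var by 5 GiB online (assumes LVM VG "vg0", LV "var",
# an ext4 filesystem, and free extents in the VG; requires root).
lvextend -L +5G /dev/vg0/var
resize2fs /dev/vg0/var    # ext3/ext4 support growing while mounted
```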

With many VMs, you're going to want to separate I/O anyway. You'll probably want your databases on separate spindles, and the only way to accomplish that is to have a separate mount point for your database's location. Databases aside, it gives you more granular flexibility down the road if you outgrow your original design.

So, in short, yes there are still good reasons for doing this in 2013.

MDMarra
  • Won't the machine still crash even if /var (or /tmp) goes full anyway? – onionjake Aug 13 '13 at 03:14
  • @onionjake No, not necessarily. But they will crash if `/` fills up. – ewwhite Aug 13 '13 at 07:56
  • Thanks for the note about the logs getting out of control. These particular VMs use a SAN so I believe IO is already distributed and isn't a concern for me in this particular situation. – Jeremy Mullin Aug 13 '13 at 17:15
  • Further to this point, if you want a single VM to span multiple VMFS pools in ESXi, you'll have to use multiple virtual disks (which appear as physical disks to the VM). You can still combine these into one mount point with LVM if you really want, but that's bad practice, in my opinion. – Paul Gear Aug 14 '13 at 00:32

Nowadays I would not use too many separate mounts, but a few key ones are still helpful in system administration.

Just 2 or 3, especially with one that varies in size. Which ones depends on what you are running. I would say just / (relatively stable) and /var (changing). Depending on the OS and disk geometry, /boot may also be needed. /tmp is likely a tmpfs mount set up by the installer.

The changing (/var mostly, but could be just /var/log and /var/lib/mysql etc.) volumes are usually what you need to worry about and plan for expansion. So if possible, use lvm etc. to make resizing easier.

johnshen64
  • I personally use LVM, and I believe /boot must be on its own partition, not part of a volume group (if you use legacy GRUB). –  Aug 12 '13 at 20:38

Yes, I still use multiple partitions on virtual machines and mountpoints for monitoring, security and maintenance requirements.

I'm not a fan of single or limited mountpoint virtual machines (unless they're throwaway machines). I treat VMs the same way I treat physical servers. Aligning partitions with some of the Linux Filesystem Hierarchy Standard still makes sense in terms of logical separation of executables, data partitions, temp and log storage. This also eases system repair. This is especially true with virtual machines and servers derived from a template.

(BTW, I don't like LVM on virtual machines either... Plan better!!)

In my systems, I try to do the following:

  • / is typically small and does not grow much.
  • /boot is predictable in size and the growth is controlled by the frequency of kernel updates.
  • /tmp is application and environment dependent, but can be sized appropriately. Monitoring it separately helps meter abnormal behavior and protects the rest of the system.
  • /usr should be predictable, containing executables, libraries, etc.
  • /var grows, but the amount of data churn can be smaller. Nice to be able to meter it separately.
  • And a growth partition. In this case, it's /data, but if this were a database system, it may be /var/lib/mysql or /var/lib/pgsql... Note that it's a different block device, /dev/sdb. This is simply another VMDK on this virtual machine, so it can be resized independently of the VMDK containing the real OS partitions.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              12G  2.5G  8.8G  23% /
tmpfs                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1             291M  131M  145M  48% /boot
/dev/sda7             2.0G   68M  1.9G   4% /tmp
/dev/sda3             9.9G  3.5G  5.9G  38% /usr
/dev/sda6             6.0G  892M  4.8G  16% /var
/dev/sdb1             360G  271G   90G  76% /data

Separation of some of these partitions makes it far easier to identify trends and detect anomalous behavior; e.g. 4GB core dumps in /var, or a process that exhausts /tmp.

Normal: [monitoring graph of steady per-partition usage]

Abnormal: [monitoring graph] The sudden rise in /var would not have been easy to detect if one large / partition were used.


Recently, I've had to apply a cocktail of filesystem mount parameters and attributes (nodev,nosuid,noexec,noatime,nobarrier) for a security-hardened VM template. The partitioning was an absolute requirement for this because some partitions required specific settings that could not be applied globally. Another data point.
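As an illustration of why per-partition options require per-partition mounts, an /etc/fstab excerpt might look something like this (the device names match the df output above; the specific option choices are hypothetical, not a recommendation):

```
# Hypothetical /etc/fstab excerpt: mount options differ per mount point,
# which is impossible with a single / partition.
/dev/sda7   /tmp    ext4    defaults,nodev,nosuid,noexec    0 2
/dev/sda6   /var    ext4    defaults,nodev,nosuid           0 2
/dev/sdb1   /data   ext4    defaults,noatime,nobarrier      0 2
```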

ewwhite

Sure, multiple mount points still have their advantages, virtualized server or not.

But with virtualization you probably also use virtual machines templates, right? And your monitoring system, such as Nagios (with NConf?) also supports templates? If so, then you need to go through this mental mount point fight only once.

Back to topic.

I used to split my systems this way: /, /home, /usr, /var, /tmp (and possibly some other mount point for data), but that was overkill and a hassle. Nowadays a simple OS image with only /, perhaps with a separate /var, is the way to go for me; then if a virtual server needs more storage for data, I give it yet another disk image and mount it wherever needed.
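The "yet another disk image" step is a short procedure once the hypervisor has attached the disk. A rough sketch only: the device name /dev/sdb, the ext4 filesystem, and the /srv/data mount point are all assumptions, and every command requires root:

```shell
# Sketch: turn a newly attached virtual disk into extra storage
# (assumes the hypervisor exposes it as /dev/sdb; requires root).
mkfs.ext4 /dev/sdb                  # filesystem on the whole disk, no partition table
mkdir -p /srv/data
mount /dev/sdb /srv/data
# Persist the mount across reboots:
echo '/dev/sdb /srv/data ext4 defaults,noatime 0 2' >> /etc/fstab
```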

Janne Pikkarainen
  • How do you detect problems in say, `/opt` or `/tmp` under a single partition setup? – ewwhite Aug 13 '13 at 07:48
  • If a server starts to eat up its disk space rapidly, something like `du -m --max-depth=4 / | sort -nr | head -n 30 | less` is surprisingly effective. And in a controlled, monitored environment, how many potential places do you have for this kind of stuff, anyway? `/var/log`, `/tmp`, `/opt/*/log`, perhaps something else? Not too hard. – Janne Pikkarainen Aug 13 '13 at 07:56

For file servers, I also tend to mount /home on its own partition/disk and use the noexec option when mounting it. It's a bit paranoid, but it prevents users from executing files from within their home folders.

As well, I tend to put /boot on a RAID 1 mirror across all drives; again, it's an old practice I follow, but I haven't seen a downside to it yet.

Canadian Luke
    The question was about virtual servers, so the bit about /boot on RAID 1 doesn't apply. But it's definitely a good idea on physical servers. – Paul Gear Aug 14 '13 at 00:30