
My VPS setup is as follows:

NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                    11:0    1 1024M  0 rom
vda                   253:0    0   60G  0 disk
├─vda1                253:1    0  9.8G  0 part /
└─vda2                253:2    0 50.2G  0 part
  └─VolGroup1-LogVol1 252:0    0 50.2G  0 lvm  /mnt/lvm1
vdb                   253:16   0   10G  0 disk
Disk /dev/vda: 60 GiB, 64424509440 bytes, 125829120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: XXXXXXXX
Device     Boot    Start       End   Sectors  Size Id Type
/dev/vda1  *        2048  20482047  20480000  9.8G 83 Linux
/dev/vda2       20482048 125829119 105347072 50.2G 83 Linux
Disk /dev/vdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX
Device     Start      End  Sectors Size Type
/dev/vdb1   2048 20969471 20967424  10G Linux filesystem
Disk /dev/mapper/VolGroup1-LogVol1: 50.2 GiB, 53934555136 bytes, 105340928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

/dev/vdb is a single block storage volume, initially 10 GB in size. I will add more space to this block storage later, or could add more block storage volumes (/dev/vdc, /dev/vdd, etc.).

I need to mount it at /mnt/lvm1. The applications using this folder will need more and more space, and I can't make them use multiple folders.

What is the optimal setup for continually adding space to a single mountpoint? Of course I can extend VolGroup1-LogVol1 onto /dev/vdb1, but are there other ways to do this that might be easier to manage? This could be a different PV/VG/LV setup and/or the use of multiple block storage volumes.

Gaia

1 Answer


There is no one optimal way to do this. There are however several ways that work better than others depending on your scenario.

In general, avoid as many abstraction layers as you can. If you're going to use the entire disk for LVM and nothing else, there's no point in putting a partition table on it - eliminate that layer and make /dev/vdb an LVM physical volume on its own. This also makes resizing the device much easier and safer later on, since you won't have to resize a partition every time as well. Besides, LVM is essentially an advanced partition table anyway.
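As a minimal sketch of that approach, reusing the VolGroup1 name from your question (the wipefs step assumes the existing GPT label on /dev/vdb is no longer needed - it destroys anything on the device):

wipefs -a /dev/vdb            # drop the existing partition table (destructive)
pvcreate /dev/vdb             # the whole disk becomes a physical volume
vgextend VolGroup1 /dev/vdb   # add it to the existing volume group

From there the new extents can be handed to LogVol1 (mounted at /mnt/lvm1) with lvextend.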

If this block device is being provided by something like EBS, then that volume can be expanded while online. Most other block device targets from various providers can be expanded online as well. Making LVM register this expanded volume takes only a single command (provided you're not using a partition table):

pvresize /dev/vdb

After that re-detection of physical volume capacity, the new size is reflected in LVM and immediately available. You can then freely use the expanded space by extending your LVs or adding new ones.
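A sketch of that follow-up step, assuming the VolGroup1/LogVol1 names from your question and a filesystem that fsadm can grow online (e.g. ext4 or XFS):

pvs                                                # confirm the PV now shows the larger size
lvextend -r -l +100%FREE /dev/VolGroup1/LogVol1    # grow the LV and, via -r, its filesystem

The -r (--resizefs) flag saves you a separate resize2fs/xfs_growfs step.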

Adding capacity by adding more physical volumes works, but it's best avoided if you can. Managing many physical volumes rather than one large one can be annoying to troubleshoot, especially when you have to do things like globally filter multipath volumes, manage the remote storage targets themselves, or determine which physical volume is causing a volume group problems.

However, in an environment where it's difficult or impossible to resize the existing backing storage targets that provide said PVs, it's easier to just use LVM's ability to aggregate block devices in a volume group and add more devices - this is usually the case with "bare" hard drives, for example.
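A rough sketch of that aggregation path, with /dev/vdc standing in for a hypothetical newly attached device:

pvcreate /dev/vdc             # initialize the new bare device as a PV
vgextend VolGroup1 /dev/vdc   # pool it into the existing volume group

Then extend the LV exactly as above; the mountpoint /mnt/lvm1 never changes.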

Spooler
  • I agree with your reasoning, given *my specific needs*, I am now likely to provision LVM directly on the raw block device. But for the sake of making your answer more complete, I would like to point out to the 3 top answers on this question that advise against doing it this way: https://serverfault.com/questions/439022/does-lvm-need-a-partition-table – Gaia May 14 '18 at 23:18
  • Advise against it they may, but those are not very good reasons for using partition tables underneath LVM. If it's to avoid operator error, an operator should never make profound changes to block devices of any kind without looking at what they contain first. If an operator thinks your volumes are corrupt just because they don't have partition tables on them, what is that person doing managing an LVM system in the first place? Similarly, if a piece of software makes the same wild assertions with no data, then everything it asserts on the matter must be taken with a grain of salt at least. – Spooler May 14 '18 at 23:34
  • should the part of VolGroup1-LogVol1 which is on the main drive (/dev/vda2) also be on raw blocks, or is it ok for it to be in a partition while /dev/vdb alone is on raw blocks? – Gaia May 15 '18 at 15:09
  • That's a case that requires the use of a partition table. You need it for booting. There's nothing wrong with using that partition as a PV, as you're probably never going to need to resize it. All I'm really saying is that it's silly to use one when it's not required. – Spooler May 15 '18 at 15:13