
I have two somewhat related LVM questions regarding extending LVM managed volumes.

Firstly, from (edit: old) documentation that I've read about LVM and the relationship between Volume Groups and Physical Extents, if you want to grow a VG beyond 256GB you must use a PE size larger than 4MB. An example is this article, which displays the message "maximum logical volume size is 255.99 Gigabyte" with 4MB PEs. However, I would expect this to mean that a VG would not let you define/add more than 256GB worth of volumes if its PE size is 4MB, and that it would throw an error, or at least a warning, if you attempted it. I bring this up because I have a VM that shows this for one of its volume groups (excerpted):

VG Size               499.50 GiB
PE Size               4.00 MiB
Total PE              127872
Alloc PE / Size       127872 / 499.50 GiB
Free  PE / Size       0 / 0   

The VG consists of two Physical Volumes, one 100GB (/dev/sda2) and the other 400GB (/dev/sda3). How is it possible that I have successfully(?) defined 500GB of space in this VG with only a 4MB PE size? I have yet to do a practical test of filling up the mounted logical volumes to see whether I can really store 500GB, but unless something has changed in how LVM operates, will the logical volumes just "stop writing" data once they hit 256GB of utilized space, despite showing 244GB remaining? And can I change the VG's PE size to 8MB in-place regardless?

Secondly, if I would rather extend an existing LVM partition/physical volume to use the added space when I increase a vmdk hard drive's size, instead of creating a new physical volume and adding it to the VG (as this excellent article covers for VMWare), would I use pvresize? My constraint is to not destroy any existing data and simply add space to the volume, which the VMWare KB article accomplishes; however, after reading this SE question I have doubts as to whether you can merely extend the volume or whether you must "delete and create a larger one".

Since the VMWare article relies on adding a primary partition, you can obviously only follow that procedure up to the maximum of 4 primary partitions, after which you either can no longer grow your volumes or, I assume, must use something like pvresize (which I have no idea how to use properly and am hesitant to try for fear of destroying existing data). Any pointers on whether pvresize can do what I am looking for?

SeligkeitIstInGott

1 Answer


Answer to 1st question

Please consider that the article you're basing all of your considerations on is VERY outdated! Even though no clear date is reported in the HOWTO page you linked, if you look at the code related to the mentioned HOWTO, you can see that it refers to kernel 2.3.99, back in the year 2000, 15 years ago!

This is absolutely in line with my direct experience, where I have had NO problem at all with multi-terabyte VGs using 4MB PEs. Here follows the output of a running system: a RAID5 VG (vg_raid) built on top of 5 x 2TB SATA disks and serving a single 7.28 TB LV (lv_raid):

[root@nocdump ~]# cat /etc/centos-release 
CentOS release 6.5 (Final)

[root@nocdump ~]# uname -r
2.6.32-431.17.1.el6.x86_64

[root@nocdump ~]# rpm -q lvm2
lvm2-2.02.100-8.el6.x86_64

[root@nocdump ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg_raid
  System ID             
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               9,10 TiB
  PE Size               4,00 MiB
  Total PE              2384655
  Alloc PE / Size       2384655 / 9,10 TiB
  Free  PE / Size       0 / 0   
  VG UUID               rXke5K-2NOo-5jwR-74LT-hw3L-6XcW-ikyDp0    

[root@nocdump ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_raid
  PV Size               1,82 TiB / not usable 4,00 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               7ToSLb-H9Of-unDk-Yt22-upwi-qkVE-ZiEKo2

  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vg_raid
  PV Size               1,82 TiB / not usable 4,00 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               PaUyX1-jykz-B2Tp-KE2M-9VaT-E4uY-iv8ppi

  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               vg_raid
  PV Size               1,82 TiB / not usable 4,00 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               DCag4w-CWbp-bUUI-7S24-JCFL-NlUK-Vgskab

  --- Physical volume ---
  PV Name               /dev/sde1
  VG Name               vg_raid
  PV Size               1,82 TiB / not usable 4,00 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               3GW2LM-b01Y-oIgd-DHJf-Or0a-fys2-wLesSX

  --- Physical volume ---
  PV Name               /dev/sdf1
  VG Name               vg_raid
  PV Size               1,82 TiB / not usable 4,00 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               fxd1rG-E9RA-2WsN-hLrG-6IgP-lZTE-0U52Ge


[root@nocdump ~]# lvdisplay /dev/vg_raid/lv_raid
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid
  LV Name                lv_raid
  VG Name                vg_raid
  LV UUID                fRzAnT-BQZf-J1oc-nOK9-BC10-S7w1-zoEv2s
  LV Write Access        read/write
  LV Creation host, time nocdump, 2014-05-23 00:17:02 +0200
  LV Status              available
  # open                 1
  LV Size                7,28 TiB
  Current LE             1907720
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1280
  Block device           253:15

Answer to 2nd question

You say: "...I have doubts ... as to whether you can merely extend the volume or if you must 'delete and create a larger one'..."

Let's clarify some preliminary concepts:

  1. Let's suppose you're under normal conditions: you have an HDD with common msdos partitions. As such, you have at most 4 primary partitions;

  2. Let's suppose your LVM Physical Volume is defined on top of one of the above 4 primary partitions, and that such LVM partition (type 8e) spans up to the maximum available space of the disk (in other words, there is no other partition between the LVM-type-8e one and the end of the disk).

So, based on the above, you have an HDD similar to this:

[root@nocdump ~]# parted /dev/sda print
Model: ATA ST2000DM001-1E61 (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  525MB   524MB   primary  ext4         boot
 2      525MB   1999GB  1998GB  primary               lvm

In such a condition:

  3. if you are running out of space, you have to consider that:

    3a) what's going to be filled up is your filesystem (the "thing" commonly referred to as EXT3, EXT4, NTFS, FAT32, HFS, etc.);

    3b) your filesystem is encapsulated within a device. In a non-LVM scenario, such a device is typically a "partition". But in your case, as we're on LVM, such a device is an LVM Logical Volume;

    3c) a Logical Volume is contained within an LVM Volume Group;

    3d) a Volume Group is made up of Physical Volumes;

    3e) a Physical Volume is built on top of a "device" (the same kind of device referred to at step 3b for the non-LVM scenario) and, in your case, such a device is one primary partition (/dev/sda2 in my case, above).

so:

  4. in your case, the steps needed to enlarge the filesystem are:

    i) enlarge the physical disk, by adding "unpartitioned space" at the end of the disk;

    ii) give LVM the option to use such unpartitioned space. This can be achieved in two different ways:

    iii/1) enlarging the LVM-type-8e partition, so that it will end at the new end of the disk (this is only possible if the existing partition is the last one on the disk; hence the requirement at point 2 above);

    iii/2) assigning the unpartitioned space to a new primary partition of type 8e. This will be a new LVM Physical Volume that can be used to enlarge the Volume Group (a minimal sketch of this route follows right after this list).
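
For completeness, here is a minimal sketch of the iii/2 route (conceptually the same approach as the VMWare article linked in the question). The device name /dev/sda3 and the VG name vg_raid are just placeholders for this example:

    # after creating a new primary partition of type 8e in the unpartitioned
    # space (with fdisk or parted), hand it over to LVM:
    pvcreate /dev/sda3          # initialize the new partition as a Physical Volume
    vgextend vg_raid /dev/sda3  # add the new PV to the existing Volume Group
    vgdisplay vg_raid           # "Free PE / Size" should now show the added space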

As you seem to be interested in point iii/1, I'll focus on it. So... how to enlarge an existing primary partition?

WARNING!: this is going to be risky! Make sure you have a proper backup of your data and also a proper disaster-recovery plan. If you don't understand the risks you're running, don't proceed!

The answer is quite simple. As a primary partition is referenced in the disk partition table by a START and an END position (sectors, when fdisk is used with -u), you simply need to modify the END. The current START and END values can be retrieved with "fdisk -lu /dev/sda", as in:

[root@nocdump ~]# fdisk -lu /dev/sda

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
[...]
Sector size (logical/physical): 512 bytes / 4096 bytes
[...]
Device Boot         Start         End      Blocks   Id  System
[...]
/dev/sda2         1026048  3904294911  1951634432   8e  Linux LVM

So my /dev/sda2 starts at 1026048 and ends at 3904294911. Now, you can simply DELETE the /dev/sda2 partition and CREATE a new partition, starting at 1026048 and ending at... the new end of the enlarged drive. Don't forget to assign type 8e to such partition and, obviously, don't forget to save the changes. If you don't feel "comfortable" with this process --as it's risky-- your only option is iii/2.
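
Purely as an illustration, the fdisk dialogue for that delete-and-recreate step looks roughly like the following; the start sector is the one from my disk above and WILL differ on yours, so double-check every value before writing anything:

    fdisk -u /dev/sda
    # d  -> delete partition 2
    # n  -> new partition: primary, number 2,
    #       first sector: 1026048 (the SAME start as before),
    #       last sector: accept the default (the new end of the disk)
    # t  -> set the type of partition 2 to 8e (Linux LVM)
    # w  -> write the partition table and quit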

    iv) now that you have an enlarged partition, you have to convince your OS to reload the partition table. In the SF question you mentioned there are plenty of details (a couple of helpful commands follow below); as a worst case scenario, you will need to reboot your server.
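
A couple of commands that usually avoid the reboot, assuming the disk is /dev/sda (if they complain that the device is busy, rebooting remains the safe fallback):

    partprobe /dev/sda                            # ask the kernel to re-read the partition table
    blockdev --rereadpt /dev/sda                  # alternative way to do the same
    echo 1 > /sys/class/block/sda/device/rescan   # on a VM, also rescan the grown SCSI disk size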

    v) now that you have an enlarged partition, and it's correctly recognized by the OS, you have a new problem: you have an LVM Physical Volume that still thinks it is its original, smaller size, and that size is smaller than the underlying type-8e partition (which you've just enlarged). You can check this mismatch yourself with:

      • pvdisplay : shows the original, smaller size;
      • lvmdiskscan -l : shows the new, bigger size.

Once the above numbers are known, you can re-set the PV metadata to the new, bigger size with pvresize and its "--setphysicalvolumesize" parameter, as in:

    pvresize --setphysicalvolumesize 49.51G /dev/sda2

Please note that I strongly suggest using a slightly smaller value in pvresize than the one shown by lvmdiskscan. This is because if you erroneously set the PV size to a value bigger than the physically available space, very bad things can happen (like suspended LVs that are unable to get back on-line!). So if you have a 49.52G physical partition, you should set the physical volume size to 49.51G.
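
If you want an extra safety net before touching the metadata, LVM commands that modify metadata accept a test/dry-run flag; a small sketch, re-using the example value from above:

    pvresize --test --setphysicalvolumesize 49.51G /dev/sda2   # dry run: only reports what would happen
    pvresize --setphysicalvolumesize 49.51G /dev/sda2          # the real resize
    pvdisplay /dev/sda2                                        # verify the new PV size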

    vi) now that you have an enlarged PV, you have free space available in your VG and... such free space can be allocated to your LVs. So...

    vii) you can extend your LV with the lvextend command. In my case, where I decided to allocate 100% of the free space in the VG, I used: lvextend -l +100%FREE /dev/vg_raid/lv_raid (a small sketch follows below).
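
A minimal check-then-extend sequence, using my vg_raid/lv_raid names as placeholders (recent lvm2 versions also offer lvextend -r, which resizes the filesystem in the same step, but here I keep the two steps separate, as in the rest of this answer):

    vgdisplay vg_raid | grep Free                 # confirm the VG now has free extents
    lvextend -l +100%FREE /dev/vg_raid/lv_raid    # grow the LV over all the free space
    lvdisplay /dev/vg_raid/lv_raid                # verify the new LV size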

We're mostly done. Now we have an extended LV that, unfortunately, still has a "smaller" filesystem inside. If you rely on EXT3 or EXT4, chances are high you can use resize2fs. From its man page:

"The resize2fs program will resize ext2, ext3, or ext4 file systems. [...] If the filesystem is mounted, it can be used to expand the size of the mounted filesystem, assuming the kernel supports on-line resizing."

The amount of time resize2fs will take to accomplish the resize depends on multiple factors: the size of the filesystem and the I/O activity running on it are the most significant ones.
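
A minimal sketch of this last step, again with my LV path and a placeholder mount point:

    resize2fs /dev/vg_raid/lv_raid   # with no size argument it grows the fs to fill the LV
    df -h /mnt/raid                  # placeholder mount point: verify the new space is visible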

That's all.

Again: be careful in doing all of the above and, if you suspect even slightly that you're running any kind of risk, please don't proceed! Don't blame me if something goes wrong!

Damiano Verzulli
  • Awesome. Count on google searches to prioritize outdated information (although certainly I should have checked the date). Well that's comforting though. I was wondering why it didn't error. So that certainly addresses the first question. How about pvresize to extend an existing PV versus making a new PV on an additional primary partition and adding it to the VG (per the VMWare article I linked)? Thanks! – SeligkeitIstInGott Jun 01 '15 at 20:27
  • I'm editing my answer, adding the second part. Please wait... :-) – Damiano Verzulli Jun 01 '15 at 20:33
  • Excellent and detailed information on the resize of an existing partition and subsequent LVM steps. I will indeed use it with caution, but with your step by step explanation I feel confident that I now know what the process entails. Thank you for taking the time to detail that. – SeligkeitIstInGott Jun 02 '15 at 20:07