
I have a volume group consisting of two physical volumes of 10TB each, one SSD and one NL-SAS (yes, one very fast and the other very slow), for a total of 20TB.

  1. I created the volume group backupvg with the 10TB SSD and formatted the LV with ext4 (PV0).
  2. While formatting the NL-SAS with ext4 (it held some previous data), I had so many problems with the related infrastructure that I do not want to do it again. Just imagine a company's entire infrastructure being down for 24hrs during the format.
  3. After formatting, I added the 10TB NL-SAS physical volume to the volume group backupvg (PV1).
  4. When I then tried to resize this LV, I hit the 16TB limit of 32-bit ext4, so only resize2fs /dev/backupvg/backuplv01 16777216M worked.
  5. After some research I replicated the same setup on different infrastructure and found that converting to 64-bit would again cause issues like in step 2 (the slow NL-SAS affecting the infrastructure), so I decided not to convert to 64-bit.
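For context, the 16TB ceiling in step 4 is ext4's 32-bit block-address limit: 2^32 blocks of 4KiB each. A quick sanity check of where the 16777216M figure passed to resize2fs comes from:

```shell
# ext4 with 32-bit block numbers and 4 KiB blocks tops out at 2^32 blocks
BLOCK_KIB=4
MAX_BLOCKS=$(( 1 << 32 ))
echo "$(( MAX_BLOCKS * BLOCK_KIB / 1024 ))M"   # the figure passed to resize2fs
```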

Now my only option is to add an additional 6.xxTB SSD (the maximum I can get), transfer the PEs to it from the NL-SAS, remove the NL-SAS from the LV and volume group, and convert the existing filesystem to XFS later (I do not want a repeat of step 2 right now).
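In LVM terms, the plan above would look roughly like the sketch below. The device name LUN_6TB_SSD2 is a placeholder for the new LUN, and this assumes the LV is first shrunk to the filesystem's 16TB so that only ~6TB of extents remain allocated on the NL-SAS PV (verify sizes before running anything):

```shell
# Sketch only -- LUN_6TB_SSD2 is a placeholder name; check sizes first.
# 1. Shrink the LV to match the already-16TiB filesystem, freeing the tail
#    extents on the NL-SAS PV:
lvreduce -L 16T /dev/backupvg/backuplv01
# 2. Bring the new SSD LUN into the VG:
pvcreate /dev/mapper/LUN_6TB_SSD2
vgextend backupvg /dev/mapper/LUN_6TB_SSD2
# 3. Move the remaining allocated extents off the NL-SAS PV (runs online):
pvmove /dev/mapper/NLSAS1 /dev/mapper/LUN_6TB_SSD2
# 4. Retire the NL-SAS PV:
vgreduce backupvg /dev/mapper/NLSAS1
pvremove /dev/mapper/NLSAS1
```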

Now I want to know how much actual data sits on the NL-SAS and how much on the SSD within the LV/VG, since I think that is where the 32-bit limit is causing the issue.

  • The 16TB filesystem is almost 98% full

pvdisplay -m

--- Physical volume ---
  PV Name               /dev/mapper/SSD1
  VG Name               backupvg
  PV Size               <10.00 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2621439
  Free PE               0
  Allocated PE          2621439

  --- Physical Segments ---
  Physical extent 0 to 2621438:
    Logical volume      /dev/backupvg/backuplv01
    Logical extents     0 to 2621438

  --- Physical volume ---
  PV Name               /dev/mapper/NLSAS1
  VG Name               backupvg
  PV Size               <10.00 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2621439
  Free PE               0
  Allocated PE          2621439

  --- Physical Segments ---
  Physical extent 0 to 2621438:
    Logical volume      /dev/backupvg/backuplv01
    Logical extents     2621439 to 5242877

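From the pvdisplay -m segment map above, the SSD PV holds logical extents 0-2621438 and the NL-SAS PV holds 2621439-5242877. Since the filesystem was resized to 16TB (16777216 MiB, i.e. 4194304 extents of 4 MiB), a back-of-envelope calculation shows how much of the filesystem's address range lands on the NL-SAS; note this counts filesystem-addressable extents there, not actual file data:

```shell
# Extents are 4 MiB; the filesystem spans the first 16 TiB of the LV.
PE_MIB=4
FS_EXTENTS=$(( 16777216 / PE_MIB ))   # 16 TiB filesystem expressed in extents
NLSAS_FIRST_LE=2621439                # first logical extent mapped to NLSAS1
FS_ON_NLSAS=$(( FS_EXTENTS - NLSAS_FIRST_LE ))
echo "roughly $(( FS_ON_NLSAS * PE_MIB / 1024 / 1024 )) TiB of the filesystem maps to NL-SAS"
```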
vgdisplay backupvg

  --- Volume group ---
  VG Name               backupvg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <20.00 TiB
  PE Size               4.00 MiB
  Total PE              5242878
  Alloc PE / Size       5242878 / <20.00 TiB
  Free  PE / Size       0 / 0

lvdisplay /dev/backupvg/backuplv01

  --- Logical volume ---
  LV Path                /dev/backupvg/backuplv01
  LV Name                backuplv01
  VG Name                backupvg
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                <20.00 TiB
  Current LE             5242878
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:57

Kindly suggest how to break the 16TB limit without formatting/converting the NL-SAS filesystem to 64-bit, and how to check which PV holds 6TB of data and which is fully used at 10TB. If the NL-SAS PV holds only 6TB of data and 4TB of it is unused, then I can attach a 6TB SSD LUN; how would I then proceed with moving the 6TB of data from the NL-SAS PV to the 6TB SSD?
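On the "which PV holds how much" part: LVM itself only tracks extent allocation, not file data, but the per-PV allocation can at least be listed read-only. A sketch (pv_used reports space allocated to LVs, so with 0 free PE on both PVs it will show them fully allocated here):

```shell
# Read-only report of per-PV allocation in backupvg
pvs -S vg_name=backupvg -o pv_name,pv_size,pv_free,pv_used --units t
```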

Update, as requested:

pvdisplay

 --- Physical volume ---
  PV Name               /dev/mapper/LUN_10TB_SSD1
  VG Name               backupvg
  PV Size               <10.00 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2621439
  Free PE               0
  Allocated PE          2621439
  PV UUID               OjUFfu-*removed*

  --- Physical volume ---
  PV Name               /dev/mapper/LUN_10TB_NLSAS1
  VG Name               backupvg
  PV Size               <10.00 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2621439
  Free PE               0
  Allocated PE          2621439
  PV UUID               57YeFo-*removed*

vgdisplay

  --- Volume group ---
  VG Name               backupvg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <20.00 TiB
  PE Size               4.00 MiB
  Total PE              5242878
  Alloc PE / Size       5242878 / <20.00 TiB
  Free  PE / Size       0 / 0
  VG UUID               0m9T8d-bSa7-*removed*

lvdisplay

--- Logical volume ---
  LV Path                /dev/backupvg/backuplv01
  LV Name                backuplv01
  VG Name                backupvg
  LV UUID                DT0rXQ-*removed*
  LV Write Access        read/write
  LV Creation host, time hostnameremoved, 2020-02-01 22:02:51 +0400
  LV Status              available
  # open                 1
  LV Size                <20.00 TiB
  Current LE             5242878
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:57
  • A few questions first. 1. Who sells a 10TB SSD? I know of 9.2TB ones and larger but I've never come across a 10TB one and I'm interested. 2. What are you trying to achieve with all this? Are you just after a 20TB concatenated mixed-format volume? If so then this site is NOT the place for you, we're a site for professional designers and sysadmins, we build things that are designed for availability first and foremost and this idea, if it's what you're trying to do, will be deeply fragile, it's when not if you lose your data... – Chopper3 Apr 07 '20 at 16:31
  • 3. Why was your company's IT down for a day to format a disk? Again this feels like almost any answer you give will be one that would be frowned on by all or most IT pros who use this site. – Chopper3 Apr 07 '20 at 16:31
  • @Chopper3 1) It is a SAN LUN. 2) I just want one mount point for backup, and due to 32-bit ext4 I am stuck; all this is due to that only. 3) The NL-SAS LUN was shared with a Hyper-V environment, and when I tried to format the NL-SAS the command got stuck for almost 18hrs, during which the other infrastructure hosting VMs had issues. They restarted, Hyper-V was juggling everything across the HA nodes... it was a nightmare. As soon as the command returned, Hyper-V stabilized. Believe it or not, we have the RCA for it. – user1486241 Apr 08 '20 at 01:04
  • I just want to know which physical disk (SSD or NL-SAS) is being used by LVM and how much of it is used, so that I can attach an additional LUN, transfer the PEs, and remove the NL-SAS LUN from the volume group. – user1486241 Apr 08 '20 at 01:07
  • Ok, the 10TB SSD thing makes sense, the NL-SAS bit sounds like quite a mistake, and if you want to know that LVM stuff then please show us the outputs of pvdisplay, vgdisplay and lvdisplay please. – Chopper3 Apr 08 '20 at 09:26
  • can you just do the vgdisplay and lvdisplay without arguments please – Chopper3 Apr 08 '20 at 15:41
  • As requested, I updated the question. – user1486241 Apr 09 '20 at 06:01
