
From here, how can I resize the space in the /usr, /var and /home partitions without losing data, if that is possible, and in normal mode (not recovery mode), since this is a remote server? I have seen other posts (How to resize RAID1 array with mdadm?, Linux: create software RAID 1 from partition with data) and docs, but I am not sure about the process. Thanks.

rdw@u18702824:~$ df -h
  Filesystem             Size  Used Avail Use% Mounted on
  udev                   7.8G  4.0K  7.8G   1% /dev
  tmpfs                  1.6G  1.4M  1.6G   1% /run
  /dev/md1               4.0G  3.4G  549M  87% /
  none                   4.0K     0  4.0K   0% /sys/fs/cgroup
  none                   5.0M     0  5.0M   0% /run/lock
  none                   7.8G  8.0K  7.8G   1% /run/shm
  none                   100M   28K  100M   1% /run/user
  /dev/mapper/vg00-usr   4.8G  4.8G     0 100% /usr
  /dev/mapper/vg00-var   4.8G  2.5G  2.1G  55% /var
  /dev/mapper/vg00-home  4.8G  2.9G  1.8G  62% /home

======================================================

rdw@u18702824:~$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Mon Feb  6 14:19:22 2017
     Raid Level : raid1
     Array Size : 4194240 (4.00 GiB 4.29 GB)
  Used Dev Size : 4194240 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Feb 23 12:10:34 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 3562dace:6f38a4cf:1f51fb89:78ee93fe
         Events : 0.72

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

======================================================

rdw@u18702824:~$ sudo mdadm --detail /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Mon Feb  6 14:19:23 2017
     Raid Level : raid1
     Array Size : 1458846016 (1391.26 GiB 1493.86 GB)
  Used Dev Size : 1458846016 (1391.26 GiB 1493.86 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Thu Feb 23 12:10:46 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 52d90469:78a9a458:1f51fb89:78ee93fe
         Events : 0.1464

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3

======================================================

rdw@u18702824:~$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg00/usr
  LV Name                usr
  VG Name                vg00
  LV UUID                dwihnp-aXSl-rCly-MvlH-FoxI-hDrv-mDnVNJ
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                I5eIwR-dunS-3ua2-IrSw-3C30-cxOS-zLj3a4
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                4tYJyU-wlnF-qERG-95Wt-2rR4-Gyfs-NofCZd
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

======================================================  

rdw@u18702824:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.36 TiB
  PE Size               4.00 MiB
  Total PE              356163
  Alloc PE / Size       3840 / 15.00 GiB
  Free  PE / Size       352323 / 1.34 TiB
  VG UUID               av08Kn-EzMV-2mie-HE97-cHcr-oL1x-qmYMz6

======================================================

rdw@u18702824:~$ sudo lvscan
  ACTIVE            '/dev/vg00/usr' [5.00 GiB] inherit
  ACTIVE            '/dev/vg00/var' [5.00 GiB] inherit
  ACTIVE            '/dev/vg00/home' [5.00 GiB] inherit

rdw@u18702824:~$ sudo lvmdiskscan
  /dev/ram0      [      64.00 MiB]
  /dev/vg00/usr  [       5.00 GiB]
  /dev/ram1      [      64.00 MiB]
  /dev/md1       [       4.00 GiB]
  /dev/vg00/var  [       5.00 GiB]
  /dev/ram2      [      64.00 MiB]
  /dev/sda2      [       2.00 GiB]
  /dev/vg00/home [       5.00 GiB]
  /dev/ram3      [      64.00 MiB]
  /dev/md3       [       1.36 TiB] LVM physical volume
  /dev/ram4      [      64.00 MiB]
  /dev/ram5      [      64.00 MiB]
  /dev/ram6      [      64.00 MiB]
  /dev/ram7      [      64.00 MiB]
  /dev/ram8      [      64.00 MiB]
  /dev/ram9      [      64.00 MiB]
  /dev/ram10     [      64.00 MiB]
  /dev/ram11     [      64.00 MiB]
  /dev/ram12     [      64.00 MiB]
  /dev/ram13     [      64.00 MiB]
  /dev/ram14     [      64.00 MiB]
  /dev/ram15     [      64.00 MiB]
  /dev/sdb2      [       2.00 GiB]
  3 disks
  19 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

======================================================

rdw@u18702824:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb1[1] sda1[0]
      4194240 blocks [2/2] [UU]

md3 : active raid1 sdb3[1] sda3[0]
      1458846016 blocks [2/2] [UU]

unused devices: <none>
Héctor
  • I’d suggest `lsblk -o +FSTYPE` to show most of the relevant information in a concise form. You should also specify the type of filesystem you’re using: most filesystems need to be unmounted before you can shrink them, some can’t be shrunk at all, and some can’t even be extended. – user2233709 Mar 15 '17 at 22:04

1 Answer


So your volume group has over 1 TiB of unallocated space available:

rdw@u18702824:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg00
  ...
  VG Size               1.36 TiB
  PE Size               4.00 MiB
  Total PE              356163
  Alloc PE / Size       3840 / 15.00 GiB
  Free  PE / Size       352323 / 1.34 TiB
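
That free figure follows directly from the extent counts shown above:

  352323 free PE × 4 MiB/PE = 1409292 MiB ≈ 1376 GiB ≈ 1.34 TiB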

If you wanted to add another 10 GB to /usr, bringing it up to a total of 15 GB, you would use commands like these (assuming your filesystems are ext2, ext3 or ext4):

sudo lvextend -L +10G /dev/vg00/usr
sudo resize2fs /dev/vg00/usr
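
Alternatively, lvextend's -r (--resizefs) option extends the LV and grows the filesystem in one step, by calling fsadm, which in turn runs resize2fs for ext2/3/4. Since growing ext3/ext4 works while the filesystem is mounted, this can be done in normal mode on a remote server. A sketch for all three volumes (the +10G sizes here are just examples, adjust to taste):

# -r (--resizefs) grows the filesystem right after extending the LV,
# so each mounted ext3/ext4 filesystem is resized online in one step
sudo lvextend -r -L +10G /dev/vg00/usr
sudo lvextend -r -L +10G /dev/vg00/var
sudo lvextend -r -L +10G /dev/vg00/home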
Zoredache