I set up an Ubuntu guest on a CentOS KVM host with an initial 6 GB of disk space. How do I go about increasing the Ubuntu guest's disk space from the command line?
EDIT #1: I'm using a disk image file (qemu).
qemu-img resize vmdisk.img +10G
to increase the image size by 10 GB.
For better or worse, the commands below will run even if the target virtual disk is mounted. This can be useful in environments where the disk cannot be unmounted (such as a root partition), the VM must stay on, and the system owner is willing to assume the risk of data corruption. To remove that risk, you would need to log into the VM and unmount the target disk first, something that isn't always possible.
Perform the following from the KVM hypervisor.
Increase the size of the disk image file itself (specify the amount to increase):
qemu-img resize <my_vm>.img +10G
Get the name of the virtio device, via the libvirt shell (drive-virtio-disk0 in this example):
virsh qemu-monitor-command <my_vm> info block --hmp
drive-virtio-disk0: removable=0 io-status=ok file=/var/lib/libvirt/images/<my_vm>.img ro=0 drv=raw encrypted=0
drive-ide0-1-0: removable=1 locked=0 tray-open=0 io-status=ok [not inserted]
Signal the virtio driver to detect the new size (specify the total new capacity):
virsh qemu-monitor-command <my_vm> block_resize drive-virtio-disk0 20G --hmp
Then log into the VM. Running dmesg should report that the virtio disk detected a capacity change. At this point, go ahead and resize your partitions and LVM structure as needed.
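What that looks like inside the guest depends on your layout. As a rough sketch only, assuming the root filesystem is ext4 on an LVM logical volume named /dev/VolGroup00/LogVol00 sitting on the second partition of /dev/vda (all of these names are assumptions, adjust for your VM):
# inside the guest, after dmesg shows the capacity change
# (growpart comes from cloud-guest-utils / cloud-utils-growpart)
sudo growpart /dev/vda 2                                # grow the partition that holds the PV
sudo pvresize /dev/vda2                                 # grow the LVM physical volume
sudo lvextend -l +100%FREE /dev/VolGroup00/LogVol00     # grow the root logical volume
sudo resize2fs /dev/VolGroup00/LogVol00                 # grow the ext4 filesystem online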
These serverfault questions are similar but more specific: KVM online disk resize? & Centos Xen resizing DomU partition and volume group. The first asks how to increase a KVM guest's disk while it's online, while the second is Xen-specific and uses LVM. I'm asking how to accomplish this while the KVM guest is offline.
NOTE: This link was useful for METHOD #1, and shows how to accomplish increasing a KVM's disk space (ext3 based), HOWTO: Resize a KVM Virtual Machine Image.
One thing to be aware of with KVM guests is that the partitioning scheme they use internally can affect which method you can use to increase their disk space.
METHOD #1: Partitions are ext2/ext3/ext4 based
The nuts and bolts of this method are as follows:
# 1. stop the VM
# 2. move the current image
mv mykvm.img mykvm.img.bak
# 3. create a new image
qemu-img create -f raw addon.raw 30G
# 4. concatenate the 2 images
cat mykvm.img.bak addon.raw >> mykvm.img
Now with the larger mykvm.img file in hand, boot gparted and extend the existing partition into the newly added disk space. This last step basically extends the OS partition so that it can make use of the extra space.
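If you would rather not boot a gparted ISO, the same extension can be done from the host command line by loop-mounting the image. This is only a sketch: it assumes a raw image, an ext3/ext4 OS partition that is both partition 1 and the last partition on the disk, and that /dev/loop0 is the device losetup hands back (all assumptions, adjust to your layout):
# attach the image with partition scanning; prints the loop device it used
sudo losetup -fP --show mykvm.img
# grow partition 1 to the end of the now-larger disk
sudo parted /dev/loop0 resizepart 1 100%
# check and grow the filesystem, then detach the loop device
sudo e2fsck -f /dev/loop0p1
sudo resize2fs /dev/loop0p1
sudo losetup -d /dev/loop0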
METHOD #2: Partitions are LVM based
Here are the steps that I roughly followed to resize a KVM guest that used LVM internally.
Run fdisk inside the VM and delete & re-create the LVM partition
% fdisk /dev/vda
...
Device Boot Start End Blocks Id System
/dev/vda1 * 1 13 104391 83 Linux
/dev/vda2 14 3263 26105625 8e Linux LVM
Command (m for help): d
Partition number (1-4): 2
Command (m for help): p
Disk /dev/vda: 48.3 GB, 48318382080 bytes
255 heads, 63 sectors/track, 5874 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/vda1 * 1 13 104391 83 Linux
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (14-5874, default 14): 14
Last cylinder or +size or +sizeM or +sizeK (14-5874, default 5874):
Using default value 5874
Command (m for help): p
Disk /dev/vda: 48.3 GB, 48318382080 bytes
255 heads, 63 sectors/track, 5874 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/vda1 * 1 13 104391 83 Linux
/dev/vda2 14 5874 47078482+ 83 Linux
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/vda: 48.3 GB, 48318382080 bytes
255 heads, 63 sectors/track, 5874 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/vda1 * 1 13 104391 83 Linux
/dev/vda2 14 5874 47078482+ 8e Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or
resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
%
Reboot the VM
Resize the LVM physical volume
% pvdisplay
--- Physical volume ---
PV Name /dev/vda2
VG Name VolGroup00
PV Size 24.90 GB / not usable 21.59 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 796
Free PE 0
...
% pvresize /dev/vda2
% pvdisplay
--- Physical volume ---
PV Name /dev/vda2
VG Name VolGroup00
PV Size 44.90 GB / not usable 22.89 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 1436
Free PE 640
...
Resize the LVM Logical Volume
% lvresize /dev/VolGroup00/LogVol00 -l +640
Extending logical volume LogVol00 to 43.88 GB
Logical volume LogVol00 successfully resized
Grow the File system
% resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 11501568 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 11501568 blocks long.
The above is my example, but I followed the steps on this website
Resize and Expand Internal Partitions in One Step
I had an Ubuntu host with a qcow2 guest file image and wanted to resize the disk and expand the appropriate partitions all in one step. It requires you to set up the libguestfs guest filesystem utilities, but those are useful to have around anyway.
Inspiration from here: http://libguestfs.org/virt-resize.1.html
The key command here is: virt-resize
Preparation:
* Install the libguestfs tools package
* sudo apt-get install libguestfs-tools
* Test to see if it works (it won't) -- you need to see "===== TEST FINISHED OK =====" at the bottom:
* sudo libguestfs-test-tool
* If you don't see "===== TEST FINISHED OK =====" at the bottom then repair it:
* sudo update-guestfs-appliance
* Run the test again and verify it works
* sudo libguestfs-test-tool
Now do the following:
1) shutdown the guest:
2) Check the current sizing and find the name of the partition you want to expand using the virt-filesystems utility:
sudo virt-filesystems --long --parts --blkdevs -h -a name-of-guest-disk-file
3) Create the new (40G) output disk:
qcow: sudo qemu-img create -f qcow2 -o preallocation=metadata outdisk 40G
img: sudo truncate -s 40G outdisk
4) Copy the old disk to the new one while expanding the appropriate partition (assuming the partition you found in step 2 was /dev/sda1):
sudo virt-resize --expand /dev/sda1 indisk outdisk
5) Rename the indisk file as a backup and rename the outdisk as indisk (or modify the guest XML to point at the new file); a sketch of this step follows after the list
6) Reboot the guest and test the new disk file carefully before deleting the original file
7) Profit!
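A minimal sketch of step 5; indisk/outdisk are the file names from the steps above, and the guest name myguest is an assumption:
# keep the original as a backup and drop the resized copy into its place
sudo mv indisk indisk.bak
sudo mv outdisk indisk
# or keep the new file name and point the guest at it instead:
# sudo virsh edit myguest     # update the <source file='...'/> path for the disk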
It is possible to do an online resize without stopping the VM; libvirtd supports this natively.
Find the block device name. It should be something like "vda":
$ virsh domblklist <libvirtd_vm_name>
Resize the virtual device:
$ virsh blockresize --domain <libvirtd_vm_name> --path <block_device_name> --size <new_size>
Here is an example where I expand the vda disk from 50GB to 51GB for the undercloud VM.
[root@localhost ~]# virsh domblklist undercloud
Target Source
------------------------------------------------
vda /home/images/undercloud.qcow2
Now take a look at the .qcow2 image file's details:
[root@localhost ~]# qemu-img info /home/images/undercloud.qcow2
image: /home/images/undercloud.qcow2
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 38G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
Now let's resize the vda block device:
[root@localhost ~]# virsh blockresize undercloud vda 51G
Block device 'vda' is resized
And confirm:
[root@localhost ~]# qemu-img info /home/images/undercloud.qcow2
image: /home/images/undercloud.qcow2
file format: qcow2
virtual size: 51G (54760833024 bytes)
disk size: 38G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
[root@localhost ~]#
Then you can use this script inside the VM to show the commands to resize the block devices and fs: https://github.com/mircea-vutcovici/scripts/blob/master/vol_resize.sh. Here is a sample output:
mvutcovi@ubuntu1904:~$ wget -q https://raw.githubusercontent.com/mircea-vutcovici/scripts/master/vol_resize.sh
mvutcovi@ubuntu1904:~$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 15414216 7928904 6682600 55% /
mvutcovi@ubuntu1904:~$ sudo bash vol_resize.sh --block-device /dev/vda1
sfdisk -d /dev/vda > dev_vda-partition-table-$(date +%F_%H%M%S).txt # Backup MS-DOS partition table for /dev/vda block device.
parted -s /dev/vda resizepart 1 # Resize MS-DOS partition /dev/vda1
# Update kernel with new partition table from disk
partx -u /dev/vda
partprobe /dev/vda
blockdev --rereadpt /dev/vda
kpartx -u /dev/vda
resize2fs /dev/vda1 # Resize ext3 or ext4 filesystem
mvutcovi@ubuntu1904:~$
If you are using LVM within the VM, the simplest way to do this would be to add a new virtual disk to the VM and expand the volume group and logical volumes onto that.
To check whether you are using LVM, run sudo pvs; sudo vgs; sudo lvs. If the VM's OS is using LVM, you will get something like this:
PV VG Fmt Attr PSize PFree
/dev/vda1 vgWWW lvm2 a- 30.00g 0
VG #PV #LV #SN Attr VSize VFree
vgWWW 1 2 0 wz--n- 30.00g 0
LV VG Attr LSize
root vgWWW -wi-ao 28.80g
swap vgWWW -wi-ao 1.19g
In the above example the VM has a 30 GB vdisk, configured using LVM, with one volume group called vgWWW containing two logical volumes: one for swap and one for everything else.
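The commands below assume a second virtual disk has already been attached to the guest and shows up as /dev/vdb. One way to create and hot-attach such a disk from the KVM host is sketched here; the guest name myvm, the image path, format and size are all assumptions:
# on the host: create an extra image and hot-attach it to the guest as vdb
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/myvm-extra.qcow2 10G
sudo virsh attach-disk myvm /var/lib/libvirt/images/myvm-extra.qcow2 vdb --driver qemu --subdriver qcow2 --persistent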
If LV is in use in the VM:
sudo pvcreate /dev/vdb
sudo vgextend vgWWW /dev/vdb
sudo lvextend --extents +100%FREE /dev/vgWWW/root
(or something like sudo lvextend --size +8G /dev/vgWWW/root if you don't want to grow it all the way; that example would add 8 GB to the volume)
sudo resize2fs /dev/vgWWW/root
Note: the above assumes the VG/LV names are the same as in my example, which is unlikely; change them as appropriate. Also, if the VM already had a virtual drive called vdb, the new one will be something else (vdc, and so on).
Note: resize2fs will only work on ext2, ext3 and ext4 filesystems. If you are using something else it will error out and do nothing.
Note: as you are resizing a live filesystem, resize2fs won't prompt you to run fsck first as it would for an unmounted filesystem; it will just go ahead. You might want to run a read-only filesystem check to confirm there are no issues before proceeding.
This way you can expand the partition you want:
# see what partitions you have?
virt-filesystems --long -h --all -a olddisk
truncate -r olddisk newdisk
truncate -s +5G newdisk
# Note "/dev/sda2" is a partition inside the "olddisk" file.
virt-resize --expand /dev/sda2 olddisk newdisk
See more examples here.
There is a possibility to increase your VM's storage without rebooting it if you're using virtio disks and LVM: hot-add a second virtio disk to the VM (it shows up as /dev/vdb in the steps below) and fold it into the volume group.
(Optional) Create a primary partition on it with fdisk to get /dev/vdb1, then use kpartx -a /dev/vdb to re-read the partition table
Use vgextend vg_name /dev/vdb1 (or /dev/vdb if you did not create a partition)
You're done.
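Collected as commands, the in-guest part of this looks roughly like the following; the new disk is assumed to appear as /dev/vdb, and vg_name/root are placeholder volume group and logical volume names:
# inside the guest, after the new virtio disk has been hot-added as /dev/vdb
sudo fdisk /dev/vdb                        # optional: create one primary partition -> /dev/vdb1
sudo kpartx -a /dev/vdb                    # optional: re-read the partition table
sudo pvcreate /dev/vdb1                    # or /dev/vdb if you skipped partitioning
sudo vgextend vg_name /dev/vdb1            # or /dev/vdb
sudo lvextend -r -l +100%FREE /dev/vg_name/root   # grow the LV and its filesystem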
Another way to do it:
truncate -s +2G vm1.img
Then go into the VM, do a disk rescan, and after that you can do your LVM resize.
All credit goes to Josphat Mutai. Note that the sizes, disk numbers and names mentioned below are different from the question's, and yours will be too.
Ubuntu / Debian
$ sudo apt -y install cloud-guest-utils gdisk
CentOS / RHEL / Fedora
$ sudo yum -y install cloud-utils-growpart gdisk
Check whether you are using LVM (an indicator for LVM is an entry like /dev/mapper/xyz):
cat /etc/fstab
For non-LVM, device vda, rootfs at partition 1:
$ sudo growpart /dev/vda 1
# Uncomment and run one of these (ext or xfs)
#sudo resize2fs /dev/vda1
#sudo xfs_growfs /
For LVM, device vda, partition 2, physical volume vda2, logical volume /dev/mapper/rhel-root:
$ sudo growpart /dev/vda 2
$ sudo pvresize /dev/vda2
$ sudo lvextend -r -l +100%FREE /dev/mapper/rhel-root
For the longer versions of each, see below!
Find the partition number using lsblk (in this example, "1" from vda1):
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 253:0 0 10G 0 disk
└─vda1 253:1 0 10G 0 part /
Grow partition '1' of 'vda' using growpart
$ sudo growpart /dev/vda 1
CHANGED: partition=1 start=2048 old: size=20969472 end=20971520 new: size=62912479,end=62914527
To grow the file system, first check whether it is ext2/3/4 or XFS using df:
$ df -hT | grep /dev/vda
/dev/vda1 ext4 30G 1.2G 7G 5% /
For an ext4 file system, use resize2fs:
$ sudo resize2fs /dev/vda1
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vda1 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 4
The filesystem on /dev/vda1 is now 7864059 blocks long.
For XFS, the file system can be grown while mounted using xfs_growfs:
$ sudo xfs_growfs /
Find partition number:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 40G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 29G 0 part
├─rhel-root 253:0 0 26.9G 0 lvm /
└─rhel-swap 253:1 0 2.1G 0 lvm [SWAP]
Grow partition '2' of 'vda' using growpart:
$ sudo growpart /dev/vda 2
CHANGED: partition=2 start=2099200 old: size=18872320 end=20971520 new: size=60815327,end=62914527
Resize the physical volume with pvresize:
$ sudo pvresize /dev/vda2
Physical volume "/dev/vda2" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
Find the name of the volume group using df (yours is likely different from the one below) and resize the logical volume used by the root file system into the newly extended volume group with lvextend:
$ df -hT | grep mapper
/dev/mapper/rhel-root xfs 27G 1.9G 26G 8% /
$ sudo lvextend -r -l +100%FREE /dev/mapper/rhel-root
Size of logical volume rhel/root changed from <26.93 GiB (6893 extents) to <36.93 GiB (9453 extents).
Logical volume rhel/root successfully resized.
If you didn't use the -r option in the previous step, the file system will still show the old size. To make the file system report the actual, extended size, do the following.
For an ext4 file system, use resize2fs:
$ sudo resize2fs /dev/name-of-volume-group/root
For an XFS file system, use xfs_growfs:
$ sudo xfs_growfs /
meta-data=/dev/mapper/rhel-root isize=512 agcount=4, agsize=1764608 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=7058432, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=3446, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 7058432 to 9679872
If you have LVM in your VM then this is crazy easy and fast: open the LVM GUI (run sudo system-config-lvm in a terminal) and grow the logical volume from there. I found the GUI quite intuitive, but follow the next steps if you have problems.
Note! At least on CentOS 6 the LVM GUI is not installed by default, but you can install it with yum install system-config-lvm.
Resize image:
qemu-img resize vmdisk.img +16G
increases image size by 16 GB.
If your image has a GPT (GUID Partition Table) then the drive size recorded in the GPT will differ from the new size, and you need to fix it with gdisk:
MY_DRIVE=/dev/vda
gdisk $MY_DRIVE <<EOF
w
Y
Y
EOF
or with parted:
parted $MY_DRIVE print Fix
For some reason the parted fix does not work when no tty is present (for example when provisioning with Vagrant), so I use gdisk.
Increase partition size to fill all available space:
MY_PARTITION_GUID=$(
gdisk $MY_DRIVE <<EOF | sed -n -e 's/^Partition unique GUID: //p'
i
EOF
)
MY_PARTITION_FIRST_SECTOR=$(
gdisk $MY_DRIVE <<EOF | sed -n -e 's/^First sector: \([0-9]\+\).*/\1/p'
i
EOF
)
gdisk $MY_DRIVE <<EOF
d
n
$MY_PARTITION_FIRST_SECTOR
x
a
2
c
$MY_PARTITION_GUID
w
Y
EOF
The x a 2 <Enter> part is optional and only needed if you are using legacy BIOS.
The MY_PARTITION_GUID=... and c $MY_PARTITION_GUID parts are also optional and needed only if you use the partition UUID in /etc/fstab or somewhere else.
Reboot, or re-read the partitions with partx -u $MY_DRIVE or partprobe.
Grow the filesystem (example for ext2, ext3 or ext4):
MY_PARTITION="${MY_DRIVE}1"
resize2fs $MY_PARTITION
GPT Disk and Non-Linux:
virsh shutdown myVM
qemu-img info mydisk.img
sudo qemu-img resize mydisk.img 50G
sudo sgdisk -e mydisk.img
The new size of myVM's mydisk.img will be 50G, and the backup GPT partition table will be moved to the last sectors of the disk (that is what sgdisk -e does). Without this, most operating systems do not recognize the disk as resized; Linux usually does, because gparted automatically fixes it most of the time.
Note: gparted gets confused if you clone your disk to a bigger disk with dd without clearing the bigger disk's backup GPT first (whether the disk is virtual or bare metal doesn't matter).
You can use SolusVM with a GParted ISO mounted. Once you have adjusted the space with GParted, you can easily boot the system again. Make sure you have the correct boot priority set. As a reference, the URL below could come in handy: https://greencloudvps.com/knowledgebase/11/How-to-Extend-the-Hard-drive-on-KVM-after-upgrading-VPS.html