25

I have seen this answer for growing EBS volumes, but I would like to shrink one.

The default Ubuntu Server images are 15 GB while I really need only 2 GB max (I use a different volume for data). Is there a way to shrink the size of the volume?

Peter Smit

3 Answers

27

I had the same question as you, so I worked out how to do it.

First, I did this with the Ubuntu 32-bit EBS-backed AMI from the US-East region; other OSes or images may work differently. However, I suspect you should be fine as long as you are using an ext* filesystem. It might work with other filesystems, but you'll have to figure out how to resize those on your own.

The steps are basically:

  1. Attach two volumes to a running instance, the first based on the snapshot you want to shrink, and the second a blank volume having the new size you want to shrink to.

  2. Check the file system of the first volume and repair any errors.

  3. Shrink the file system on the first volume so it is only as big as it needs to be to hold the data.

  4. Copy the file system from the first volume to the second.

  5. Expand the file system on the second volume to its maximum size.

  6. Make sure everything looks good by checking the second volume for errors.

  7. Take a snapshot of the second volume.

  8. Create a machine image based on the snapshot of the second volume you just took.

You first need to get some information from the AMI you want to shrink. In particular, you need the kernel ID and the ramdisk ID, if any (the image I shrank didn't have a ramdisk). All of this information should be available in the AWS Management Console, in the AMIs window.

The kernel ID looks like aki-xxxxxxxx, the snapshot ID looks like snap-xxxxxxxx, and ramdisk IDs look like ari-xxxxxxxx.

Next, launch a Linux instance. I launched an Ubuntu instance. You can use a t1.micro if you like; these next steps don't take much power.

After the machine is running, create a volume from the snapshot you wrote down in the first step and attach it. In my case, I attached it as /dev/sdf.

Then, create a new volume having the size you want. In my case, I created a 5GB volume, as that's the size I wanted to shrink to. Don't create this new volume from a snapshot; we need a blank one. Next, attach it to the running instance. In my case, I attached it as /dev/sdg.

Next, ssh into the machine but don't mount the attached volumes.

At this point, I erred on the side of paranoia, and I opted to check the file system on the large volume, just to make sure there were no errors. If you are confident that there are none, you can skip this step:

$ sudo e2fsck -f /dev/sdf

Next, I resized the file system on the large volume so that it was only as big as the data on the disk:

$ sudo resize2fs -M -p /dev/sdf

The -M shrinks it, and the -p prints the progress.

resize2fs should tell you how large the shrunken filesystem is. In my case, it gave me the size in 4K blocks.

We now copy the shrunken file system to the new disk. We're going to copy the data in 16MB chunks, so we need to figure out how many 16MB chunks to copy. This is where the shrunken file system size comes in handy.
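For example, if resize2fs reported a size of 262144 4K blocks (a made-up figure; substitute the number from your own run), the chunk count works out like this:

```shell
# Hypothetical size reported by resize2fs, in 4K blocks:
BLOCKS=262144
FS_BYTES=$((BLOCKS * 4096))                  # 1073741824 bytes = 1 GiB
CHUNK=$((16 * 1024 * 1024))                  # 16 MB dd chunk size
COUNT=$(( (FS_BYTES + CHUNK - 1) / CHUNK ))  # round up to whole chunks
echo "$COUNT"                                # -> 64
```

Rounding up means the last chunk may copy a little past the end of the file system, which is harmless here since the target volume is larger than the shrunken file system.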

In my case, the shrunk file system was just over 1 GB, because I had installed a lot of other programs on the basic Ubuntu system before taking a snapshot. I probably could have gotten away with just copying the size of the file system rounded up to the nearest 16MB, but I wanted to play it safe.

So, 128 times 16MB chunks = 2GB:

$ sudo dd if=/dev/sdf ibs=16M of=/dev/sdg obs=16M count=128

I copied in blocks of 16MB because with EBS you pay for each read and write, so I wanted to minimize the number of them as much as possible. I don't know whether doing it this way actually did, but it probably didn't hurt.
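The same chunked-copy pattern can be tried safely on ordinary files first. This throwaway example (file names under /tmp are arbitrary) copies a 32 MB file in two 16 MB chunks and verifies the result:

```shell
# Create a fake 32 MB "volume" from /dev/zero:
dd if=/dev/zero of=/tmp/src.img bs=1M count=32 2>/dev/null
# Copy it in two 16 MB chunks, mirroring the dd invocation above:
dd if=/tmp/src.img of=/tmp/dst.img bs=16M count=2 2>/dev/null
# Confirm the copy is byte-for-byte identical:
cmp -s /tmp/src.img /tmp/dst.img && echo "copies match"
```

For regular files and block devices, `bs=16M` behaves the same as the separate `ibs=16M obs=16M` used above.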

We then need to resize the file system we just copied to the new volume so that it uses all the available space on the volume.

$ sudo resize2fs -p /dev/sdg

Finally, check it, to make sure everything is well:

$ sudo e2fsck -f /dev/sdg

That's all we need to do on this machine, though it wouldn't hurt to mount the new volume as a test. This step is almost certainly optional, as e2fsck should have caught any problems.

We now need to snapshot the new volume, and create an AMI based on it. We're done with the machine, so you can terminate it if you like.

Make sure the small volume is unmounted if you mounted it, and then take a snapshot of it. Again, you can do this in the management console.

The final step requires the commandline ec2 tools.

EDIT:

Since this answer was posted the AWS console allows you to simply right click a snapshot and select Create Image from Snapshot. You will still need to select the appropriate Kernel ID. If it does not appear on the list make sure you've selected the appropriate architecture.

We use the ec2-register application to register an AMI based on the snapshot you just took, so write down the snap-xxxxxxxx value from the snapshot you just took.

You should then use a command like:

ec2-register -C cert.pem -K sk.pem -n The_Name_of_Your_New_Image \
  -d Your_Description_of_This_New_AMI --kernel aki-xxxxxxxx \
  -b "/dev/sda1=snap-xxxxxxxx" --root-device-name /dev/sda1

You of course need to replace the kernel ID with the one you wrote down at the beginning, and the snapshot ID with the one you created in the previous step. You also need to point it at your secret key (called sk.pem above) and your X.509 certificate (called cert.pem). You can choose whatever you like for the name and description.
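These days the same registration can be done with the unified AWS CLI instead of the old ec2-api-tools. A hedged sketch, in which the name, description, kernel ID, and snapshot ID are all placeholders you must substitute:

```shell
# Sketch only: replace the placeholder IDs, name, and description
# with your own values before running.
aws ec2 register-image \
    --name "The_Name_of_Your_New_Image" \
    --description "Your_Description_of_This_New_AMI" \
    --kernel-id aki-xxxxxxxx \
    --root-device-name /dev/sda1 \
    --architecture i386 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-xxxxxxxx}"
```

The AWS CLI reads credentials from its own configuration, so the -C/-K certificate flags of ec2-register have no equivalent here.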

Hope this helps.

Spencer Ruport
Aaron
  • Thanks, that helped! For large volumes (like 1TB) this procedure takes a long while on micro instance. I've seen no-fsck, rsync-based volume copying (e.g. here http://ubuntuforums.org/showpost.php?p=9866025&postcount=27), but dd-based approach feels much more reliable, even for non-root volumes. – chronos Feb 25 '11 at 19:56
  • The first command `sudo e2fsck -f /dev/sdf` might be a required step before doing the resize (was on my particular instance, an Amazon Linux AMI). – notacouch Mar 06 '14 at 17:08
  • Should be obvious but don't forget to make a file system on the volume (/facepalm) as per AWS docs, `sudo mkfs -t ext4 /dev/sdg`. – notacouch Mar 07 '14 at 00:30
1

Yeah, I've wondered this too. The following tutorial is overkill, but I think it contains the necessary tools: http://www.linuxconfig.org/Howto_CREATE_BUNDLE_UPLOAD_and_ACCESS_custom_Debian_AMI_using_ubuntu

Instead of installing onto a new disk image as above, it should be possible to fire up the large AMI, create a new EBS volume, attach it to the running instance, and copy the running system over to the new volume. Finally, register the new volume as an AMI.

Take a look at this blog-post for some more background, especially the comment by freremark: http://alestic.com/2010/01/public-ebs-boot-amis-for-ubuntu-on-amazon-ec2

On a final note, the euca2ools seem like a great replacement for the ec2-ami-tools; euca2ools include actual manpages! They have the same names as the ec2-* commands, just with a euca- prefix. http://open.eucalyptus.com/wiki/Euca2oolsUsing

0

I wanted to reduce the size of the volume used by a general EC2 instance. I followed steps similar to the other answers here but ran into an issue, so here is what I had to do to shrink my root volume...

In AWS Console

 1. Stop the source EC2 instance
 2. Create a snapshot of the volume you want to shrink
 3. Use the snapshot to create a new 'source' volume
 4. Create a new volume with a smaller size (make sure it is big enough for the data on the source)
 5. Attach both volumes to any EC2 instance (mine were /dev/sdf = source & /dev/sdg = target)
 6. Start the EC2 instance

On the EC2 instance

 7. sudo su -   (everything from here is run as root)
 8. mkdir /source /target
 9. mount -t ext4 /dev/sdf /source
 10. mkfs.ext4 /dev/sdg
 11. mount -t ext4 /dev/sdg /target
 12. rsync -aHAXxSP /source/ /target
     ** notice that there is no trailing '/' after /target; if
        you put one there, your data will be copied to
        /target/source and you will have to move it up a directory
 13. cat /boot/grub/grub.conf  (indicated that grub is using root=LABEL=/)
 14. cat /source/etc/fstab (indicated that fstab was also using LABEL=/)
 15. e2label /dev/sdg /
 16. umount /source
 17. umount /target
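The trailing-slash rule from step 12 can be demonstrated with throwaway scratch directories (no volumes involved), assuming rsync is installed:

```shell
# Scratch directories stand in for the mounted volumes:
tmp=$(mktemp -d)
mkdir "$tmp/source" "$tmp/target"
touch "$tmp/source/file.txt"
# Trailing slash on the source: copy the CONTENTS of source into target.
rsync -a "$tmp/source/" "$tmp/target"
ls "$tmp/target"   # -> file.txt (not a nested source/ directory)
rm -rf "$tmp"
```

Without the trailing slash on the source, rsync would instead create $tmp/target/source/file.txt, which is exactly the mistake the note above warns about.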

Back in AWS Console

 18. Stop the instance
 19. Detach ALL volumes from the instance
 20. Attach the 'target' volume to the instance using /dev/sda1 as the device
 21. Start the instance

Here is where we ran into a problem that hasn't been mentioned anywhere, as far as I can find. The instance started fine, great! But when I tried to ssh to the instance, I could not connect. After many, many variations of the above steps, I finally decided to try using the root volume from a freshly spun-up EC2 instance.

In AWS Console

 1. Create a new EC2 instance with the right sized root volume
 2. Stop the new instance
 3. Detach the /dev/sda1 volume from the new instance
    ** we will use the 'source' volume from before & the new volume we just detached
 4. Attach both volumes to the original EC2 instance (/dev/sdf & /dev/sdg)
 5. Start the instance with the attached volumes

On the EC2 instance

 1. sudo su - 
 2. mkdir /source /target (only need to do this if you don't already have these directories)
 3. mount -t ext4 /dev/sdf /source
 4. mount -t ext4 /dev/sdg /target (no need to create a file system because one is already there)
 5. rsync -aHAXxSP /source/ /target 
 6. umount /source
 7. umount /target

Back in AWS Console

 1. Stop the instance
 2. Detach the 'source' and 'target' volumes from instance
 3. Attach the 'target' volume to the instance from step 1 using /dev/sda1 as the device
 4. Start the instance
 5. ** we use an elastic IP so we just reassigned the IP to the new instance

Hope this helps someone.

kasdega