4

So it's been a few days and I still cannot connect to my new EC2 HVM instance running Ubuntu 16. For reference, I am trying to upgrade our server from an m3 instance running Ubuntu 16 to a C5 instance running Ubuntu 16. With almost every method I've tried, I get to the point where I stop my new C5 instance, detach all volumes, and attach the newly updated source volume as /dev/sda1, but when I then try to connect to the instance, the connection always times out. Amazon's status check also fails, saying the instance is unreachable. However, the system log shows no issues during startup.

I've tried everything in this post, and this post as well. I've looked on other sites and have given this and this a try. I've even tried both the EC2 command line tools method and converting an AMI from the EC2 console (online); however, I either cannot launch a C5 instance with the converted AMI, or the instance stops and fails (in the case of conversion via the command line).

The only thing I can really think of that might be causing this is the naming convention for the disks on the C5 instance. Every guide I've seen uses xvda/xvdf/xvdg. I do not have those disks or partitions; instead I have nvme0n1, nvme0n1p1 (the new HVM root), nvme1n1, and nvme1n1p1. When I tried the HVM / source / target disk method, I had nvme0n1/nvme0n1p1, nvme1n1 (target -- where everything should end up), and nvme2n1/nvme2n1p1 (source -- where everything was from, on the m3). I found this Amazon post about NVMe, so I don't think the naming itself should be an issue, as I'm just using the correct disk/partition when working under /mnt/, i.e. I run mkdir -p /mnt/target && mount /dev/nvme1n1 /mnt/target instead of mkdir -p /mnt/target && mount /dev/xvdf /mnt/target, but nothing so far has worked. The instance becomes unreachable the moment I attach the target as /dev/sda1.
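
In case it helps, this is roughly how I've been double-checking which NVMe device corresponds to which attached EBS volume (lsblk alone shows the sizes; installing nvme-cli is just something I tried on top of that, not part of any guide):

    $ lsblk
    $ sudo apt-get install -y nvme-cli
    $ sudo nvme list     # the serial column should show the EBS volume id for each nvme device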

So, is there something I'm missing when doing this with disks named nvme*? Is there any other information or debugging output I can provide to help diagnose the issue?

Alex

2 Answers

5

I realize that this question hasn't been seen very much, but just in case, I'm hoping my results can help someone out in the future (maybe even me, the next time I attempt this). I would like to thank Steve E. from Amazon support for helping me get my instance migrated <3

Anyway, there were two issues when migrating my Ubuntu 16.04 M3 (PV) instance to an Ubuntu 16.04 C5 (HVM) instance. The first was that the new C5 instances use the new NVMe naming convention, so other tutorials about migrating PV to HVM don't work quite the same way. The second was that my M3 (PV) instance had been through in-place Ubuntu upgrades; I had gone from Ubuntu 12 -> Ubuntu 14 -> Ubuntu 16 over the past year or so. This caused an issue where the cloud-init network configuration files were never generated, and so my instance could not be reached.

Anyway, to migrate an Ubuntu 16.04 PV instance to an HVM instance using the new NVMe naming convention, do the following:

Pre-Requisites Summary:

  1. Before starting, make sure to install the following on your PV instance:

    $ sudo apt-get install grub-pc grub-pc-bin grub-legacy-ec2 grub-gfxpayload-lists
    $ sudo apt-get install linux-aws
    
  2. Stop the PV instance and create a snapshot of its root volume, then restore this snapshot as a new EBS volume in the same availability zone as the source (start the PV instance again right after the snapshot has been created)
  3. Launch a new C5 HVM instance (the destination), selecting Ubuntu Server 16.04 LTS (HVM), in the same availability zone as the source instance (keep this new instance's EBS root volume at 8GB, as it will only be used temporarily)
  4. After the instance launches, attach the volume you restored in step 2 (that's the root volume from the PV instance) as /dev/sdf (on the Ubuntu system, the name will be nvme1n1)
  5. Create a new (blank) EBS volume (same size as your 'source' PV root volume) and attach it to the HVM instance as /dev/sdg (on the Ubuntu system, the name will be nvme2n1). If you prefer the AWS CLI for these console steps, see the sketch just after this list.
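
If you prefer the AWS CLI to the console for the snapshot and volume steps above, a rough equivalent is sketched below. Treat it as a hedged example only: the volume ID, instance ID, availability zone, and the 100GB size are placeholders that need to match your own setup.

    # Placeholders -- substitute your own IDs and availability zone
    SRC_VOL=vol-0aaaaaaaaaaaaaaaa      # root volume of the source PV (m3) instance
    HVM_INSTANCE=i-0bbbbbbbbbbbbbbbb   # the new C5 (HVM) instance
    AZ=us-east-1a                      # must match the C5 instance's availability zone

    # Snapshot the stopped PV root volume, then restore the snapshot as a new volume
    SNAP=$(aws ec2 create-snapshot --volume-id $SRC_VOL --query SnapshotId --output text)
    aws ec2 wait snapshot-completed --snapshot-ids $SNAP
    PV_COPY=$(aws ec2 create-volume --snapshot-id $SNAP --availability-zone $AZ --query VolumeId --output text)

    # Blank destination volume, same size as the PV root (100GB in my case)
    DEST=$(aws ec2 create-volume --size 100 --availability-zone $AZ --query VolumeId --output text)

    # Attach both to the C5 instance (they will show up as nvme1n1 and nvme2n1)
    aws ec2 wait volume-available --volume-ids $PV_COPY $DEST
    aws ec2 attach-volume --volume-id $PV_COPY --instance-id $HVM_INSTANCE --device /dev/sdf
    aws ec2 attach-volume --volume-id $DEST --instance-id $HVM_INSTANCE --device /dev/sdg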

Migration:

Once logged into your instance, use sudo su to execute all commands as the root user.

  1. Display your volumes

    # lsblk 
    NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    nvme0n1     259:0    0    8G  0 disk 
    └─nvme0n1p1 259:1    0    8G  0 part /
    nvme1n1     259:2    0  100G  0 disk 
    nvme2n1     259:3    0  100G  0 disk 
    

    nvme0n1 is the HVM root you just created (used just to boot this time)
    nvme1n1 is the restored PV root (will be converted to HVM)
    nvme2n1 is the blank volume (will receive the conversion from the PV root on nvme1n1)

  2. Create a new partition on nvme2n1 (nvme2n1p1 will be created)

    # parted /dev/nvme2n1 --script 'mklabel msdos mkpart primary 1M -1s print quit'
    # partprobe /dev/nvme2n1
    # udevadm settle
    
  3. Check the 'source' volume and minimize the size of the original filesystem to speed up the process. We do not want to copy free disk space in the next step.

    # e2fsck -f /dev/nvme1n1 ; resize2fs -M /dev/nvme1n1
    
  4. Duplicate 'source' to 'destination' volume

    # dd if=/dev/nvme1n1 of=/dev/nvme2n1p1 bs=$(blockdev --getbsz /dev/nvme1n1) conv=sparse count=$(dumpe2fs /dev/nvme1n1 | grep "Block count:" | cut -d : -f2 | tr -d "\\ ")
    
  5. Resize the 'destination' volume to maximum:

    # e2fsck -f /dev/nvme2n1p1 && resize2fs /dev/nvme2n1p1
    
  6. Prepare the destination volume:

    # mount /dev/nvme2n1p1 /mnt/ && mount -o bind /dev/ /mnt/dev && mount -o bind /sys /mnt/sys && mount -o bind /proc /mnt/proc
    
  7. chroot to the new volume

    # chroot /mnt/
    
  8. Reinstall grub on the chrooted volume:

    # grub-install --recheck /dev/nvme2n1
    # update-grub
    

    Exit the chroot

    # exit
    

    Shutdown the instance

    # shutdown -h now
    
  9. After the conversion, you now need to do the following (an AWS CLI sketch of the same operations follows this list):

    Detach the 3 volumes that were previously attached to the HVM instance. Attach the last volume you created (the formerly blank one, previously seen by the system as /dev/nvme2n1) as /dev/sda1 in the console on the HVM instance. Start the HVM instance.
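
For reference, the detach/re-attach in step 9 can also be done with the AWS CLI. This is only a sketch; the volume and instance IDs below are placeholders for your own:

    # Detach all three volumes from the (stopped) HVM instance
    aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa   # 8GB temporary root
    aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbb   # restored PV root
    aws ec2 detach-volume --volume-id vol-0cccccccccccccccc   # converted destination

    # Re-attach only the converted destination volume as the root device, then boot
    aws ec2 attach-volume --volume-id vol-0cccccccccccccccc \
        --instance-id i-0dddddddddddddddd --device /dev/sda1
    aws ec2 start-instances --instance-ids i-0dddddddddddddddd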

The new HVM instance should now boot successfully and will be an exact copy of the old source PV instance (if you used the correct source volume). Once you have confirmed that everything is working, the source instance can be terminated.
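
Once the instance boots and is reachable, a couple of quick sanity checks I found useful (none of this is required):

    # The root filesystem should now be the full-size converted volume
    lsblk
    df -h /

    # Look for anything suspicious from the first boot
    grep -i -E "error|fail" /var/log/cloud-init.log | tail -n 20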


Updating network configuration (optional)

Now, the steps above will work for the majority of people. However, my instance was still not reachable. The reason was that I had upgraded Ubuntu in place on my instance instead of starting from a fresh image. This left the old eth0 config active, and no 50-cloud-init.cfg config file was ever generated.

If you already have the file /etc/network/interfaces.d/50-cloud-init.cfg, then you can follow along and update that file instead of creating a new one. Again, assume all commands are run via sudo su.
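
A quick way to tell which situation you are in (these paths assume the converted volume is already mounted at /mnt, as in step 2 below):

    # Was a cloud-init network config ever generated on the converted volume?
    ls -l /mnt/etc/network/interfaces.d/

    # Is there an old hard-coded eth0 configuration left over from the upgraded image?
    grep -R "eth0" /mnt/etc/network/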

  1. Shut down the instance, detach the volumes, and re-attach them as follows: the 8GB volume as /dev/sda1, and your final destination volume as /dev/sdf. Start the instance up and log in.

  2. Mount /dev/sdf (which should now show up as nvme1n1p1) and chroot into it by doing the following:

    # mount /dev/nvme1n1p1 /mnt/ && mount -o bind /dev/ /mnt/dev && mount -o bind /sys /mnt/sys && mount -o bind /proc /mnt/proc
    # chroot /mnt/
    
  3. Either create or update the file:

    /etc/network/interfaces.d/50-cloud-init.cfg
    

    With the following:

    # This file is generated from information provided by
    # the datasource.  Changes to it will not persist across an instance.
    # To disable cloud-init's network configuration capabilities, write a file
    # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
    # network: {config: disabled}
    auto lo
    iface lo inet loopback
    
    auto ens5
    iface ens5 inet dhcp
    
  4. Exit chroot (exit), shut down the instance (shutdown -h now).

  5. Follow step 9 from before!

You should be done!
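
One last sanity check, since the config above hard-codes ens5: once you can log in, confirm that the primary interface really is named ens5 (it was in my case on the C5), and adjust the config if your instance reports something different:

    # List network interfaces; the primary one should match the name used in 50-cloud-init.cfg
    ip link show
    ls /sys/class/net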


Alex
  • for those following these instructions, life will be easier if you do sudo su before running any commands. Also, do not skip a step: I tried to skip the 'minimize the size of the original filesystem' part and that didn't work out too well for me. Not sure why. – Prashant Saraswat May 07 '18 at 20:45
0

Thanks, the hint for the network configuration also worked in an upgrade case (Ubuntu 14.04 PV to Ubuntu 18.04 PV). I converted the upgraded Ubuntu 18.04 PV to Ubuntu 18.04 HVM with a slight tweak to the network configuration: since Ubuntu 18.04 uses netplan, I created a new /etc/netplan/50-cloud-init.yaml with the following configuration:

    network:
        version: 2
        ethernets:
            all-en:
                match:
                    name: "en*"
                dhcp4: true
            all-eth:
                match:
                    name: "eth*"
                dhcp4: true
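
If you have shell access before rebooting (for example via the chroot approach in the accepted answer), netplan can sanity-check the file first. A quick, hedged example; note that netplan only picks up files with a .yaml extension, and netplan apply should only be run on the live system, not inside a chroot:

    # Parse the YAML and generate the backend (systemd-networkd) config without applying it
    netplan generate

    # Apply the configuration on the running system
    netplan apply
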
Quantim