
I have been trying to reduce the size of my Amazon Linux 1 AMI root volume using the procedure in this documentation (with some modifications made after failing to do so) and continuously run into errors with the step:

$ sudo grub-install --root-directory=/mnt/new-volume/ --force /dev/xvdf

This is legacy GRUB (Version 0.97-94.32.amzn1)

I was getting the following error at first:

Unrecognized option `--force'

and as a result removed the --force flag and just used:

$ sudo grub-install --root-directory=/mnt/new-volume/ /dev/xvdf

which has since resulted in:

/dev/xvdf does not have any corresponding BIOS drive

I have tried to create the BIOS boot partition using parted or fdisk following instructions mentioned in this thread but every method has led to the same failure. Please note that the specific instance type I am using (r5.large) renames the drives to corresponding "nvme*" names as noted in the lsblk output:

nvme0n1       259:3    0  200G  0 disk
├─nvme0n1p1   259:4    0  200G  0 part /
└─nvme0n1p128 259:5    0    1M  0 part
nvme1n1       259:0    0   40G  0 disk
├─nvme1n1p2   259:2    0   40G  0 part /mnt/new-volume
└─nvme1n1p1   259:1    0    1M  0 part
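For reference, the BIOS boot partition attempts looked roughly like the following. This is a sketch only: it assumes the target disk is /dev/nvme1n1 (per the lsblk output above) and that it is acceptable to wipe it, since relabeling destroys the existing partition table.

```shell
# Sketch: create a GPT label with a 1 MiB BIOS boot partition, then a
# root partition filling the rest of the disk. DESTROYS data on the disk.
sudo parted -s /dev/nvme1n1 \
    mklabel gpt \
    mkpart bios_grub 1MiB 2MiB \
    set 1 bios_grub on \
    mkpart root ext4 2MiB 100%

# Verify that the bios_grub flag is set on partition 1:
sudo parted /dev/nvme1n1 print
```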

I found one article relevant to the error message in this Linux Questions post, but it did not resolve my issue. I have also tried chroot-ing into the partition, and tried using an intermediary Amazon Linux 1 or Amazon Linux 2 host, but the same error occurs in every case.

I do note that the same error occurs on Amazon Linux 1 when running grub-install against the root volume alone:

grub-install /dev/sda OR grub-install /dev/sda1

Regardless, the new disk cannot be booted from unless it is attached as the secondary drive. Installing with the grub shell directly, per the Legacy GRUB manual, has failed as well. Am I following the wrong procedure for creating a new, smaller root volume, or is there something missing from the steps above? I can provide further details as necessary.
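For completeness, the grub shell install attempt followed the Legacy GRUB manual's device/root/setup pattern. This is a sketch: the device map entry and the (hd1,1) addressing are assumptions (second BIOS drive, second partition, matching nvme1n1p2 above, with legacy GRUB's zero-based partition numbering).

```shell
# Sketch of a legacy GRUB (0.97) shell install. (hd1) and (hd1,1) are
# assumptions -- confirm the stage1 location first with:
#   grub> find /boot/grub/stage1
sudo grub --batch <<'EOF'
device (hd1) /dev/nvme1n1
root (hd1,1)
setup (hd1)
quit
EOF
```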

Rolo787

3 Answers


I followed the same manual, and this is what I think made it work:

On Ubuntu 20, /boot/grub/grub.cfg contained the wrong UUID, so I needed to fix it in /etc/default/grub.d/40-force-partuuid.cfg and then regenerate /boot/grub/grub.cfg with grub-mkconfig -o ...
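The fix described above can be sketched as follows. Hedged: these are the Ubuntu 20.04 cloud-image paths, and the PARTUUID value is a placeholder you would read from blkid, not a real value.

```shell
# Read the real PARTUUID of the new root partition (partition name is an
# assumption -- substitute your own device):
sudo blkid -s PARTUUID -o value /dev/nvme1n1p2

# Point GRUB at the correct root partition; <partuuid-from-blkid> is a
# placeholder for the value printed above:
echo 'GRUB_FORCE_PARTUUID=<partuuid-from-blkid>' | \
    sudo tee /etc/default/grub.d/40-force-partuuid.cfg

# Regenerate grub.cfg so it picks up the corrected value:
sudo grub-mkconfig -o /boot/grub/grub.cfg
```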

Additionally, I partitioned the new EBS volume, which it seems you did too:

Device          Start       End   Sectors  Size Type
/dev/nvme0n1p1   2048    411647    409600  200M BIOS boot
/dev/nvme0n1p2 411648 104857566 104445919 49.8G Linux filesystem

Not sure if it was necessary, though.

os11k

You have to specify the correct block device, as you are using an NVMe device rather than xvdf:

sudo grub-install --root-directory=/mnt/new-volume/ --force /dev/nvme1n1
  • This is not the issue as sdf == xvdf == nvme1n1 in this matter. Even when executing this using the nvme1n1 drive name the issue persists. You can recreate this using the article mentioned and see that the same issue occurs. – Rolo787 Apr 12 '22 at 18:27
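As an aside, on Nitro instances the mapping between the console device name (sdf/xvdf) and the NVMe name can be confirmed rather than assumed. A sketch, assuming Amazon Linux's ebsnvme-id helper (or the nvme-cli package) is installed:

```shell
# Print the EBS volume ID and the console device name (e.g. sdf) for an
# NVMe device -- ebsnvme-id ships with Amazon Linux:
sudo /sbin/ebsnvme-id /dev/nvme1n1

# Alternative with nvme-cli: the console device name is embedded in the
# controller's vendor-specific data:
sudo nvme id-ctrl -v /dev/nvme1n1 | grep -a 'sd\|xvd'
```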

I found a workaround for Amazon Linux 1 by doing the following in the meantime but would still be open to further inspection.

  1. Launch a new instance using the same AMI but changing the root volume size to the desired amount.

  2. Stop the new instance, detach the smaller EBS volume, and attach it to the current instance where the larger root volume is attached (in the stopped state).

  3. Start the current instance (now with the smaller EBS volume attached as a secondary drive).

  4. Use the following to copy over the contents of the root volume (assuming that it is mounted at /mnt/new-volume):

    $ rsync -axv / /mnt/new-volume

  5. Stop the current instance, detach both volumes.

  6. Attach the smaller new root volume to the instance.

  7. Start the instance.

It is not an elegant workaround, but it sufficed, given that it is not clear how the original root volume is created and made bootable.
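The copy step above, condensed into a sketch. The device and mount names are assumptions taken from the lsblk output earlier in the question; run this on the instance while the new (smaller) volume is attached as the secondary drive.

```shell
# Mount the new root partition (device name is an assumption):
sudo mkdir -p /mnt/new-volume
sudo mount /dev/nvme1n1p2 /mnt/new-volume

# -a: archive mode (permissions, owners, symlinks, timestamps)
# -x: stay on one filesystem, so /proc, /sys, /dev mounts and
#     /mnt/new-volume itself are not copied into the destination
# -v: verbose
sudo rsync -axv / /mnt/new-volume

sudo umount /mnt/new-volume
```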

Rolo787