11

I am using an Ubuntu 16.04 EC2 instance from AWS (c5d.2xlarge). It comes with a 200 GB NVMe SSD that shows up as /dev/nvme1n1.

I am able to format and mount this drive using:

$ sudo mkfs.ext4 -E nodiscard /dev/nvme1n1

$ sudo mount -o discard /dev/nvme1n1 /home/ubuntu

To get it to mount automatically, I also added this line to /etc/fstab:

/dev/nvme1n1 /home/ubuntu/spda ext4 defaults,users,nofail,discard 0 2

My issues:

  • It does not seem to mount automatically when I stop/start the instance, and I am not sure how to fix or debug this.

  • When I mount it manually, the mount point belongs to root and I can't access it as a regular user.

My goal is to be able to start the instance and already have the drive mounted and accessible to the users.

John Corson

3 Answers

8

The 200 GB SSD that you see is called instance storage (or ephemeral storage). It is destroyed every time you stop the instance and created anew every time you start it.

That means two things:

  1. Don't store any precious data you want to retain across a stop/start - it will all be gone when you stop the instance. These instance storage disks are great for caches, temporary dirs, swap space, etc. - anything that can be easily recreated if it's lost.

  2. Every time you start the instance the disk is blank - you must format it first (e.g. with mkfs.ext4) before you can use it. The next time you stop/start it will be blank again and you must run mkfs again.

    That's why simply adding it to /etc/fstab isn't enough - the disk won't be formatted yet when the boot process attempts to mount it. (A quick way to check this is shown right after this list.)
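
You can verify this yourself after a stop/start - these check commands are not part of the original answer, just a hedged illustration. On a blank device, file -s typically reports just "data" (i.e. no filesystem), and the FSTYPE column of lsblk -f is empty:

$ sudo file -s /dev/nvme1n1
/dev/nvme1n1: data

$ lsblk -f /dev/nvme1n1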

To resolve your problem you will have to create a custom script, e.g. /usr/local/sbin/mount-instance-store.sh with roughly this content:

#!/bin/bash
# Re-create the filesystem on the (blank) instance store, mount it, and hand it to the ubuntu user
mkfs.ext4 -E nodiscard -m0 /dev/nvme1n1
mount -o discard /dev/nvme1n1 /home/ubuntu/spda
chown ubuntu:ubuntu /home/ubuntu/spda

Then you'll have to make sure the script is executed at boot time. The way to do that differs between distributions; for Ubuntu 16.04 this should work: How to automatically execute shell script at startup boot on systemd Linux. A sketch of such a unit file follows below.
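
For reference, a minimal sketch of a systemd unit that could run the script (the unit name mount-instance-store.service is just an example; the Wants/After/WantedBy lines follow what the asker reports working in the comments below):

# /etc/systemd/system/mount-instance-store.service
[Unit]
Description=Format and mount the ephemeral instance store
Wants=network-online.target
After=network-online.target

[Service]
# Type=oneshot is an assumption; the asker's unit omitted Type= and still worked
Type=oneshot
ExecStart=/usr/local/sbin/mount-instance-store.sh

[Install]
WantedBy=default.target

Enable it once with sudo systemctl enable mount-instance-store.service so it runs on every boot.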

Hope that helps :)

MLu
  • Thanks! This answer was a BIG help! Unfortunately, I am only partway there. I made the .sh file and followed the instructions on systemd in the link. It works after a reboot BUT not after a full start and stop of the EC2 instance. After a start/stop, running the service manually gets it working BUT it still requires manual intervention. I wonder if I need more in my service file. Presently it only has the following in it: [Service] ExecStart=/usr/local/sbin/mount-instance-store.sh – John Corson Dec 11 '18 at 16:36
  • Got it working by making my service file: [Unit] Wants=network-online.target After=network-online.target [Service] ExecStart=/usr/local/sbin/mount-instance-store.sh [Install] WantedBy=default.target – John Corson Dec 11 '18 at 17:13
  • Can someone explain the rationale for why an EC2 instance that should have SSD available doesn't mount it by default? Feels like a bit of dirty play by AWS, but I'm sure there's some reason for it. – Dror Atariah Apr 29 '20 at 12:22
  • Dror: Doing anything by default (meaning without configuration by the sysadmin) means making a lot of assumptions about filesystem type, mount point, etc -- and the use cases for instance storage _at scale_ are too widely varied for that, in actual practice. (Although I agree that providing some example scripts would have been nice for AWS to do.) – Ti Strga Jun 03 '22 at 15:38
2

With a newer version of cloud-init you can use the Disk Setup module. Cloud-init is available on all clouds for all major OSes (even Windows), AFAICT. Use the following line in /etc/fstab (modify the device and mount point as needed, obviously):

/dev/nvme1n1    /mnt    auto    defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig   0   2
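
For completeness, a hedged sketch of what the equivalent cloud-config user data might look like with the Disk Setup module (the label, filesystem and mount point here are assumptions - check the cc_disk_setup documentation for your cloud-init version):

#cloud-config
fs_setup:
  - label: data            # example label, not required
    filesystem: ext4
    device: /dev/nvme1n1
mounts:
  # cloud-init writes the corresponding /etc/fstab entry (with comment=cloudconfig) itself
  - [ /dev/nvme1n1, /mnt, auto, "defaults,nofail", "0", "2" ]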

No cloud-init? No problem: newer systemd also supports x-systemd.makefs and a few related mount options, as documented here:

https://manpages.debian.org/testing/systemd/systemd-makefs.8.en.html
https://manpages.debian.org/testing/systemd/systemd.mount.5.en.html
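
For example, a minimal sketch of an /etc/fstab line using that option (ext4 and /mnt are assumptions; per the systemd documentation, the filesystem is created before mounting if the device doesn't already contain one):

/dev/nvme1n1    /mnt    ext4    defaults,nofail,x-systemd.makefs    0    2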

ambakshi
  • I just get "wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error" when using this – Petah Dec 06 '21 at 23:16
-1

Add the corresponding line to the /etc/fstab file in the following format:

<device> <mount_point> <filesystem> <options> <dump-freq> <pass-num>

For example, if you have an ext4 partition and you want to automatically mount it at /home/ubuntu:

/dev/nvme1n1 /home/ubuntu ext4 defaults,rw,noatime 0 0

After the filesystem is mounted, you should change its ownership so you can access it - but only do this after mounting it:

chown ubuntu /home/ubuntu -R

Comment on this answer if you need more explanation.

ensarman
  • Unfortunately that's not enough - the instance storage disk must be `mkfs`'ed before it can be mounted. – MLu Dec 11 '18 at 02:41