
I'm using cloud-init to automatically prepare an AWS image (AMI) for use in a production environment - that way I can have the environment setup process tracked in a source control system, but I can skip that lengthy process when I need a new production server.

So the process is as follows:

  1. use a cloud-init file to boot a new base image (Ubuntu 14.04 cloud-image)
  2. wait for cloud-init to complete, then create an image from the running instance, and terminate it
  3. to launch a new production server, I use a small cloud-init to boot from the AMI and perform the final configuration (setting up the correct hostname, deploying software, etc).
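For reference, the step-1 config looks roughly like the sketch below (device names, mount point, and filesystem are assumptions, not my actual values):

```yaml
#cloud-config
# Partition the attached EBS volume
disk_setup:
  /dev/xvdf:
    table_type: mbr
    layout: true
    overwrite: false

# Create a filesystem on the first partition
fs_setup:
  - device: /dev/xvdf
    partition: 1
    filesystem: ext4

# Mount it and record the entry in /etc/fstab
mounts:
  - [ /dev/xvdf1, /data, ext4, "defaults,nobootwait", "0", "2" ]
```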

The problem I'm having is that the first cloud-init config file uses the disk_setup and mounts modules to set up and mount an EBS volume. Once that is done, the instance has /etc/fstab updated and everything is fine. After doing step 3, though, the resulting instance has the EBS volume (actually a copy of it) attached and mounted properly, but /etc/fstab does not contain the entries for the volume. Luckily I don't reboot after step 3, but I might, and that would break the server.

Any idea what is going on? I don't use mounts in step 3's cloud-init config, but why does it not retain the fstab setup from the image?

Guss

2 Answers


I can't really explain the behaviour you're experiencing, but AWS themselves recommend not using fstab entries for this and using RC init scripts instead. See the quote below from Cindy@AWS. The forum post is pretty old and doesn't address the same issue you have, but doing it this way may solve your problem as well.

I recommend looking into using RC init scripts instead of using the fstab for this purpose (for EC2 instances). If a device listed in the fstab fails to be mounted then this will halt the boot process and you will not be able to ssh into the instance. Instead, using an RC script could allow a "soft failure" to occur so that you could still ssh in and then fix the problem.

Source: https://forums.aws.amazon.com/message.jspa?messageID=304528#304549
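Translated into cloud-init terms, one way to follow that advice is to skip the mounts module entirely and install a small boot script instead, so a missing volume logs an error rather than halting boot. A minimal sketch (device name, mount point, and log tag are assumptions):

```yaml
#cloud-config
# Write an rc.local that mounts the data volume "softly":
# if the mount fails, the boot continues and SSH still comes up.
write_files:
  - path: /etc/rc.local
    permissions: '0755'
    content: |
      #!/bin/sh
      # Try to mount the data volume; log instead of failing the boot
      mount /dev/xvdf1 /data || logger -t rc.local "failed to mount /dev/xvdf1"
      exit 0
```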

Bazze
    The concern raised is that a failed mount will cause the instance boot to fail - which is not a concern for me because I use the `nobootwait` flag on any volume I add. – Guss Dec 03 '14 at 15:43

The problem was that the second cloud-init configuration (used to start the production instance in step 3 of the OP) contained a small mounts section to mount an additional instance-specific volume. When cloud-init encounters a mounts section, it does not append the new entries to the existing fstab; instead it replaces whatever mount configuration an earlier cloud-init run created.

The solution is to either include all the previously generated mount configuration, or not include any new configuration and have all the volume configuration done at step 1.
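Concretely, the first option means the step-3 config must carry the full mount list, not just the new volume. A sketch of what that looks like (all device names and mount points here are made up for illustration):

```yaml
#cloud-config
# Repeat the step-1 entries alongside the new one, since cloud-init
# regenerates the fstab mounts from this section rather than appending.
mounts:
  - [ /dev/xvdf1, /data, ext4, "defaults,nobootwait", "0", "2" ]     # carried over from step 1
  - [ /dev/xvdg, /scratch, ext4, "defaults,nobootwait", "0", "2" ]   # new, instance-specific
```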

Guss