
I'm launching an instance to host a Cassandra node and I'm testing some shutdown and startup scripts. The instance was launched from the DataStax PV AMI 'ami-8932ccfe'. I added 8GB of SSD EBS storage for the root volume and launched it. On initial startup the Instance Store (ephemeral drive) was attached and everything was fine. I then stopped the instance and started it again; the Instance Store was gone and the following error message was displayed on logging in:

ERROR mount -a:

Next I terminated the instance, relaunched the same AMI and setup, then made a snapshot AMI and added the Instance Store to it, in an attempt to 'bake' it into the instance. However, on stop and start I got the same issue.

My problem is that I only have permission to stop and start instances; I cannot create them myself, so I have to keep bothering a colleague to launch them for me. I want to be able to stop the instance at the end of the day and start it again in the morning, i.e. only run it during working hours, to reduce costs. The server in question is just a development instance, so I'm not worried about data loss; all I need to run on startup are some scripts to create the tables. However, because the Instance Store doesn't attach automatically after a stop and start, Cassandra doesn't install.

Can anyone tell me how to create an instance such that the Instance Store is automatically attached after a stop and start?

After I started the instance I used the following to get some metadata, in case it helps:

curl http://169.254.169.254/latest/meta-data/block-device-mapping/
ami
ephemeral0
root

curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
sdb

Cheers, Alexei Blue.


2 Answers


The instance store volumes are deleted each time the instance is stopped. I am unsure what distribution the DataStax AMI is running, but the correct way to handle this is to create an init script that will:

  1. check whether the volume is already mounted
  2. if not, get the ephemeral0 device name from the instance metadata
  3. format the volume and mount it

If you don't want to craft a full init script, you could add a few lines to /etc/rc.local to accomplish the same thing. Something like:

# bail out if something is already mounted on /mnt
mount | awk '{print $3}' | grep -sq /mnt && exit 0
# only continue if the metadata service reports an ephemeral0 mapping
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ | grep -sq ephemeral0 || exit 0
DEV=$(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0)
# format the volume and mount it on /mnt
test -n "$DEV" && mkfs -t ext4 "/dev/$DEV" && mount "/dev/$DEV" /mnt

This script is very much on rails and doesn't handle any errors; you might want to write a more robust one.
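A rough sketch of what a more defensive version might look like (untested on the DataStax AMI; the /mnt mount point, the ext4 filesystem and the ephemeral0 mapping are the same assumptions as above):

#!/bin/sh
# Sketch only: mount ephemeral0 on /mnt, formatting it first if needed.
MD=http://169.254.169.254/latest/meta-data

# nothing to do if /mnt is already a mount point
mountpoint -q /mnt && exit 0

# ask the metadata service which device backs ephemeral0
DEV=$(curl -sf "$MD/block-device-mapping/ephemeral0") || exit 1
[ -n "$DEV" ] || exit 1

# only proceed if the device node actually exists
[ -b "/dev/$DEV" ] || { echo "/dev/$DEV not found" >&2; exit 1; }

# format only if the volume has no filesystem yet, then mount it
blkid "/dev/$DEV" >/dev/null 2>&1 || mkfs -t ext4 "/dev/$DEV" || exit 1
mount "/dev/$DEV" /mnt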

dialt0ne
  • Hi @dialt0ne, thanks for the reply; I've been waiting ages for a reply from AWS. I've tried running the script on an instance I stopped and started. As usual the ephemeral drive wasn't attached. When I run curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0 I get 'sdb', but I don't have a device node '/dev/sdb', so when mkfs -t ext4 /dev/sdb is run I get the following: 'The device apparently does not exist; did you specify it correctly?'. – Alexei Blue Jul 11 '14 at 22:37

@dialt0ne's script led me to do some further research, and I found this script on GitHub.

Essentially, my instance uses a different device naming convention: the ephemeral drive shows up as xvdb rather than sdb.
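
The gist apparently detects the real device names early on rather than trusting the metadata string; a minimal sketch of that kind of check (the sdX to xvdX translation is my assumption about the common renaming, not a quote from the gist):

# metadata may report "sdb" while the kernel exposes "xvdb"
name=$(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0)
dev="/dev/$name"
if [ ! -b "$dev" ]; then
    # try the xvd* spelling of the same device letter(s)
    alt="/dev/xvd${name#sd}"
    [ -b "$alt" ] && dev="$alt"
fi
echo "ephemeral0 is $dev"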

One thing I had to change in the GitHub script is line 62:

mdadm --create --verbose /dev/md0 --level=0 -c256 --raid-devices=$ephemeral_count $drives

To the following:

mdadm --create --verbose /dev/md0 --level=0 -c256 --force --raid-devices=$ephemeral_count $drives

If your instance has only one ephemeral drive, like mine does, you have to add --force to get the command to run.
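
If you want to sanity-check the array before trusting it, the usual mdadm status commands should work here (nothing AMI-specific about them):

cat /proc/mdstat          # lists md0 and its member device(s)
mdadm --detail /dev/md0   # level, state and member list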

After the script finished, I was very pleased to see this:

df -h
...
/dev/md0  30G  173M  28G  1%  /mnt

Thank you again for your answer @dialtOne.

Cheers, Alexei Blue

  • You're welcome. The sdb vs. xvdb device name is a common issue. That's why that gist tries to do detection early on. – dialt0ne Jul 13 '14 at 04:16