I have MySQL running on an m1.xlarge instance with a 100GB EBS volume holding the data files. I would like to migrate to an m3.2xlarge instance and place the data files on the new 2 x 80GB SSD drives.

I stopped my instance, changed the type accordingly, and launched it. However, all I could find for storage was a 15G tmpfs and an 8G mounted drive.

$ fdisk -l
Disk /dev/xvda1: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

How do I get access to the 2 x 80GB SSDs for this instance type?

paiego
  • `fdisk -l`, `mkfs`, then `mount`. – ceejayoz Mar 20 '14 at 01:14
  • When you changed the instance type and launched it, only the RAM and CPU type changed. In order to resize the disk, the general idea is: make a snapshot of your instance, create a bigger volume from the snapshot in the same zone, attach the new volume to the instance as /dev/sda1, then start the instance (the DNS name changes). – LinuxDevOps Mar 20 '14 at 14:24
  • @LinuxDevOps: Thanks. By changing the way I did, will the attached volume still be available as it was before the instance change? – paiego Mar 20 '14 at 14:36
  • Check in your AWS web console under EC2 management -> Volumes if it's there (identify by capacity and 'available'), you may have lost it – LinuxDevOps Mar 20 '14 at 14:57
  • Dupes: http://serverfault.com/questions/571427/root-device-on-ssd-instance-types-ssd-vs-ebs-confusion http://serverfault.com/questions/490597/wheres-my-ephemeral-storage-for-ec2-instance – Chris Moschini Jun 14 '14 at 20:09

1 Answer


So, for a full answer: your SSD drives are ephemeral (instance store) disks, and according to the AWS documentation the only way to use them is to launch a new instance. (Attaching ephemeral storage to an instance after it has been created is not yet supported.)

This is from the AWS docs:

Instances that use Amazon EBS for the root device do not, by default, have instance store available at boot time. Also, you can't attach instance store volumes after you've launched an instance. Therefore, if you want your Amazon EBS-backed instance to use instance store volumes, you must specify them using a block device mapping when you create your AMI or launch your instance. Examples of block device mapping entries are: /dev/sdb=ephemeral0 and /dev/sdc=ephemeral1. For more information about block device mapping, see Block Device Mapping
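Per the docs above, the block device mapping has to be supplied at launch time. A minimal sketch of such a launch with the AWS CLI might look like the following (the AMI ID, key name, and security group are placeholders you would replace with your own):

```shell
# Launch an EBS-backed instance with both ephemeral SSDs mapped at boot.
# ami-xxxxxxxx, my-key, and my-sg are placeholders, not real values.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m3.2xlarge \
    --key-name my-key \
    --security-groups my-sg \
    --block-device-mappings \
    '[{"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"},
      {"DeviceName":"/dev/sdc","VirtualName":"ephemeral1"}]'
```

The same mapping can be set in the console on the "Add Storage" step when launching, or baked into the AMI itself.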

As @LinuxDevOps mentioned, you have to create a snapshot of your existing instance and then launch a new one, attaching the SSD volumes in the block device mapping. After you log in to your new instance you can do as @ceejayoz mentioned.

List your devices:

fdisk -l

Make a file system on the devices, for example ext4:

mkfs.ext4 /dev/xvdb
mkfs.ext4 /dev/xvdc

Mount the devices:

mkdir -p /mnt/xvdb; mkdir -p /mnt/xvdc
mount /dev/xvdb /mnt/xvdb
mount /dev/xvdc /mnt/xvdc
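If you want the mounts to survive a reboot, you can also add entries to /etc/fstab. A sketch (device names assumed to match the ones above; `nobootwait`/`nofail` keeps boot from hanging if the ephemeral disks are missing, e.g. after a stop/start):

```shell
# Append fstab entries so the ephemeral disks are mounted at boot.
# Note: ephemeral disk CONTENTS are lost on stop/terminate either way.
echo '/dev/xvdb /mnt/xvdb ext4 defaults,nofail 0 2' >> /etc/fstab
echo '/dev/xvdc /mnt/xvdc ext4 defaults,nofail 0 2' >> /etc/fstab
```

Keep in mind ephemeral storage is wiped when the instance is stopped, so for MySQL data you will still want a backup/replication strategy to durable storage.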

For reference: list of device names according to instance types

There are also other similar answers on SF and SO. For example: Where's my ephemeral storage for EC2 Instance

Rico