I recently built an AWS AMI from scratch. Now, when I log into the instance, the fdisk -l command shows the output below:
% sudo fdisk -l
Disk /dev/xvde1: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvde2: 160.1 GB, 160104972288 bytes <<<<<<<<<
255 heads, 63 sectors/track, 19464 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvde3: 939 MB, 939524096 bytes
255 heads, 63 sectors/track, 114 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
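As a side note (my own arithmetic, not part of the fdisk output): the "GB" figures fdisk prints are decimal gigabytes (bytes / 10^9), not binary GiB, so the 160.1 GB comes straight from the byte count:

```python
# Sanity-check: fdisk's "GB" figures are decimal gigabytes (bytes / 1e9).
# Byte counts are taken from the fdisk output above.
sizes = {
    "/dev/xvde1": 10_737_418_240,   # fdisk: 10.7 GB
    "/dev/xvde2": 160_104_972_288,  # fdisk: 160.1 GB
    "/dev/xvde3": 939_524_096,      # fdisk: 939 MB
}
for dev, nbytes in sizes.items():
    print(f"{dev}: {nbytes / 1e9:.1f} GB = {nbytes / 2**30:.1f} GiB")
```

So the second device really is reported as roughly 160 GB (about 149 GiB), far larger than anything I created.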
Here are my /etc/fstab entries, which I created specifically for this instance while building the AMI from scratch:
/dev/xvde1 / ext4 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/xvde2 swap swap defaults 0 0
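For reference, each fstab line is device, mount point, fstype, options, dump, and fsck-pass order. A quick way to eyeball the first three columns (the entries are fed inline here so the snippet is self-contained; on the instance you would run awk against /etc/fstab itself):

```shell
# Print the device / mountpoint / fstype columns of a few fstab entries.
# On the instance: awk '{print $1, $2, $3}' /etc/fstab
awk '{print $1, $2, $3}' <<'EOF'
/dev/xvde1 / ext4 defaults 1 1
none /proc proc defaults 0 0
/dev/xvde2 swap swap defaults 0 0
EOF
```

As the fstab shows, I only ever defined /dev/xvde1 as the root filesystem and /dev/xvde2 as swap.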
Now I'm confused about why it shows an extra 160 GB partition.
NOTE: This is a small instance-store-backed instance. I've already restarted it twice.