I broke SSH by running chmod on an EC2 instance. How do I fix it?

0

I have been running an Amazon EC2 instance for a while, and recently I accidentally changed permissions/ownership on some folders recursively. I can no longer SSH into the instance, and I do not recall the exact command I ran. I then regained basic access using the technique provided in the answer to this question: https://serverfault.com/questions/234061/re-gaining-root-access-to-an-ec2-ebs-boot-image

I am at a loss for what to do with Step 3 ("Modify it."), though. I do not know what to modify, and in my attempts to set the correct permissions, I have lost access to a number of recovery EC2 instances as well!

For reference, I am using Win7 and PuTTY/WinSCP to connect to the instance. PuTTY displays the following two errors when I attempt to log in to the instance over SSH:

Server Refused Our Key

No supported Authentication methods available (server sent: public key)

I am confident I am using the correct username, IP address, and private key for my instance.

Any help would be much appreciated.

user47249

Posted 2014-07-18T20:20:21.393

Reputation: 1

Answers

2

The solution, as the Server Fault post describes, is to mount your EBS volume on another (new) instance that you can still connect to. The volume will appear there as just another drive, and since the new instance is untouched you will be able to SSH into it. You can then sudo chown/chmod the directories you broke in the first place.

  1. Stop (NOT terminate) your first instance from the AWS console.

  2. Create a new Linux instance from the AWS console.

  3. Detach the EBS volume from the first instance and attach it to the new instance, again from the AWS console. Give it the device name /dev/sdm. (On the EBS console you'll see the attached volume named sdm, but Linux maps it to xvdm, which is why the commands below use /dev/xvdm. Feel free to use another letter and adjust the commands accordingly.)

  4. SSH to the new instance and execute:

    # do NOT run mkfs here: formatting would erase the volume's contents.
    # Just confirm the existing filesystem is intact:
    sudo file -s /dev/xvdm
    sudo mkdir /old-ebs
    # optional, in case this is a bit more permanent (note that
    # `sudo echo ... >> /etc/fstab` fails because the redirection runs
    # unprivileged, so pipe through `sudo tee -a` instead)
    echo "/dev/xvdm        /old-ebs     auto    noatime,noexec,nodiratime 0 0" | sudo tee -a /etc/fstab
    sudo mount /dev/xvdm /old-ebs
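    # optional sanity check: confirm the mount succeeded and your old files
    # are visible before changing anything
    df -h /old-ebs
    ls /old-ebs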
    
  5. Now you have (sudo) access to your old volume, and you can chown/chmod the directories you need. If you're unsure what the correct permissions are, copy them from the equivalent paths on the new instance; see the sketch after this list.

  6. Once you are done, stop the new instance, detach the volume from it, re-attach it to the first instance, and start that instance again.
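Since the symptom is "Server refused our key", the permissions that usually need restoring are those on the home directory and the SSH files, because sshd refuses public-key login when the home directory or ~/.ssh is writable by others. A minimal sketch of step 5, assuming the default ec2-user account (adjust the username and paths to whatever your AMI uses), run on the recovery instance with the old volume mounted at /old-ebs:

    # user/group names resolve to the recovery instance's numeric IDs; check
    # they match the old system's /etc/passwd before a recursive chown
    sudo chown -R ec2-user:ec2-user /old-ebs/home/ec2-user
    sudo chmod 755 /old-ebs/home/ec2-user
    sudo chmod 700 /old-ebs/home/ec2-user/.ssh
    sudo chmod 600 /old-ebs/home/ec2-user/.ssh/authorized_keys
    # compare against the healthy new instance if unsure what "correct" looks like
    stat -c '%a %U:%G %n' ~ ~/.ssh ~/.ssh/authorized_keys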

The /etc/fstab line above, which re-mounts the volume automatically at each reboot, is optional; for a one-off rescue you can skip it.
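For reference, steps 1 and 3 can also be scripted with the AWS CLI instead of the console. A sketch using placeholder IDs (substitute your own instance and volume IDs):

    aws ec2 stop-instances --instance-ids i-0123456789abcdef0    # the broken instance
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0fedcba9876543210 --device /dev/sdm      # the new instance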

Nicolas Grasset

Posted 2014-07-18T20:20:21.393

Reputation: 121

My bad, /etc/fstab is necessary if you want to make it permanent. Will update – Nicolas Grasset – 2014-07-27T18:35:11.300

On the EBS console, you'll see that the volume attached is named sdm but mapped to xvdm on Linux – Nicolas Grasset – 2014-07-27T18:36:12.037

The commands are to be typed when you login to that new instance. – Nicolas Grasset – 2014-07-27T22:59:24.907

Prior to that, in step 3, you should go to the EBS console (https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Volumes: if US East is your region), attach your EBS volume to the new instance, and give it the name /dev/sdm (you'll see, you don't have much choice).

– Nicolas Grasset – 2014-07-27T23:01:44.687

They're good although not much was actually changed. Good if they make the instructions clearer :) – Nicolas Grasset – 2014-07-28T12:00:23.380

Okay, just in case you wonder: while "proof reading" without actually looking at any console, I was confused by "SSH to the new instance and mount the EBS instance to /dev/sdm", which for me implied that the mounting of /dev/sdm was done within the SSH session. Okay, I'm going to remove my comments. (Nice answer, thanks.) – Arjan – 2014-07-28T12:03:24.700

I know I'm an idiot for blindly following these steps, but sudo mkfs -t ext4 /dev/xvdm wiped the contents of the drive. – Kevin Hua – 2018-08-28T15:54:42.520