7

I created an EC2 instance on AWS, and I was assigned a default "security group". I understand that this acts as a virtual firewall for my server.

I had trouble connecting to this EC2 instance over SSH, and it turned out the fix was setting the "Source" to 0.0.0.0/0 in the security group's "Inbound Rules", as shown in the image below.

Is it safe to keep it like this or should I restrict the source to the IP of my home network?

Nobody can ssh into my EC2 instance without the *.pem file, right?

Security Group Inbound Rules

  • 1
The age-old wisdom is that you keep the default rule as "deny everything from everywhere, always" and on top of that, you add other rules for stuff you want to allow, like ssh from your home address. This makes it easier to enforce the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege), the computer security guideline of "if you only allow what is necessary, you have exposed yourself to the minimum possible risk." – Bass Aug 10 '20 at 04:29

4 Answers

11

The way security works is not binary. Your instances are never "safe".

There are hundreds or thousands of attack vectors, and you make cost-benefit decisions about which of them to defend against. It's prohibitively expensive to be fully defended against all of them.

In your situation, any service or app that listens on the network interface can have a vulnerability, for example one that leaks data.

You've opened all TCP and UDP ports. TCP/22 is enough if you want to use that *.pem file over SSH, plus whatever other ports you know you need.

Even OpenSSH can have a vulnerability. So yes, it's better to allow only your home network's IP range.
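As a sketch of what that looks like with the AWS CLI (the group ID and the home range 203.0.113.0/24 are placeholders, substitute your own):

```shell
# Remove the allow-everything SSH rule (placeholder group ID)
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

# Allow SSH only from the home network range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.0/24
```

If your home connection has a single static address, use a /32 (e.g. 203.0.113.7/32) rather than a whole range.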

kubanczyk
  • 13,502
  • 5
  • 40
  • 55
9

Security is like an onion - it's all about layers, stinky ogre-like layers.

By allowing SSH connections from everywhere you've removed one layer of protection and are now depending solely on the SSH key, which is thought to be secure at this time, but in the future a flaw could be discovered reducing or removing that layer.

And when there are no more layers, you have nothing left.

A quick layer is to install fail2ban or similar. These daemons monitor your auth.log file and, as SSH connections fail, the offending IPs are added to an iptables chain for a while. This limits how many connection attempts a client can make per hour or day. I end up blacklisting bad sources indefinitely - but hosts that have to leave SSH listening promiscuously might still get 3000 failed root login attempts a day. Most are from China, with Eastern Europe and Russia close behind.
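A minimal sketch of a fail2ban jail for sshd (drop it in `jail.local` so upgrades don't overwrite it; the thresholds here are illustrative, tune them to taste):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = ssh
maxretry = 3      ; ban after 3 failed attempts...
findtime = 10m    ; ...within a 10-minute window
bantime  = 1d     ; keep the source IP banned for a day
```

Restart the fail2ban service after editing, and check the result with `fail2ban-client status sshd`.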

If you have static source IPs then including them in your Security Group policy is good, since it means the rest of the world can't connect. The downside: what if you can't come from an authorised IP for some reason, say your ISP assigns dynamic addresses or your link is down?

A reasonable solution is to run a VPN server on your instance, listening to all source IPs, and then once the tunnel is up, connect over the tunnel via SSH. Sure, it's not perfect protection, but it's one more layer in your shield of ablative armour. OpenVPN is a good candidate.
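The flow, sketched with OpenVPN (the profile name is a placeholder; 10.8.0.1 is the server's address on OpenVPN's default 10.8.0.0/24 tunnel subnet - yours may differ):

```shell
# Bring up the tunnel (profile name is a placeholder)
sudo openvpn --config client.ovpn &

# Then SSH over the tunnel address, not the public IP
ssh -i mykey.pem ec2-user@10.8.0.1
```

With this in place the security group only needs the VPN port (1194/udp by default) open to the world; 22/tcp can be closed entirely or limited to the VPC.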

You can also leverage AWS's "Client VPN" solution, which is a managed OpenVPN service providing access to your VPC. No personal experience of this, sorry.

Another (admittedly thin) layer is to move SSH to a different port. This doesn't do much beyond reducing the script-kiddie probes that default to port 22/tcp. Anyone trying hard will scan all ports and find your SSH server on 2222/tcp or 31337/tcp or whatever.
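If you do this, it's a one-line change in the daemon config (2222 here is just an example port):

```ini
# /etc/ssh/sshd_config -- move the daemon off 22/tcp
Port 2222
```

Restart sshd afterwards, and remember the security group inbound rule must allow the new port instead of 22 - lock yourself a second session open while testing, so a mistake doesn't strand you outside.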

If possible, you can investigate IPv6-only SSH; again, it merely limits the exposure without adding any real security. The number of unsolicited SSH connections over IPv6 is currently far lower than over IPv4, but still non-zero.
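For sshd specifically, that's another one-liner (this restricts the whole daemon, not individual listeners):

```ini
# /etc/ssh/sshd_config -- accept connections over IPv6 only
AddressFamily inet6
```

Your instance and security group would also need IPv6 enabled, and the inbound rule's source written as an IPv6 range (e.g. your home /56) rather than an IPv4 CIDR.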

Criggie
  • 2,219
  • 13
  • 25
  • 1
    Nitpick: it's not `/etc/hosts.deny`; `fail2ban` creates its own firewall (`iptables`) rules. This has, mostly, the same effect, but a user who tries that out for the first time is likely to be confused if they can't find their ip in `/etc/hosts.deny`. – Guntram Blohm Aug 08 '20 at 11:28
  • @GuntramBlohmsupportsMonica ahh yes sorry - I mentally mixed that with another solution that I use. Correcting now - (btw you can use [edit] to fix factual errors directly) – Criggie Aug 08 '20 at 13:09
  • If you are IP-restricting access to your SSH server, is that not an implicit description of a threat model in which RSA is broken? – Adam Barnes Aug 09 '20 at 04:16
  • @AdamBarnes Is it? Maybe could-be some day. Relying on only one thing is like driving a car equipped with airbags and not wearing a seatbelt because "the airbags are there" – Criggie Aug 09 '20 at 05:14
  • 1
    @AdamBarnes Not really. SSL has been broken before, and with defects like HeartBleed, things like the SSH private keys and other highly sensitive data were recoverable. There are more possible defects than simply a break of the protocol itself. Most hacks in my experience essentially go around the normal process somehow, instead of just charging through by brute force. – SplinterReality Aug 09 '20 at 05:17
  • 1
    AWS Client VPN provides another layer of protection, I use it for specific scenarios, but it has limited authentication options - AD, SAML, or certificate. If you want MFA you have to use AD / SAML. Client VPN has a security group that you define so you can limit what connected users can do, e.g. port 22 into a subnet / another security group. I use it only for private solutions with no internet connectivity, it's overkill for many solutions. Security group with home / work IP whitelisted and PKI is sufficient. Also not exactly cheap https://aws.amazon.com/vpn/pricing/ – Tim Aug 09 '20 at 09:35
6

If software were perfect you could leave your server completely open to the internet as you have, but in practice there are bugs and other ways to compromise a server.

Best practice is to open specific ports to only the minimum IPs to achieve your goals. For example:

  • Open up port 22 (SSH) to only the IPs that require it, such as your home or work IPs.
  • Open ports 80 and 443 to the world if you want to serve web traffic. For additional protection you can use a CDN / WAF such as CloudFront / CloudFlare (which has a free tier) and only open 80 / 443 to CloudFlare IPs.
  • Open database ports to specific IPs only if required. If you do this, your database also has to be configured to accept those connections, which RDS isn't by default.

You tend to only open other ports if absolutely required, and to the minimum number of IPs that will achieve what you need.
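Putting the list above together as an AWS CLI sketch (the group ID and the office range 198.51.100.0/24 are placeholders):

```shell
SG=sg-0123456789abcdef0   # placeholder security group ID

# HTTPS from anywhere -- public web traffic
aws ec2 authorize-security-group-ingress --group-id "$SG" \
    --protocol tcp --port 443 --cidr 0.0.0.0/0

# SSH only from the office (placeholder range)
aws ec2 authorize-security-group-ingress --group-id "$SG" \
    --protocol tcp --port 22 --cidr 198.51.100.0/24
```

Everything not explicitly authorized stays denied, which is exactly the least-privilege default the comments above recommend.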

Tim
  • 30,383
  • 6
  • 47
  • 77
3

The more restrictive you can be with your rules, the better.

Worth noting: some home ISPs use dynamic addresses, so if you find yourself unable to connect to your instance at some point, check that first.