
I'm developing a system that allows access to client devices behind firewalls, using port tunneling via SSH. Every client has a dedicated Linux-based server, with port 80 tunneled to a public server. That way clients can have dynamic IP addresses and firewalls enabled; no port forwarding on the router is needed.

Every client has its own port, e.g.

  • Client A -> port 8888
  • Client B -> port 8889
  • Client C -> port 8890

They can connect from anywhere to their own web server simply by opening http://mypublicserver.com:[their_assigned_port_no]
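The tunnel itself is a plain SSH remote port forward. A sketch for Client A, assuming a dedicated, unprivileged `tunnel` user on the public server and key-based authentication:

```shell
# On Client A's server: expose local port 80 as port 8888 on the
# public server; autossh re-establishes the tunnel if it drops.
autossh -M 0 -N \
    -o "ServerAliveInterval 30" \
    -o "ServerAliveCountMax 3" \
    -R 8888:localhost:80 tunnel@mypublicserver.com
```

Note that for the forwarded port to be reachable from outside the public server, `GatewayPorts` has to be enabled in its `sshd_config`; otherwise sshd binds the forward to the loopback interface only.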


I can see some security issues, though.

  1. An attacker can simply scan http://mypublicserver.com's ports (65k of them) and learn how many clients are currently connected
  2. Having that, he can attack their login screens: DoS their machines, try accessing phpMyAdmin, brute-force credentials, etc.

There is also a limitation, as I cannot assign more than ~60k clients. It isn't a problem at the moment, as I don't expect to hit more than 10k of them. Still, guessing a port number is not hard at all. It is also prone to mistyping: somebody might try to log in with his own credentials into somebody else's machine because he typed 8989 instead of 8898.

As a defence, I thought of an additional string value (a hash or something) sent with the login page request. If it is missing or incorrect, return 404. That way you can't get to the login page if you don't know the string value, but it is almost impossible to use without having the site bookmarked: a short string would be easier to remember, but also easier to crack.
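For the string value, length matters more than memorability. A sketch of generating a per-client secret path segment (the `/c/` URL layout is just an assumption):

```shell
# 16 random bytes -> 32 hex characters, about 128 bits of entropy,
# which is far beyond what port/path scanning can brute-force.
token=$(openssl rand -hex 16)
echo "https://mypublicserver.com/c/${token}/"
```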

Another way (and this is currently my favourite one) is to log in via the web interface of http://mypublicserver.com. You log in with another set of credentials, which redirects you to http://mypublicserver.com:[your_port] with an authentication key (stored inside mypublicserver's database) sent as POST data. After a successful login the key is invalidated, a new one is generated, and it is sent back to mypublicserver from the client's webserver.

But this also means having to remember two sets of credentials, probably leading to confusion ("Why do I need to log in twice?!").

I have already:

  • changed the default location of phpMyAdmin
  • introduced an account lock after more than 10 failed login attempts
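The account lock could also be backed by a network-level ban. A hypothetical fail2ban jail (the jail name, filter, and log path are all assumptions about the client app):

```conf
# /etc/fail2ban/jail.local (sketch)
[client-webapp]
enabled  = true
port     = http,https
filter   = client-webapp        # a filter matching failed-login log lines
logpath  = /var/log/clientapp/auth.log
maxretry = 10
bantime  = 3600                 # seconds
```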

What do you think of that? Did I miss something important? How can I secure that system?

Mark

3 Answers


I'm afraid that you have already fallen at the first hurdle, since you are using HTTP everywhere. This means you are pushing unencrypted login traffic over the Internet, where it is subject to interception and misuse. You must have HTTPS at least from the client to the edge server.

You also haven't taken denial-of-service attacks into account, though you haven't said whether you consider that important. If all of the services are on guessable ports, an attacker can simply start streaming traffic at those.

Using multiple ports also makes it hard for clients to connect when they are themselves behind a firewall. I'd recommend using a reverse proxy at the edge of your server network. It can be both the HTTPS terminator and the load balancer (if needed), and it can route URLs to the appropriate back-end services.

On the front-end, clients should use a folder to identify their origin:

https://myserver.com/servicea
https://myserver.com/serviceb
etc.

The reverse proxy can redirect those to the correct back-end ports.
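A minimal sketch of such a proxy in nginx (certificate paths and upstream ports are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name mypublicserver.com;

    ssl_certificate     /etc/ssl/certs/mypublicserver.crt;
    ssl_certificate_key /etc/ssl/private/mypublicserver.key;

    # Map per-client paths to the tunnelled back-end ports
    location /servicea/ {
        proxy_pass http://127.0.0.1:8888/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
    location /serviceb/ {
        proxy_pass http://127.0.0.1:8889/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```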

You don't need anything clever with IDs/passwords, though if you wanted to tighten things up further, you could use the reverse proxy or a firewall at the server end to restrict connections by source IP address.
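Restricting by source IP can live in the same proxy configuration; a sketch for one client (the address range is the documentation example range, not a real one):

```nginx
location /servicea/ {
    allow 203.0.113.0/24;   # Client A's known addresses (assumed)
    deny  all;
    proxy_pass http://127.0.0.1:8888/;
}
```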

Julian Knight
  • I tried enabling HTTPS, but with tunneling the browser reports a warning (as it should, because the data now comes from another server while the URL stays `http://mypublicserver.com`) and I don't know how to make it work. Is it even possible? – Mark Sep 21 '16 at 13:59
  • You need an HTTPS reverse proxy/load balancer – Josef Sep 21 '16 at 15:08
  • @Mark: From client to edge server should be HTTPS. The system running the proxy could be any decent web server but NGINX is particularly good at that sort of thing. Or it could be a dedicated web proxy. The proxy has to handle both directions of traffic so if you don't run HTTPS internally, you need to translate outgoing traffic from HTTP to HTTPS as well as the reverse for incoming. Different question needed for that, already well covered elsewhere. Certainly possible, very common. – Julian Knight Sep 21 '16 at 15:52
  • [This search may give some clues](https://www.google.co.uk/search?q=how+to+reverse+https+proxy) obviously the exact config depends on your server architecture. – Julian Knight Sep 21 '16 at 15:53
  • Thank you for your post. I set up a proxy using this tutorial as a reference https://www.leaseweb.com/labs/2014/12/tutorial-apache-2-4-transparent-reverse-proxy/ so I can hide port numbers and assign user-friendly domains. The problem with HTTPS persists, but I think it's material for another question (and probably not on the security site, but rather Server Fault). – Mark Sep 22 '16 at 10:04
  • Glad to hear it helped. – Julian Knight Sep 22 '16 at 11:29

This should be a comment, but it's a bit long.

I'm struggling to decode your description of the problem. I think you're trying to say that you have a large number of web servers behind a firewall, to which you want to permit restricted access via a gateway device exposed on the Internet.

I'm guessing that the origin webservers don't have unique public IP addresses, and hence they are ssh clients to the gateway device.

Sorry, but I think this is a bad design, and if you want to solve these problems you need to back up a bit. Even if scalability is not an issue, you've already run into limitations in how you route information to the back end, and it's probably rather fragile.

I would configure the gateway device as a transparent forward HTTP proxy requiring authentication, connected to a VPN. The other end of the VPN would be inside the protected network, at a secondary proxy capable of content-based routing. In this model, you can provide a single SSL certificate at the gateway for all access (you never mentioned SSL; it is pretty much essential for this to be considered "secure"). You have the option of implementing the gateway authentication using various methods (a cookie, NTLM, or username and password) which could safely be cached by the clients.

symcbean
  • Could you draw a diagram of what it should look like? I don't really get the secondary proxy idea... Sorry. BTW I'll update my question in a moment, providing a diagram of how it is configured now. Thanks. – Mark Sep 21 '16 at 14:03

You could deploy a VPN server connected to the webservers' LAN, and configure the webservers to only accept connections from the VPN:

Client -> VPN -> Proxy -> Webserver

Your clients have only one login and password, and nobody can connect to the webservers from outside the VPN. And you can deploy HTTPS.

You don't have to make an overcomplicated setup for this.
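A sketch of the relevant server-side OpenVPN settings under this design (file names assumed); client-to-client routing stays off simply because the `client-to-client` directive is not enabled:

```conf
# server.conf (excerpt, sketch)
port 1194
proto udp
dev tun
ca   ca.crt
cert server.crt
key  server.key
dh   dh.pem
server 10.8.0.0 255.255.255.0
remote-cert-tls client    # require a valid client certificate
# no "client-to-client" line: clients cannot reach each other
```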

ThoriumBR
  • Does it mean every client (connecting with a web browser) has to be inside the VPN to get through? – Mark Sep 22 '16 at 06:14
  • Yes, every single client must be inside the VPN. Just configure the VPN to be certificate-based, and deny client-to-client routing. – ThoriumBR Sep 22 '16 at 11:40
  • I guess that is impossible, as the goal is to allow remote login from any place or device, without the hassle of VPN configuration. Or did I misunderstand you, and you can establish a VPN connection without any additional steps, just by visiting some authorization site and providing valid credentials? – Mark Sep 22 '16 at 11:46