
I'm looking for some best practices documentation for implementation of a reverse proxy.

We need to allow the outside world incoming access to an internal database/web server, and we are trying to determine the most efficient and secure way to accomplish this.

We already have a Red Hat 2 server running an Apache mod_proxy configuration, but since that server is ancient we want to look at today's best practices.

I've googled quite a bit but so far have been unable to find anything more specific than extremely general advice on commonplace security best practices.

I would greatly appreciate anything you could provide.

Thanks! Erik

Irongrave
  • This depends on so many factors, from your very specific requirements to the proxy server used (Apache vs nginx vs Varnish vs HAProxy vs $you_name_it), but if you reach 100 points I'd make a writeup :) – that guy from over there Jan 09 '14 at 20:52

1 Answer


NIST SP 800-44 Guidelines on Securing Public Web Servers is a good starting point, though it's no magic bullet (and it's a few years old now).

In my experience some of the most important requirements and mitigations, in no particular order, are:

  • Make sure that your proxy and back-end web (and DB) servers cannot establish direct outbound (internet) connections (including DNS and SMTP, and particularly HTTP). This means using (forward) proxies/relays for any outbound access that is genuinely needed.
  • Make sure your logging is useful (§9.1 in the above), and coherent. You may have logs from multiple devices (router, firewall/IPS/WAF, proxy, web/app servers, DB servers). If you can't quickly, reliably and deterministically link records across each device together, you're doing it wrong. This means NTP, and logging any or all of: PIDs, TIDs, session-IDs, ports, headers, cookies, usernames, IP addresses and maybe more (and may mean some logs contain confidential information).
  • Understand the protocols, and make deliberate, informed decisions: including cipher/TLS version choice, HTTP header sizes, URL lengths, cookies. Limits should be implemented on the reverse-proxy. If you're migrating to a tiered architecture, make sure the dev team are in the loop so that problems are caught as early as possible.
  • Run vulnerability scans from the outside, or get someone to do it for you. Make sure you know your footprint and that the reports highlight deltas, as well as the theoretical TLS SNAFU du-jour.
  • Understand the modes of failure. Sending users a bare default "HTTP 500 - the wheels came off" page when you have load or stability problems is sloppy.
  • Monitoring, metrics and graphs: having normal and historic data is invaluable when investigating anomalies, and for capacity planning.
  • Tuning: from TCP TIME_WAIT to listen backlogs to SYN cookies, again you need to make deliberate, informed decisions.
  • Follow basic OS hardening guidelines, consider the use of chroot/jails, host-based IDS, and other measures, where available.
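To make the first bullet (no direct outbound connections) concrete, here is a minimal default-deny egress policy for a back-end host in iptables-restore format. The relay addresses and the forward-proxy port are placeholders I've invented for illustration, not values from the answer:

```
# Hypothetical egress policy for a back-end web/DB host.
# Default-deny outbound; allow only replies and explicit relays.
*filter
:OUTPUT DROP [0:0]
# Replies to connections initiated from outside (e.g. the reverse proxy)
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# DNS only via a designated internal resolver (placeholder address)
-A OUTPUT -p udp --dport 53 -d 10.0.0.53 -j ACCEPT
# SMTP only via an internal mail relay (placeholder address)
-A OUTPUT -p tcp --dport 25 -d 10.0.0.25 -j ACCEPT
# Outbound HTTP only through an internal forward proxy (placeholder)
-A OUTPUT -p tcp --dport 3128 -d 10.0.0.80 -j ACCEPT
COMMIT
```

The point of the default-deny OUTPUT policy is that a compromised back end cannot phone home or exfiltrate data directly; everything outbound has to traverse a relay you control and log.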
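For the "limits on the reverse-proxy" bullet, an nginx configuration fragment might look like the following. The specific values are examples only, not recommendations; you need to size them against your application's real headers, bodies and clients:

```
# Illustrative nginx reverse-proxy limits (values are examples):
http {
    large_client_header_buffers 4 8k;   # cap request header sizes
    client_max_body_size 1m;            # cap request body size
    ssl_protocols TLSv1.2 TLSv1.3;      # explicit TLS version choice
    ssl_ciphers HIGH:!aNULL:!MD5;       # deliberate cipher selection
}
```

Enforcing these at the proxy means oversized or malformed requests are rejected before they ever reach the back-end tier.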
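For the modes-of-failure bullet, a friendlier alternative to the bare default 500 can be served statically from the proxy itself, so it still works when the back end is down. A sketch in nginx (paths are placeholders):

```
# Sketch: static error page served by the reverse proxy when the
# back end fails (file and path names are placeholders):
error_page 500 502 503 504 /maintenance.html;
location = /maintenance.html {
    root /var/www/errors;
    internal;
}
```

Keeping the page static and local is the point: it must not depend on the very tier whose failure it is reporting.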
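The tuning bullet can be made concrete with a few Linux sysctl settings. These are real tunables, but the values below are purely illustrative; the deliberate, informed decision is choosing them for your own workload:

```
# Illustrative /etc/sysctl.conf entries (example values only):
net.ipv4.tcp_syncookies = 1          # resist SYN-flood attacks
net.ipv4.tcp_fin_timeout = 30        # shorten FIN-WAIT-2 lifetime
net.core.somaxconn = 1024            # ceiling for listen() backlogs
net.ipv4.tcp_max_syn_backlog = 2048  # half-open connection queue
```

Apply with `sysctl -p`, and note that the application must also request a larger backlog in its `listen()` call for `somaxconn` to matter.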
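The logging bullet's "link records across each device" requirement can be sketched in a few lines: stamp each request with an ID at the proxy, log it on every tier, then join on that ID. The log formats and field names below are invented for illustration:

```python
# Sketch: correlating proxy and application log records on a shared
# request ID. Formats here are hypothetical, not from the answer.
import re

PROXY_RE = re.compile(r'(?P<ts>\S+) reqid=(?P<reqid>\S+) client=(?P<ip>\S+)')
APP_RE = re.compile(r'(?P<ts>\S+) reqid=(?P<reqid>\S+) user=(?P<user>\S+)')

def correlate(proxy_lines, app_lines):
    """Join proxy and application records deterministically by request ID."""
    by_id = {}
    for line in proxy_lines:
        m = PROXY_RE.match(line)
        if m:
            by_id[m.group('reqid')] = {'client': m.group('ip')}
    for line in app_lines:
        m = APP_RE.match(line)
        if m and m.group('reqid') in by_id:
            by_id[m.group('reqid')]['user'] = m.group('user')
    return by_id

proxy = ['2014-01-09T20:52:01Z reqid=abc123 client=203.0.113.7']
app   = ['2014-01-09T20:52:01Z reqid=abc123 user=erik']
print(correlate(proxy, app))
# → {'abc123': {'client': '203.0.113.7', 'user': 'erik'}}
```

This only works if clocks are synchronised (NTP, as the answer says) and the ID is generated once at the edge and propagated, e.g. via a request header, rather than re-generated per tier.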
mr.spuratic