
So, this is the situation: it seems we need to have TCP port 5432 open to the world, so that a customer can access his PostgreSQL database.

For obvious reasons we can't just say "no"; that's only an option as a very last resort.

What are the biggest risks? How can I defend our infrastructure?

Anyway: why shouldn't it be open to the world? It's probably more secure than some 20-year-old, unmaintained FTP server.

P.S. A VPN isn't OK. Some encryption maybe (as long as I can give him a JDBC connection URL that works).

    SSH tunnels aren't an option? This how-to article actually uses [PostgreSQL](http://blog.sensible.io/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html) as an example. You could provide the customer with a pre-configured SSH client to make it easy to connect. – Jeff Camera Sep 10 '14 at 03:28
  • @LuciferSam No. The DB will be used by an in-house developed Java application on around 100 company machines. Our only way to configure them is to give a JDBC connection URL to their local network administration; anything else is very, very problematic. –  Sep 10 '14 at 07:53
  • @milkman what does the app do? perhaps it could query a RESTful server instead? Obviously, passing SQL to REST doesn't gain anything, but assuming it is CRUD.. – tedder42 Sep 10 '14 at 20:21
  • @tedder42 It manipulates the database of the user's CMS, which is hosted by us as well. We don't have permission to change its source. –  Sep 11 '14 at 08:03

5 Answers


Require SSL, keep SELinux turned on, monitor the logs, and use a current PostgreSQL version.

Server side

Require SSL

In postgresql.conf set ssl=on and make sure you have your keyfile and certfile installed appropriately (see the docs and the comments in postgresql.conf).
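A minimal sketch of the relevant postgresql.conf lines (file names are illustrative; on 9.2+ the paths are configurable, older releases expect server.crt and server.key in the data directory):

ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'      # must be readable only by the postgres user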

You might need to buy a certificate from a CA if you want to have it trusted by clients without special setup on the client.

In pg_hba.conf use something like:

hostssl thedatabase theuser 1.2.3.4/32 md5

... possibly with "all" for user and/or database, and possibly with a wider source IP address filter.
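For example, a wider (purely illustrative) variant admitting any role to any database from a whole customer subnet would be:

hostssl all all 203.0.113.0/24 md5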

Limit users who can log in, deny remote superuser login

Don't allow "all" for users if possible; you don't want to permit superuser logins remotely if you can avoid the need for it.
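pg_hba.conf rules are matched top to bottom, so one way to do that (addresses are illustrative) is an explicit reject rule for the superuser above the permissive rule:

hostssl all postgres        0.0.0.0/0    reject
hostssl thedatabase theuser 1.2.3.4/32   md5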

Limit rights of users

Restrict the rights of the user(s) that can log in. Don't give them CREATEDB or CREATEROLE rights.
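For instance, a login role without any of the dangerous attributes (name and password are placeholders):

CREATE ROLE theuser LOGIN PASSWORD 'use-a-long-random-password'
    NOSUPERUSER NOCREATEDB NOCREATEROLE;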

REVOKE the CONNECT right from PUBLIC on all your databases, then give it back to only the users/roles that should be able to access that database. (Group users into roles and grant rights to roles, rather than directly to individual users).
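Roughly (using the placeholder names from above, plus a hypothetical group role):

REVOKE CONNECT ON DATABASE thedatabase FROM PUBLIC;
CREATE ROLE customer_app NOLOGIN;                        -- group role
GRANT CONNECT ON DATABASE thedatabase TO customer_app;
GRANT customer_app TO theuser;                           -- the login role picks up the group's rights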

Make sure users with remote access can only connect to the DBs they need, and only have rights to the schemas, tables, and columns within that they actually need. This is good practice for local users too, it's just sensible security.
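For example, with hypothetical schema and table names and the group role from the sketch above:

GRANT USAGE ON SCHEMA app TO customer_app;
GRANT SELECT, INSERT, UPDATE ON app.orders TO customer_app;
GRANT SELECT (id, created_at) ON app.invoices TO customer_app;   -- column-level grant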

Client setup

In PgJDBC, pass the parameter ssl=true:

To instruct the JDBC driver to try and establish a SSL connection you must add the connection URL parameter ssl=true.

... and install the server certificate in the client's truststore, or use a server certificate that's trusted by one of the CAs in Java's built-in truststore if you don't want the user to have to install the cert.
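Roughly like this (host name, file names, and passwords are placeholders; exactly where PgJDBC looks for trust material depends on the driver version, older ones go through the standard JSSE system properties):

jdbc:postgresql://db.example.com:5432/thedatabase?ssl=true

keytool -importcert -alias pg-server -file server.crt -keystore client-truststore.jks -storepass changeit

java -Djavax.net.ssl.trustStore=client-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit -jar theapp.jar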

Ongoing action

Now make sure you keep PostgreSQL up to date. PostgreSQL has only had a couple of pre-auth security holes, but that's more than zero, so stay up to date. You should anyway; bug fixes are nice things to have.

Add a firewall in front if there are large netblocks/regions you know you don't ever need access from.
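For example, with iptables and made-up addresses, a whitelist plus a default drop for the PostgreSQL port could look like:

iptables -A INPUT -p tcp --dport 5432 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP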

Log connections and disconnections (see postgresql.conf). Log queries if practical. Run an intrusion detection system, fail2ban, or similar in front if practical. For fail2ban with postgres, there is a convenient how-to here.
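The relevant postgresql.conf settings are something like this (the prefix is just one workable example; make sure it includes the client address, %h or %r, and stays consistent with whatever failregex you deploy):

log_connections = on
log_disconnections = on
log_line_prefix = '%m %r %q%u@%d '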

Monitor the log files.

Bonus paranoia

Extra steps to think about...

Require client certificates

If you want, you can also use pg_hba.conf to require that the client present an X.509 client certificate trusted by the server. It doesn't need to use the same CA as the server cert, you can do this with a homebrew openssl CA. A JDBC user needs to import the client certificate into their Java Keystore with keytool and possibly configure some JSSE system properties to point Java at their keystore, so it's not totally transparent.
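Roughly, the moving parts look like this (file names are placeholders; clientcert=1 is the 9.x spelling, newer releases use clientcert=verify-ca or verify-full):

# pg_hba.conf: require both a valid client certificate and a password
hostssl thedatabase theuser 0.0.0.0/0 md5 clientcert=1

# client side: import the client cert and key (as PKCS#12) into a Java keystore
keytool -importkeystore -srckeystore client.p12 -srcstoretype PKCS12 -destkeystore client-keystore.jks

java -Djavax.net.ssl.keyStore=client-keystore.jks -Djavax.net.ssl.keyStorePassword=changeit -jar theapp.jar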

Quarantine the instance

If you want to be really paranoid, run the instance for the client in a separate container / VM, or at least under a different user account, with just the database(s) they require.

That way if they compromise the PostgreSQL instance they won't get any further.
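For example (a sketch assuming Docker and the official postgres image; the name, password, and version tag are placeholders):

docker run -d --name customer-pg \
    -e POSTGRES_PASSWORD='use-a-long-random-password' \
    -p 5432:5432 \
    postgres:13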

Use SELinux

I shouldn't have to say this, but ...

Run a machine with SELinux support like RHEL 6 or 7, and don't turn SELinux off or set it to permissive mode. Keep it in enforcing mode.
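To check and keep it that way:

getenforce         # should print "Enforcing"
setenforce 1       # flip it back immediately if it isn't
# and persist it in /etc/selinux/config: SELINUX=enforcing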

Use a non-default port

Security by obscurity alone is stupidity. Security that uses a little obscurity once you've done the sensible stuff probably won't hurt.

Run Pg on a non-default port to make life a little harder for automated attackers.
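That's a one-line change in postgresql.conf plus the matching port in the client's connection URL (5433 below is an arbitrary example):

port = 5433

jdbc:postgresql://db.example.com:5433/thedatabase?ssl=true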

Put a proxy in front

You can also run PgBouncer or PgPool-II in front of PostgreSQL, acting as a connection pool and proxy. That way you can let the proxy handle SSL, not the real database host. The proxy can be on a separate VM or machine.

Use of connection pooling proxies is generally a good idea with PostgreSQL anyway, unless the client app already has a built-in pool. Most Java application servers, Rails, etc. have built-in pooling. Even then, a server-side pooling proxy is at worst harmless.
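A minimal pgbouncer.ini sketch, assuming a PgBouncer version with built-in TLS support (1.7 or later; older versions need stunnel in front) and placeholder paths:

[databases]
thedatabase = host=127.0.0.1 port=5432 dbname=thedatabase

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
client_tls_sslmode = require
client_tls_cert_file = /etc/pgbouncer/server.crt
client_tls_key_file = /etc/pgbouncer/server.key

Clients then point their JDBC URL at port 6432 on the proxy host instead of at the database itself.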

Craig Ringer
    If the client has a static $IP only allow that through the firewall to $port too. – user9517 Sep 09 '14 at 07:17
  • Thank you very much! PgJDBC has this parameter, but I can only give him a JDBC connection URL, and I am not sure whether it will work with his Java application (proprietary, undebuggable). OK, if not I will ask a new question. Thank you for your detailed answer! –  Sep 09 '14 at 09:07
  • @lain He is always moving, and this app is running on his laptop... but anyways, thank you! –  Sep 09 '14 at 09:09
  • @Milkman It can be passed as a URL option, like all the other parameters. `jdbc:postgresql://host:port/dbname?ssl=true` – Craig Ringer Sep 09 '14 at 11:32
  • Why not a VPN, maybe using client certificates instead of a password? I feel it's more versatile and secure to expose a VPN than to expose the DB directly. – Lesto Sep 09 '14 at 11:45
    @lesto Actually, I think exposing a VPN *massively* increases the attack surface compared to just one restricted service. People forget that the VPN then becomes an attack channel for any malware on the remote machine(s) to punch through all the perimeter security and get to the guts of the network. I only find them acceptable if they connect to a DMZ that treats them as being just as toxic as Internet hosts. – Craig Ringer Sep 09 '14 at 11:47
  • @CraigRinger I'm not saying to remove the rest of the protection, but to encapsulate the service in a VPN – Lesto Sep 09 '14 at 11:50
    @lesto Sure, agreed, a VPN can be a useful extra layer if it's not treated as Magic Security Sauce like many admins unfortunately do. – Craig Ringer Sep 09 '14 at 11:55

A simple extension to Craig's impressive action plan:

Maybe the user only uses a relatively small set of network providers (for example, his mobile network provider while on the move, his cable network at home, and his workplace's outgoing IP at work).

Most network providers have many IPs, but not really many subnets. So you can set up an iptables filter which limits PostgreSQL access to the network segments your customer actually uses. This greatly reduces the attack surface exposed to randomly chosen trouble sources on the net.

A simple support scenario:

  1. Your customer calls you: "I can't log in."
  2. You find out where he is connecting from with a tcpdump -i eth0 -p tcp port 5432 command.
  3. With a whois 1.2.3.4 you can get the subnet that IP belongs to. For example, it might be 1.2.3.0/24.
  4. With an iptables -A INPUT -s 1.2.3.0/24 -p tcp --dport 5432 -j ACCEPT (or something similar) you allow TCP connections from his new subnet.

There is a very good Perl script named uif which can provide permanent, intuitively declarable iptables rulesets (Google for "uif iptables").

peterh
  • Interesting idea, but that sounds a bit fragile. – nishantjr Sep 09 '14 at 15:26
  • @nishantjr Of course it is not a standalone solution, only a possibility to make things better. – peterh Sep 09 '14 at 15:41
  • A more practical approach might be to whitelist individual ISPs and/or countries, for ways to do this see e.g. http://stackoverflow.com/questions/16617607/parsing-ripe-allocations-regex – Josip Rodin Jan 28 '16 at 10:37

Here's a fairly simple Fail2ban configuration for PostgreSQL based on the HOWTO linked above but fine-tuned to actually work with Ubuntu packages, catch another error condition and skip over various debug messages to make it quicker:

/etc/fail2ban/filter.d/local-postgresql.conf:

[Definition]

failregex = <HOST>\(\d+\) FATAL:  password authentication failed for .+$
            <HOST>\(\d+\) FATAL:  no pg_hba.conf entry for host .+$

ignoreregex = duration:

/etc/fail2ban/jail.d/local-postgresql.conf:

[local-postgresql]
enabled  = true
filter   = local-postgresql
action   = iptables[name=PostgreSQL, port=5432, protocol=tcp]
           sendmail-whois[name=PostgreSQL, dest=root@localhost]
logpath  = /var/log/postgresql/postgresql-9.3-main.log
maxretry = 3
Josip Rodin

Fail2ban is a powerful tool, but don't trust that a filter will work as-is. Test any filter using the fail2ban-regex tool, and remember to escape any quotes (e.g. "admin" becomes \"admin\"). As an example, testing the following failregex line against an entry from my /var/log/postgresql/postgresql-9.3-main.log did not work for me.

fail2ban-regex '2016-09-20 14:30:09 PDT FATAL:  password authentication failed for user "admin"' '<HOST>\(\d+\) FATAL:  password authentication failed for .+$'

The above gave me

Failregex: 0 total

I had to update the failregex to match the log format.

fail2ban-regex '2016-09-20 14:30:09 PDT FATAL:  password authentication failed for user "admin"' 'FATAL:  password authentication failed for user \"<HOST>\"'

This gave me a positive result.

Failregex: 1 total

The fail2ban-regex test can also be run against entire log files.

fail2ban-regex /var/log/postgresql/postgresql-9.3-main.log /etc/fail2ban/filter.d/postgresql.local

The above gave me the following positive result with the updated failregex.

Failregex: 169 total

metersales

I was having trouble configuring fail2ban because the IP address is missing from the default PostgreSQL log configuration. I am using PostgreSQL 12 and 13 on Debian 10, so if anybody faces the same thing, here is how to fix it.

/etc/postgresql/13/main/postgresql.conf

log_line_prefix = '%m {%h} [%p] %q%u@%d '

The default was '%m [%p] %q%u@%d '. I added %h to pass the client IP address to fail2ban.

/etc/fail2ban/filter.d/postgresql.conf

[Definition]
failregex = \{<HOST>\} .+? FATAL:  password authentication failed for user .+$

/etc/fail2ban/jail.d/postgresql.conf

[postgresql]

enabled = true
filter = postgresql
logpath = /var/log/postgresql/postgresql*.log
maxretry = 3
bantime = 86400
port = 5432

Then restart the fail2ban service: systemctl restart fail2ban.

And watch the logs: tail -f /var/log/fail2ban.log.
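To check that the filter and the new log prefix actually line up, fail2ban-regex can first be run against a sample line (the line below is made up to match the prefix above):

fail2ban-regex '2021-01-10 10:15:30.123 UTC {203.0.113.7} [12345] app@appdb FATAL:  password authentication failed for user "app"' /etc/fail2ban/filter.d/postgresql.conf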