
I've set up DNS RPZ to "redirect" users to a walled garden when they try to access a list of bad sites.

Let's say a user tries to access badsite.com. The redirection to my walled garden works for HTTP connections, but HTTPS connections produce certificate errors because the URL in the browser remains badsite.com while the resolved IP actually connects to my walled garden (walledgarden.example.com).

Is there any way to resolve the certificate errors when using DNS RPZ for redirection on https connections?

thelok
  • You can create your own CA and have it trusted by every PC and piece of software in use. But isn't it enough that the users get a certificate error? If they accept it, they have at least seen the warning; otherwise, they close the page. – sebix Mar 02 '15 at 19:12
  • Agree with @sebix. Either way, you've blocked access to the bad sites. – Andrew Schulman Mar 02 '15 at 20:02
  • While I agree that it is good that the bad site is blocked, this fosters an environment where no one ever looks at the certificate anymore and simply clicks "Accept" -- this happens a lot nowadays. It would be nicer if users could be redirected to a walled garden telling them they tried to access a bad site, rather than having them add a certificate exception just to see a warning. Worst case, they accept the bad certificate and end up going to the malicious site. – thelok Mar 02 '15 at 22:17
  • @thelok You could serve a record pointing to an address with nothing listening on 443/tcp. That way, no certificate errors that they can learn to handle incorrectly and also blocked. – Håkan Lindqvist Mar 03 '15 at 05:06

1 Answer


For a moment, I want you to pretend you are a malware author who has successfully compromised the DNS servers of your company. You are trying to use DNS to serve bogus IP addresses whenever someone tries to visit a bank. Unfortunately, you can't get those confounded browser warnings to go away when HTTPS gets invoked.

That's basically what you're asking us to help you with. Your intentions are benign compared to this supposed malware author, but that doesn't change the fact that the technology is working as intended here. You cannot design security around intent.


Since the policy action occurs at the DNS level, there's no way to know whether the user is using HTTP or HTTPS at the time the query is sent. The only thing you can control is whether or not you're going to return an IP address, and what that IP address is.

Once you've arrived at this point, this is a basic HTTPS hijacking scenario. All the same rules apply. If you're in a position to manipulate the trusted CAs, you can manipulate the browsers. Other than that, no dice.

You have four options here:

  1. Follow the suggestion of sebix in the comments: push a CA cert to every workstation that will be subject to this RPZ protection. If this is an enterprise, it's perfectly doable, and in the best-case scenario such a CA cert might already exist.
  2. Deal with things as they are now, which provides a way for people to see a description of why they aren't getting to the site in question.
  3. Change your rewrite to prevent them from getting a webpage at all. Instead of sending them to a webpage, "eat" the query with rpz-drop., CNAME . (NXDOMAIN), or CNAME *. (NODATA); a sketch of each follows this list.
  4. Choose an IP address that will always refuse the port 443 connection and give it an A record that suggests what is going on at the policy level. Have your rewrite CNAME point to this record. This will at least give a technical person some sort of breadcrumb to find when they begin troubleshooting. Obviously these technical people will be in the minority, but it's better than nothing.
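
A sketch of the #3 rewrites, reusing badsite.com from the question as the trigger name. Only one rewrite can be active per name, since a CNAME cannot coexist with other records at the same owner, so the alternatives are shown commented out:

; RPZ zone file
$ORIGIN example.rpz.
badsite.com  IN CNAME rpz-drop.   ; drop the query entirely (no response sent)
;badsite.com IN CNAME .           ; rewrite to NXDOMAIN
;badsite.com IN CNAME *.          ; rewrite to NODATA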

An example of #4 would be something like this:

; RPZ zone file
$ORIGIN example.rpz.
badsite.com IN CNAME filtered-malware-site.example.com.

; normal zone file
$ORIGIN example.com.
filtered-malware-site IN A 203.0.113.1
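
For completeness, the policy zone also has to be declared as a response-policy zone on the resolver. A minimal sketch, assuming BIND 9 is the resolver; the zone name example.rpz comes from the example above, the file path is a placeholder, and the SOA/NS records (omitted above for brevity) are still required for the zone to load:

// named.conf fragment
options {
    response-policy { zone "example.rpz"; };
};

zone "example.rpz" {
    type master;
    file "/etc/bind/db.example.rpz";   // placeholder path
    allow-query { localhost; };        // clients never need to query the policy data directly
};
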
Andrew B
  • Just as a note, frivolous use of the first option can be damaging both to the security of your users and to their trust in you. (Adding a trusted CA cert has huge implications for both security and privacy; the use and protection of such a cert should be carefully considered.) – Håkan Lindqvist Mar 03 '15 at 05:40
  • @Håkan Agreed. Such a CA is best managed by your security team, if it must exist. Certs issued by such a CA have the potential to be incredibly damaging to the business *and* personal assets of everyone on the network. It must be protected and its usage carefully monitored. – Andrew B Mar 03 '15 at 06:53
  • Thank you for the well thought out answer and suggestions. – thelok Mar 03 '15 at 12:29
  • As a follow-up question, if I pushed a CA cert to machines, would that solve the issue of the cert warnings? I've tried generating my own certs with a CN and Alternate CN of badsite.com, and wildcard entries like *.com, etc., but how would a list of thousands of sites generally get accepted by a browser? Or would this need to be done with a proxy that decrypts the SSL communication and generates dynamic certificates for each requested site? – thelok Mar 03 '15 at 12:41