
Preloading is a blunt operation. You must commit to it for a year or more, and "be aware that inclusion in the preload list cannot easily be undone," according to the registration tool. Therefore, if there is ANY chance of an error, it is prudent NOT to preload, at least for a while. During that time your website is wide open to MiTM attacks on each visitor's first request (before the browser has seen the HSTS header).

Is there any simpler solution, one that doesn't have the problems with redirection, http access, and preloading?

  • It seems like this question and your answer were just created to stand on a soapbox. This is not a security _peer_-review site or an internet standards body. If you would like to propose changes to the DNS system, I would suggest talking to people with the most influence in those standards bodies, particularly browser, networking hardware, and OS vendors. – Ghedipunk Dec 27 '19 at 20:17
  • This is a forum on information security. Where better to raise a new idea first, especially for someone like me with no experience in working with standards bodies? – David Spector Dec 27 '19 at 21:08
  • @DavidSpector You are wrong. The Stack Exchange network is not "a forum", we do not discuss "ideas". This is a question and answer site, and your "question" is not a question, it's a thinly disguised rant. And there is no problem with preloading either. –  Dec 27 '19 at 21:10
  • StackExchange is a collection of Q&A sites, rather than a discussion forum, and our little corner of Infosec is less of a "what if" Q&A forum than most. While great discussions happen that lead to further understanding, the focus is on the questions and answers, which provide quick knowledge to those involved and people coming later. Can I suggest asking why browsers use preload lists and HSTS instead of looking for DNS records? (There's a bit of insight when following that line, as Conner's response to your answer suggests.) – Ghedipunk Dec 27 '19 at 21:14
  • Many websites (https://www.usenix.org/system/files/conference/foci18/foci18-paper-syverson.pdf) have explained why a central list such as the preload list cannot scale up to the size of the Web when we have achieved the Secure Web that W3C has proposed. – David Spector Dec 27 '19 at 21:18
  • The registration tool itself recommends a ramp-up process to weed out errors. – schroeder Dec 27 '19 at 21:51
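For reference, the ramp-up the registration tool recommends is a staged increase of max-age before adding the preload token: roughly five minutes, then a week, then a month, and only then the preload-eligible value (values as suggested by hstspreload.org; adjust to taste):

```http
Strict-Transport-Security: max-age=300
Strict-Transport-Security: max-age=604800
Strict-Transport-Security: max-age=2592000
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
```

Each stage limits the damage window if TLS turns out to be misconfigured, which is exactly the "weed out errors" point above.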

2 Answers


Preloading isn't as dangerous as you're making it sound. The only requirement for it to not break your site is that you have TLS working, and if TLS weren't working, then your site is unsafe anyway. The right answer is "just preload".
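For concreteness, "just preload" means serving an HSTS header that meets the preload list's requirements and then submitting the domain at hstspreload.org. A sketch (the list requires a max-age of at least one year, plus the includeSubDomains and preload tokens):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```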

  • There are many known problems with preloading, including the fact that it is managed by a commercial company, the fact that the entire list of preloaded domains must be part of every browser's database (so it cannot scale up to the entire secure future web), and that domains cannot be removed from the preload list. I would appreciate your removing your downvote. – David Spector Dec 27 '19 at 20:12
  • How is the preload list being managed by Google any more of a problem than your root CA list being managed by Microsoft/Apple/Canonical? The preload list is tiny these days compared to modern disk sizes. You shouldn't ever have to remove a domain, but if you do for some bizarre reason, you can; it's just not quick or easy. – Joseph Sible-Reinstate Monica Dec 27 '19 at 20:16
  • The answer is that the DNS was designed to scale up well. A list in a single database is not. The root CA list is rather small as compared with a list of all domains in the world. – David Spector Dec 27 '19 at 21:09
  • So it's okay for commercial companies to manage small lists, but not big ones? – Joseph Sible-Reinstate Monica Dec 27 '19 at 21:11
  • It is better for neutral agencies to be in charge of important lists. That is why ICANN was formed, no? – David Spector Dec 27 '19 at 21:12
  • But you just said that it's okay for commercial companies to manage root CA lists, and those are more important than preload lists. – Joseph Sible-Reinstate Monica Dec 27 '19 at 21:13
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/102618/discussion-between-david-spector-and-joseph-sible-reinstate-monica). – David Spector Dec 27 '19 at 21:19

Currently, most URL redirection from http to https is done in error-prone ways (depending on the expertise of developers, webmasters, and hosting companies), such as using the Apache Redirect directive and/or the mod_rewrite engine. Browsers themselves cannot step in and change the scheme unless they know that HSTS applies.
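As an illustration of the Apache approach, a typical redirect virtual host might look like this (a sketch assuming a name-based virtual host for a hypothetical example.com; the mod_rewrite variant is even easier to get wrong):

```apache
<VirtualHost *:80>
    ServerName example.com
    # Send every cleartext request to the HTTPS site with a permanent redirect
    Redirect permanent / https://example.com/
</VirtualHost>
```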

One solution might be to eliminate HSTS, and instead add a new flag to the DNS zone records, declaring that a domain supports https, and not http.

If an agent or browser sees this flag during its DNS lookup, it would silently rewrite the user's HTTP scheme to HTTPS. This rewrite could not break the site, since the authoritative DNS zone has declared that the domain supports HTTPS.
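The client-side rewrite this describes can be sketched as follows. The flag name and the zone_flags set are hypothetical, standing in for whatever DNS record would carry the declaration:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical flag name; no such DNS record exists today.
HTTPS_ONLY_FLAG = "https-only"

def upgrade_scheme(url: str, zone_flags: set) -> str:
    """Rewrite an http:// URL to https:// when the domain's DNS zone
    (simulated here by zone_flags) declares it is HTTPS-only."""
    parts = urlsplit(url)
    if parts.scheme == "http" and HTTPS_ONLY_FLAG in zone_flags:
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

With the flag present, `upgrade_scheme("http://example.com/page", {"https-only"})` yields the https:// form of the URL, while a domain without the flag is left untouched.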

By gradually eliminating every http request that can reasonably be eliminated, cleartext negotiation would fade from the Internet, leaving us all increasingly secure while still supporting old links, small-device URLs, edge cases, etc.

  • DNS lookups are possibly the only thing that are *easier* to intercept en-masse than a plain HTTP request. Therefore, I'm not sure why you believe that this is more secure than not using HSTS. – Conor Mancone Dec 27 '19 at 20:53
  • The answer is that a flag indicating that a domain can support https connections is not harmful. I believe that this is more secure than using HSTS because HSTS allows a MiTM attack when preloading is not done. – David Spector Dec 27 '19 at 21:11
  • "when preloading is not done" So do preloading. – Joseph Sible-Reinstate Monica Dec 27 '19 at 21:12
  • @DavidSpector to be clear: your issue with HSTS is that if you aren't on the preload list, then you are vulnerable to a MitM. You suggest fixing this with an additional DNS check. **However**, DNS happens over plain text and can also be spoofed by an attacker. As a result, your proposed solution is *also* vulnerable to a MitM. Therefore it seems to me that your proposed solution doesn't actually solve anything. – Conor Mancone Dec 27 '19 at 21:18
  • Interesting. After being soundly voted down for no good reason, yours is an actual good reason. Is there a technical problem in requiring DNS servers to be secure (https, TLS, and port 443)? – David Spector Dec 27 '19 at 21:31
  • Yes, an attacker could appear to change the DNS zone. But this is not a real problem, since a mistake either way can at worst make the site inaccessible. It can't leak secure information. – David Spector Dec 27 '19 at 21:45
  • @DavidSpector you just opened a gigantic can of worms. Normally DNS doesn't operate over HTTPS, but there is actually a proposal to do exactly that, and some browsers have already rolled it out to **heavy** criticism. It's been very controversial – Conor Mancone Dec 27 '19 at 21:51
  • Your answer to "how can it be avoided?" is to propose a new, untested, unavailable process that would require changes to a few different standards? This isn't so much an answer as it is a wish list. – schroeder Dec 27 '19 at 21:53
  • "soundly voted down for no good reason"... You asked a question and answered it in a way that people of the site don't find useful. For me (I can't speak for others), it came across as trying to set yourself as an authority over the various committees that set web security policy. Pick yourself up and dust yourself off; I see a related question that would probably be well received around here, and I'm inviting you to ask it, since it was your idea: "Why did the browsers choose to implement preload lists and HSTS over, say, checking custom DNS records?" – Ghedipunk Dec 27 '19 at 21:57
  • An *actual* answer on the client side is to use HTTPS Everywhere (https://www.eff.org/https-everywhere) – schroeder Dec 27 '19 at 22:06
  • I can understand controversy, and I can understand how I can be hated for proposing something new. But, technically, I'm still not understanding why the insecurity of the DNS system has anything to do with my proposal. If a malicious user makes it seem like a declared https site is actually an http site, the only result is that a user attempting to visit the site is told that he or she cannot visit a secure site using an http URL. How does that further the aims of any malicious user, other than just another DoS attack, which can be dealt with in the usual ways? – David Spector Dec 27 '19 at 22:49
  • To schroeder: the H-E project hosts a database of rulesets, so it suffers from the same scaling problem as HSTS+Preload. – David Spector Dec 27 '19 at 22:55
  • To Ghedipunk: thank you, good idea. I will ask that question. – David Spector Dec 27 '19 at 22:56