49

A website I frequent has finally decided to enable TLS on its servers, but, unlike many websites out there, chooses not to mandate it. The maintainer claims that TLS must be optional. Why?

On my own website I have long since set up mandatory TLS and HSTS with a long max-age, and the weaker cipher suites are disabled. Plaintext access is always answered with an HTTP 301 redirect to the TLS-protected version. Does this affect my website negatively?
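For reference, the redirect half of the setup described above can be sketched in a few lines of Python (a simplified stand-in for what would normally be a web-server rewrite rule; the hostnames are hypothetical). Note that no `Strict-Transport-Security` header is sent here: browsers ignore HSTS on non-HTTPS responses, so that header belongs on the TLS side.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    """Answer every plain-HTTP request with a 301 to the HTTPS version."""

    def do_GET(self):
        # Preserve the requested host and path, drop any port suffix.
        host = self.headers.get("Host", "example.com").split(":")[0]
        self.send_response(301)
        self.send_header("Location", "https://%s%s" % (host, self.path))
        self.end_headers()

    do_HEAD = do_GET

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_redirector(port=8080):
    """Run the plain-HTTP listener; TLS itself is served elsewhere."""
    HTTPServer(("", port), RedirectToHTTPS).serve_forever()
```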

Jonas Schäfer
Maxthon Chan
  • 12
They may fear that HSTS will get them into trouble if anything goes wrong (i.e. their free CA stops issuing certificates or is removed from browser trust stores due to some issue). With the current TLS ecosystem, you create dependencies on the trusted CAs and the browser vendors. That is currently hard to avoid and worth it, but you can still see it as a problem, and see not enforcing HTTPS as a way to stay independent in case something happens. – allo Apr 02 '17 at 11:03
  • Anyone want to mention the requirement of TLS for HTTP/2, which is far faster than HTTP/1.1? Good for you doing HSTS; I recently submitted my site for the HSTS preload list, hopefully I can just disable port 80 altogether. – Jacob Evans Apr 02 '17 at 20:58
  • See this: https://security.stackexchange.com/questions/53250/why-do-some-websites-enforce-lack-of-ssl – d33tah Apr 03 '17 at 10:23
  • Because certificate authority fees may be [too expensive to afford](http://serverfault.com/a/161291/139844)? – gerrit Apr 04 '17 at 18:18
  • 4
    @gerrit This argument does not stand in front of low-cost and free certificate authorities like Let's Encrypt. – Maxthon Chan Apr 04 '17 at 18:20
  • 1
    Let's Encrypt does not work with every host, and it isn't as simple as using a better host. I use App Engine which isn't (directly) supported for technical reasons. – Carl Smith Apr 05 '17 at 01:09
  • @CarlSmith Let's Encrypt is just an example of free or low cost certificate authority. There are other ones out there. – Maxthon Chan Apr 05 '17 at 01:11
  • Perhaps. I could only find two free options, and it wasn't obvious how to use either one with App Engine. I'll just buy one, as not offering encryption is just poor service, but it's not *always* trivial. – Carl Smith Apr 05 '17 at 02:33

9 Answers

62

In this day and age, TLS + HSTS are markers that your site is managed by professionals who can be trusted to know what they're doing. That is an emerging minimum standard for trustworthiness, as evidenced by Google stating that they'll provide a positive ranking signal for sites that do so.

On the other end is maximum compatibility. There are still older clients out there, especially in parts of the world that aren't the United States, Europe, or China. Plain HTTP will always work (though not always well; that's another story).

TLS + HSTS: Optimize for search-engine ranking
Plain HTTP: Optimize for compatibility

Depends on what matters more for you.

sysadmin1138
  • 16
    Maybe it's me being picky, but that first sentence seems a bit of a stretch: a site being https doesn't tell anything about the professionalism or the trustworthiness of the people in charge. A site can be https and still be developed/managed by people who don't sanitize inputs, making the site vulnerable to SQL injection or XSS; or it can be https and be invalid, not accessible, or not usable. – Alvaro Montoro Apr 02 '17 at 05:14
  • 34
    Using HTTPS isn't a guarantee of professionalism, but the lack of it most certainly tells the opposite. – Esa Jokinen Apr 02 '17 at 08:56
  • 8
The use of TLS and HSTS are signals, part of a much larger array, that the site may be worth reading. Unlike the others, *it's trivially easy to test for*, so that's why Google is using it as a proxy for the rest. – sysadmin1138 Apr 02 '17 at 13:32
  • 2
    We probably just have different concepts of reading worthiness and professionalism, because I don't see how having https has relation with a site being worth reading or the professionalism of the people in charge of it. – Alvaro Montoro Apr 02 '17 at 15:51
  • 3
    @Braiam Stack Exchange is migrating to https only and will start using hsts fairly soon. Http is still available, not because of compatibility, but because they are being slow and careful, and it is technically difficult to migrate. – captncraig Apr 02 '17 at 16:44
  • 2
    I'll second that I much prefer http sites given the loading speed differential from China. No, you don't have to care or prefer it but, no, it shouldn't be mandatory if banking &c. are not involved. – lly Apr 02 '17 at 18:38
  • 2
    @captncraig They are being slow and careful *because of compatibility*. – user253751 Apr 03 '17 at 00:29
  • @AlvaroMontoro Or, one might say: Using HTTP (or HTTPS without HSTS) as protocol is emerging to be the equivalent of using blinking purple-on-green text and a 3D-rotating "@" image for mailto links in the content. Neither tell you directly something about the competence level on their side, but ... – Hagen von Eitzen Apr 03 '17 at 05:43
  • 4
    @esajohnson - lack of https doesn't showcase unprofessionalism. It shows there is no "need" for it. For example, a CentOS mirror. – warren Apr 03 '17 at 18:07
  • 1
    Another compatibility issue pertains to APIs. If your site has an API, and it may be accessed by capability-limited clients (IoT/embedded devices, old Java or mobile apps, etc), allowing HTTP is theoretically better than enabling the required known-broken SSL methods (e.g. SSLv3) to permit compatibility with those clients. In that case though, I'd have directives on the server that take any *non*-API URL and 301 to https://. – Doktor J Apr 04 '17 at 20:45
  • On App Engine, you add `secure: always` to your config and you have HTTPS. It hardly proves you're a pro. I've had students set up encrypted sites that only started coding earlier that month. – Carl Smith Apr 05 '17 at 01:13
  • 1
    @CarlSmith It appears you're a pro from a general consumer's POV, which is all that matters to your marketing department, regardless of whether or not it's an accurate reflection of reality. – Jason C Apr 05 '17 at 03:04
  • @warren I disagree, https does more than just encrypt end to end, it also prevents MitM attacks. Sure, you can verify said CentOS iso with a hash provided on the main https-protected site, but it provides an additional level of security, especially for those too lazy/inept to verify hashes. Additionally, it also hides the path names, so outside observers can see you connecting to mirror.com, but not the whole mirror.com/iso/CentOS.iso. – Programmdude Apr 05 '17 at 14:49
30

There is one good reason for simple read only websites not to use HTTPS.

  • Web caches can't cache images that are transported over HTTPS.
  • Some parts of the world have very low-speed international connections and therefore depend on those caches.
  • Hosting images from another domain takes skills that you can't expect the operators of small read-only websites to have.
maxkoryukov
Ian Ringrose
  • 1
Read-only content can be deployed on a CDN if you target those countries. The CDN mirrors the static content using its own means and still serves it through HTTPS. CDNs can be fairly easy to find, and for small websites not that expensive to use. – Maxthon Chan Apr 02 '17 at 18:25
  • 8
    @MaxthonChan, try explaining to my mother what a CDN is..... Yet she may setup a website with the times of local church services. – Ian Ringrose Apr 02 '17 at 19:28
  • 1
    If she is setting up a website with local church service info, just tell her to upload the files to a certain address to make it visible to the Internet, and give her the address of some CDN service. – Maxthon Chan Apr 02 '17 at 19:31
  • 1
"Web caches can't cache images that are transported over HTTPS." What is this all about? There is nothing about a cache that prevents it working with HTTPS. – Michael Hampton Apr 02 '17 at 21:46
  • 6
@MichaelHampton how can a cache read the image from an HTTPS stream without having the decryption keys? And would you trust an ISP with your keys? – Ian Ringrose Apr 02 '17 at 21:51
  • 9
    You should make it more clear as to which caches you are talking about. – Michael Hampton Apr 02 '17 at 21:52
  • @IanRingrose it is quite simple. They download your images to their servers and then they host them when needed. If there is a different domain for static files then it is perfectly fine. – Hauleth Apr 03 '17 at 15:26
  • Similar to proxy caching, which is also foiled? A CDN is good for download links, but there can be a noticeable delay if you have CNAME'd subdomains for images on pages. – mckenzm Apr 04 '17 at 07:41
  • 2
    @IanRingrose If your mother is setting up a website with local church service info, it's unlikely that caching behavior on international connections will come into play, unless it's a very popular church. – Jason C Apr 05 '17 at 03:06
14

The maintainer claims that TLS must be optional. Why?

To truly know the answer to this question, you must ask them. We can, however, make some guesses.

In corporate environments, it's common for IT to install a firewall that inspects traffic incoming and outgoing for malware, suspicious CnC-like activity, content deemed inappropriate for work (e.g. pornography), etc. This becomes much harder when the traffic is encrypted. There are essentially three possible responses:

  1. Give up on monitoring this traffic.
  2. Install a root CA on users' machines so you can perform MitM decryption and inspection.
  3. Wholesale block encrypted traffic.

For a concerned sysadmin, none of these options are particularly appealing. There are a great many threats that attack a corporate network, and it is their job to protect the company against them. However, blocking a great many sites entirely raises the ire of users, and installing a root CA can feel a bit scummy, as it introduces privacy and security considerations for users. I remember seeing (sorry, can't find the thread) a sysadmin petitioning reddit when they were first turning on HSTS, because he was in exactly this situation and didn't want to block all of reddit simply because the business compelled him to block the porn-focused subreddits.

The wheels of technology keep churning ahead, and you'll find many who argue that this sort of protection is old-fashioned and should be phased out. But there are still many who practice it, and perhaps it is them with whom your mysterious maintainer is concerned.

Xiong Chiamiov
  • how about terminating ssl at the frontend server/load balancer/similar and logging the traffic after that? – eis Apr 02 '17 at 15:22
  • 1
    @eis That assumes that the company controls every website that employees might visit, which is unlikely. The post does not appear to be about TLS on an intranet website. – Xiong Chiamiov Apr 02 '17 at 16:35
9

There are several good reasons to use TLS

(and only few marginal reasons not to do so).

  • If the site has any authentication, using HTTP exposes sessions and passwords to theft.
  • Even on static, merely informational sites, using TLS ensures no-one has tampered with the data.

  • Since Google I/O 2014, Google has taken several steps to encourage all sites to use HTTPS.

  • The Mozilla Security Blog has also announced Deprecating Non-Secure HTTP: making all new features available only to secure websites and gradually phasing out access to browser features for non-secure websites, especially features that pose risks to users' security and privacy.

There are also several good reasons to enforce TLS

If you already have a widely trusted certificate, why not always use it? Practically all current browsers support TLS and have root certificates installed. The only compatibility problem I've actually seen in years has been Android devices and a missing intermediate certificate authority, as Android only trusts root CAs directly. This can easily be prevented by configuring the server to send the chain of certificates back to the root CA.

If your maintainer would still like to allow HTTP connections without a direct 301 Moved Permanently, say for ensuring access from some really old browsers or mobile devices, there is no way for the browser to know that you even have HTTPS configured. Furthermore, you shouldn't deploy HTTP Strict Transport Security (HSTS) without the 301 Moved Permanently:

7.2.  HTTP Request Type

   If an HSTS Host receives a HTTP request message over a non-secure
   transport, it SHOULD send a HTTP response message containing a status
   code indicating a permanent redirect, such as status code 301
   (Section 10.3.2 of [RFC2616]), and a Location header field value
   containing either the HTTP request's original Effective Request URI
   (see Section 9 "Constructing an Effective Request URI") altered as
   necessary to have a URI scheme of "https", or a URI generated
   according to local policy with a URI scheme of "https").

The problem of various sites configured for both protocols is recognized by The Tor Project and the Electronic Frontier Foundation and addressed by a multibrowser HTTPS Everywhere extension:

Many sites on the web offer some limited support for encryption over HTTPS, but make it difficult to use. For instance, they may default to unencrypted HTTP, or fill encrypted pages with links that go back to the unencrypted site.

Mixed content was also a huge problem, due to possible XSS attacks on HTTPS sites through modifying JavaScript or CSS loaded via a non-secure HTTP connection. Therefore, nowadays all mainstream browsers warn users about pages with mixed content and refuse to load it automatically. This makes it hard to maintain a site without the 301 redirects on HTTP: you must ensure that every HTTP page only loads HTTP content (CSS, JS, images etc.) and every HTTPS page only loads HTTPS content. That's extremely hard to achieve with the same content on both.
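To illustrate that last point, a rough self-audit for mixed content can be sketched with a short script like this (simplified: real pages also pull subresources via CSS `url()` references, iframes, media elements and so on):

```python
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    """Collect subresource URLs that would trigger mixed-content
    warnings or blocking when the enclosing page is served over HTTPS."""

    # (tag, attribute) pairs that fetch subresources; a partial list.
    SUBRESOURCE_ATTRS = {("img", "src"), ("script", "src"), ("link", "href")}

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in self.SUBRESOURCE_ATTRS and value:
                if value.startswith("http://"):
                    self.insecure.append(value)

def find_mixed_content(html):
    """Return the list of plain-HTTP subresource URLs found in `html`."""
    scanner = MixedContentScanner()
    scanner.feed(html)
    return scanner.insecure
```

Protocol-relative (`//host/...`) and `https://` references pass the check; only hard-coded `http://` subresources are flagged.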

Esa Jokinen
  • `If your maintainer still would like to allow HTTP connections without direct 301 Moved Permanently, say for ensuring access from some really old browsers or mobile devices, HSTS is the correct choise as it only enforces HTTPS when it is clear that the browser supports it` but in this case the client (even an HTTPS-compatible one) will never know of the HTTPS version if they load HTTP initially. – Cthulhu Apr 02 '17 at 07:30
  • Re your last paragraph: HSTS header is ignored during non-HTTPS connection. – Cthulhu Apr 02 '17 at 07:31
  • 1
    `HSTS Host MUST NOT include the STS header field in HTTP responses conveyed over non-secure transport.` `If an HTTP response is received over insecure transport, the UA MUST ignore any present STS header field(s).` https://tools.ietf.org/id/draft-ietf-websec-strict-transport-sec-14.txt – Cthulhu Apr 02 '17 at 07:42
  • Thanks for pointing out my false hint, Cthulhu! Inspired of that, I've made major improvements to my answer. Please be welcome to also be critical towards the new content. :) – Esa Jokinen Apr 02 '17 at 08:46
5

It all comes down to your security requirements, user choice, and the risk of implicit downgrading. Disabling old ciphers server-side is largely necessary because browsers will happily fall through to absolutely horrible ciphers client-side in the name of user experience/convenience. Making sure that nothing of yours which depends on a secure channel to the user can be reached by an insecure method is, of course, also very sound.

Not allowing me to explicitly downgrade to insecure HTTP, when I've deemed that your blog post about why you like Python more than Ruby (not saying you do, just a generic example) isn't something I mind the spooks or the public knowing I accessed, is just getting in my way for no good reason, on the assumption that HTTPS will be trivial for me.

There are, today, embedded systems which don't have the ability to use TLS out of the box, or ones which are stuck on old implementations (I think it's awfully bad that this is so, but as a power user of [insert embedded device here], I sometimes can't change this).

Here's a fun experiment: try downloading a recent version of LibreSSL from the upstream OpenBSD site over HTTPS with a sufficiently old TLS/SSL implementation. You won't be able to. I tried the other day on a device with an older OpenSSL build from 2012 or so, because I wanted to upgrade this embedded system to more secure, new stuff from source - I don't have the luxury of a prebuilt package. The error messages when I tried weren't exactly intuitive, but I presume it was because my older OpenSSL didn't support the right stuff.

This is one example where the move to HTTPS-only can actually hurt people: if you don't have the luxury of recent pre-built packages and want to fix the problem yourself by building from source, you're locked out. Thankfully, in the LibreSSL case, you can fall back to explicitly requesting HTTP. Sure, this won't save you from an attacker already rewriting your traffic, capable of replacing source packages with compromised versions and rewriting all the checksums in HTTP bodies describing the packages available for download on the pages you browse, but it's still useful in the much more common case.

Most of us aren't one unsecured download away from being owned by an APT (Advanced Persistent Threat: security jargon for national intelligence agencies and other extremely well-resourced cyber threats). Sometimes I just want to wget some plain-text documentation or a small program whose source I can quickly audit (my own tiny utilities/scripts on GitHub, for example) onto a box that doesn't support the most recent cipher suites.

Personally, I'd ask this: is your content such that a person could legitimately decide "I'm okay with my access being public knowledge"? Is there a plausible chance of real risk to non-technical people accidentally downgrading to HTTP for your content? Weigh your security requirements, your enforced-privacy-for-your-users requirements, and the risk of implicit downgrades against the ability of users who understand the risks to make an informed, case-by-case choice to go unsecured. It's entirely legitimate to say that for your site there's no good reason not to enforce HTTPS - but I think it's fair to say that there are still good use-cases for plain HTTP out there.

mtraceur
  • 1
    *"try downloading a recent version of LibreSSL from the upstream OpenBSD site over HTTPS with a sufficiently old TLS/SSL implementation"* The flip side of this of course is: try downloading a recent browser with a sufficiently old browser, for example one that only implements HTTP/1.0 without support for the `Host:` header. Or try surfing modern sites with a web browser that only supports the Javascript of 2001. At some point we as a community need to move on, which unfortunately breaks things for some. The question then becomes: is the added value worth the breakage? – user Apr 04 '17 at 14:05
  • @MichaelKjörling Those are also problems, of varying severity. I'll add building recent compiler versions to that list. Some are more defensible than others. I'm not sure if you're asserting disagreement or why if you are: in the second sentence of my post, I agree that it _is_ justified to prevent old ciphers on an HTTPS connection, since it protects most users from downgrade attacks they'd otherwise have no meaningful visibility into or defense against. (I don't think most modern websites failing to gracefully degrade is remotely as justified, but that's kinda beside the point.) – mtraceur Apr 07 '17 at 07:33
  • @MichaelKjörling To clarify, the point of bringing that up was because it's an example of where _providing plain HTTP_ to the user had a clear benefit, which was the core point of the question being answered. It was not in any way to cast a negative light onto the OpenBSD/LibreSSL projects, for which I have pretty substantial respect. I thought the second sentence of the first paragraph would've ruled out such a negative interpretation. If you think that was unclear or could be worded better, please feel free to edit my answer or suggest improvements. – mtraceur Apr 07 '17 at 07:38
3

There is a lot of discussion here as to why TLS is good - but that was never asked in the original post.

Maxthon asked 2 questions:

1) Why has a random, un-named site decided to maintain both HTTP and HTTPS presences?

2) Is there a negative impact to Maxthon serving only 301 responses to HTTP requests?

With regard to the first question, we don't know why the providers chose to retain both HTTP and HTTPS sites. There may be lots of reasons. In addition to the points about compatibility, distributed caching, and some hints about geo-political accessibility, there is also a consideration about content integration and avoiding ugly browser messages about the content being insecure. As Alvaro pointed out, TLS is just the tip of the iceberg with regard to security.

The second question, however, is answerable. Exposing any part of your website via HTTP when it actually requires HTTPS for secure operation provides an exploitable vector for attacks. However, it does make some sense to maintain this in order to identify where traffic is being incorrectly directed to port 80 on your site and to fix the cause. I.e. there is both a negative impact and the opportunity for a positive impact; the net result depends on whether you are doing your job as an administrator.

Sysadmin1138 says that HTTPS impacts SEO rankings. While Google has stated that it does impact rankings, the only reliable studies I have seen suggest the difference is small. This is not helped by people who should know better claiming that, since top-ranked sites are more likely to have an HTTPS presence, an HTTPS presence therefore improves rankings.

symcbean
1

This is not a good reason in itself, as it means you have bad/broken/insecure clients, but if there are automated processes accessing resources via the existing http:// URLs, it's possible that some of them do not even support HTTPS (e.g. busybox wget, which doesn't have TLS support internally and only recently added it via an openssl child process) and would break if they were given a redirect to an https:// URL that they can't follow.

I would be tempted to deal with this possibility by writing the redirect rule to exclude unknown (or known-legacy) User-Agent strings from being redirected and let them access the content via http if they want, so that actual browsers can all benefit from forced https/hsts.
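The carve-out described above can be sketched as a small decision function (the prefix allow-list is hypothetical; in a real deployment this would be a web-server rewrite condition rather than application code):

```python
# Redirect real browsers to HTTPS, but let known-legacy clients
# (e.g. busybox wget builds without TLS) keep using plain HTTP.
LEGACY_AGENT_PREFIXES = ("Wget/", "BusyBox")  # hypothetical allow-list

def should_redirect_to_https(user_agent):
    """Return True if this request should get a 301 to the HTTPS URL."""
    if not user_agent:
        # No User-Agent header at all: assume a minimal client and
        # serve plain HTTP rather than a redirect it may not follow.
        return False
    return not user_agent.startswith(LEGACY_AGENT_PREFIXES)
```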

  • 1
    Remind me how many decades ago any well maintained tool (e.g. wget) didn't support HTTPS? – Oleg V. Volkov Apr 04 '17 at 10:18
  • @OlegV.Volkov: I think you missed the word busybox in my answer. – R.. GitHub STOP HELPING ICE Apr 04 '17 at 14:35
  • Checked it out - well, now I see. I don't really get why they can't just build the damn thing and then not package the build tools, but whatever. Thinking back, I also remembered some more cases where people were restricted to stripped-down or outdated tools, and it would be good to have plain HTTP. Could you please fix the caps so I can revert my vote after the edit as well? – Oleg V. Volkov Apr 07 '17 at 12:32
1

In the past, I've had to use HTTP rather than HTTPS because I've wanted to <embed> pages from elsewhere that were themselves served over HTTP, and they won't work otherwise.

Algy Taylor
1

There are very few good reasons for using HTTP instead of HTTPS on a website. If your website handles transactions of any kind, or stores any kind of sensitive or personal data, you absolutely must use HTTPS if you want said data to be secure. The only decent reason I can see for not enforcing HTTPS is if your website relies on caching by intermediaries, as HTTPS does not work with shared caches. However, it is often worth sacrificing a bit of performance in order to ensure the security of your website. It is also possible that your clients may not support HTTPS, but really, in 2017, they should.

Ken