Not a real full answer (because probably only browser vendors could really answer you), but some perspective.
For whatever reason, there seems to always have been a "gap" (or whatever better term fits) between the "Web people" and the "DNS people".
There have always been misunderstandings between the two, which show up in multiple areas and have produced strange outcomes.
For example, browser vendors never wanted to use DNS SRV records, even though they are useful for many needs and are used by other protocols. Among other things, this exacerbated the "CNAME at apex" problem, which is now back on the drawing board, this time with drafts specifically written or supported by (some of) the browser people to create the SVCB/HTTPS (initially named HTTPSSVC) records. This may also explain why DANE (TLSA records, which basically allow encoding in the DNS information about which certificates or certificate authorities a given service uses, so that the client can check them) is today more of a thing in the SMTP world than in the Web world (but the Web PKI as it stands today is a big ecosystem that is difficult to move: look at what was needed to get something like Let's Encrypt, or how slowly things move at the CABForum to deprecate old algorithms or introduce new ones, for example).
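For concreteness, here is a small sketch of querying the record types mentioned above. It assumes the third-party dnspython package (2.1 or later for the HTTPS type), and the names are made up for illustration, so most of them will simply return nothing:

```python
# Sketch only: the queried names are examples, many zones have none of these records.
import dns.resolver  # pip install dnspython (>= 2.1 for the HTTPS type)

QUERIES = [
    ("_submission._tcp.example.net", "SRV"),    # service discovery, used by non-Web protocols
    ("example.net",                  "HTTPS"),  # SVCB/HTTPS, the newer Web-driven answer
    ("_443._tcp.example.net",        "TLSA"),   # DANE: certificate data published in the DNS
]

for name, rdtype in QUERIES:
    try:
        for rdata in dns.resolver.resolve(name, rdtype):
            print(rdtype, name, "->", rdata)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(rdtype, name, "-> no such record")
```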
There is at least one actual technical fact that creates an impedance mismatch and remains an open problem: from the technical records in the DNS, it is difficult to define administrative boundaries, and yet you need them for things like cookie-related permissions. The best, but not perfect, solution for this right now is the Mozilla-curated "Public Suffix List"; earlier attempts, like the IETF DBOUND working group, failed to produce any positive outcome.
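As a concrete illustration of why this needs a curated list rather than DNS data, here is a minimal sketch assuming the third-party tldextract package (one common consumer of the Public Suffix List); the hostnames are arbitrary examples:

```python
# The registrable domain (and hence the cookie boundary) comes from the
# Public Suffix List, not from anything you could query in the DNS itself.
import tldextract  # pip install tldextract

for host in ("www.example.co.uk", "project.users.github.io"):
    parts = tldextract.extract(host)
    print(host, "-> public suffix:", parts.suffix,
          "| registrable domain:", parts.registered_domain)
```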
I guess the same issue could be at play in "HSTS preload vs data in the DNS", since any list such as the HSTS preload list is basically a list of hostnames, and then you have the problem of the "level". If example.com is preloaded, should other.example.com be too? And the other way around? How does it depend on the TLD? On the 2LD? Etc. If you have an exhaustive list, you do not have this issue (as you decide that everything is protected below, and starting at, a given level; Google for example preloaded their newer TLDs like APP, DEV and NEW, to the surprise of many domain name buyers who had not read the fine print telling them that their websites in those TLDs would not work over plain HTTP, and hence that acquiring certificates is mandatory to have HTTPS). If instead you want records in the DNS for that, you need to define how records set at one level impact the levels below and are themselves impacted by the levels above. This is one of the complex parts that made endeavors like DBOUND fail.
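For illustration, here is a rough sketch of the kind of lookup an exhaustive list makes straightforward; the entries, names and the include_subdomains flag are made up, loosely modeled on the shape of Chromium's preload data:

```python
# Hypothetical, simplified preload table; real browsers compile in a much larger list.
PRELOADED = {
    "example.com": {"include_subdomains": True},
    "app":         {"include_subdomains": True},   # a whole preloaded TLD
    "plain.test":  {"include_subdomains": False},
}

def force_https(host: str) -> bool:
    labels = host.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        entry = PRELOADED.get(candidate)
        if entry is None:
            continue
        # An exact match always applies; a parent-domain match only applies
        # if that parent was preloaded with include_subdomains.
        if i == 0 or entry["include_subdomains"]:
            return True
    return False

print(force_https("other.example.com"))  # True  (parent preloaded, subdomains included)
print(force_https("shop.plain.test"))    # False (parent preloaded without subdomains)
print(force_https("my-site.app"))        # True  (entire TLD preloaded)
```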
There is also another pervasive issue at play here. For anything above to be useful in the DNS, you need DNSSEC; otherwise you are absolutely vulnerable to a MITM. But for both technical and non-technical reasons, DNSSEC is not deployed at a 100% level and never will be, yet browsers want a solution that covers all cases.
Added to that, there is an ongoing, sometimes veiled, belief among some people that if you have TLS, then you do not need DNSSEC (besides those opposing DNSSEC per se, due to its complexity, the imperfect balance between benefits and drawbacks, etc.). This could be true on paper, but it certainly is not in practice as long as "most" TLS handshakes are authenticated with DV-issued certificates and BGP is not secured against hijacks: a DV certificate is issued after a validation that itself relies on unauthenticated DNS or plain HTTP, so an attacker who can spoof those lookups can obtain a "valid" certificate. In fact, it is arguably even worse now with fully automated CAs.
You can see a huge example of this dichotomy with the recent DNS over TLS and, even more so, DNS over HTTPS technologies, with some browsers having even decided to force DoH upon their users. This has sparked, and keeps sparking, huge controversies and debates.
Note, for example, that Mozilla's "Trusted Recursive Resolver" program at https://wiki.mozilla.org/Security/DOH-resolver-policy does not require prospective applicants to be DNSSEC-enabled. That says a lot. And of course, using a specific name in the DNS as a canary (https://use-application-dns.net/) explicitly requires NOT using DNSSEC, which sends this signal to end users: "DNSSEC is supposedly an important technology to protect your domains... but when it comes to the important part of configuring the DNS subsystem, we have to use domain names explicitly without DNSSEC for things to work out." I can understand end users becoming completely confused there.
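To make the canary mechanism concrete, here is a minimal sketch of the idea (this is not Firefox's actual code): the query must go through the system resolver, and the signal only works because the zone is deliberately left unsigned, so a resolver can lie about it without tripping DNSSEC validation.

```python
# A network that wants to opt out of DoH answers NXDOMAIN / no addresses
# for the canary name when asked through its own (classic) resolver.
import socket

CANARY = "use-application-dns.net"

def network_allows_doh() -> bool:
    try:
        socket.getaddrinfo(CANARY, None)  # goes through the system resolver
        return True          # canary resolved: no opt-out signal from the network
    except socket.gaierror:
        return False         # NXDOMAIN or empty answer: the network says "no DoH"

print("enable DoH by default:", network_allows_doh())
```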
You can also factor in that nowadays both the Web and the DNS are complex technologies touching many areas and being discussed/specified in many documents and forums. It is probably almost impossible for someone to be an expert in both at the same time and for long, which unfortunately means fewer opportunities for bridges and for smart ideas that use the advantages of one side to cover the disadvantages of the other, and vice versa. On the DNS side this was recently illustrated by the now famous "DNS camel".
Another issue coming into play is "agility":
- there are only a few major browser vendors, and they more or less control all their clients (browser installations) because the clients auto-update themselves (there are counter-examples of course; many sites still have to battle with supporting old Internet Explorer versions, for example)
- while there are also only a few major DNS software implementations, they have far less control over their installations (both authoritative and recursive nameservers), which means some features may take a very long time to actually get deployed (DNSSEC was in that case too), alongside all the obsolete features that need to be kept around (though the recent "DNS flag days" are trying to cut this down). Nameserver software is typically not "auto-updated", or at least not without careful monitoring and handling by a sysadmin, while browsers are updated automatically, with many users not even aware of the changes (which is also why changes like "let us enable DoH automatically and by force" creep through and are discovered with anger)
So the side that wants fast turnaround and features quickly delivered to its users (the Web) may not want to rely on DNS features, as those would probably take far longer to deploy. All of this is of course highly subjective.
So I think some of the above can explain why the "Web world" decided to go forward with a solution completely internal to their world (the HSTS preload list), in order not to depend on the DNS side at all.
Is this a good idea or not? Everyone can discuss and argue about that.
At the very least there is a scale problem (having an exhaustive list), a maintenance problem (updating the list), and a delivery problem (shipping the list inside the browser code). But at the same time, as explained above, there is no competing "equivalent" solution at the DNS level, so it is not as if you had two choices at more or less the same stage of maturity, each with its own benefits and drawbacks.
Just as HPKP (HTTP Public Key Pinning) was the reference in the past but became obsolete, maybe once CT (Certificate Transparency) logs really become mandatory everywhere and for every case, a browser could switch to a simpler rule such as "if that hostname is covered by a certificate, as shown in CT logs, then it means HTTPS only", and then HSTS preloading would not be needed anymore. Of course, this simple sentence has various technical consequences that are harder to solve than the time it takes to write it.
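Purely as an illustration of that hypothetical rule (not of anything browsers actually do), here is a sketch that uses the public crt.sh JSON endpoint as a stand-in for querying CT logs directly, glossing over all the hard parts (expired or revoked certificates, wildcard matching, log coverage, query privacy):

```python
# Hypothetical "seen in CT, so treat as HTTPS-only" check via crt.sh.
import json
import urllib.parse
import urllib.request

def seen_in_ct(hostname: str) -> bool:
    url = "https://crt.sh/?q=" + urllib.parse.quote(hostname) + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()
    try:
        entries = json.loads(body)
    except ValueError:       # crt.sh may answer with non-JSON on errors
        return False
    return isinstance(entries, list) and len(entries) > 0

host = "www.example.org"
print(host, "would be treated as HTTPS-only:", seen_in_ct(host))
```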
Or you just decide that HTTP is dead and should never be used anymore, hence it is 100% HTTPS, always, for everyone, and there is no need to maintain a list.
Browsers also seem to be moving in that direction...