I have three websites, each of which could be configured as a "VirtualApplication" in ServiceDefinition.csdef:

www.mydomain.net/enroll

www.mydomain.net/admin

www.mydomain.net/

... or I can configure each of them as its own site:

enroll.mydomain.net

admin.mydomain.net

www.mydomain.net
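
For context, here is roughly what the two layouts would look like in the .csdef. This is only a sketch; the role name, physical paths, and endpoint name are placeholders, not my real configuration:

    <WebRole name="WebRole1">
      <Sites>
        <!-- Option A: one site, each app as a VirtualApplication under "/" -->
        <Site name="Web" physicalDirectory="..\Main">
          <VirtualApplication name="enroll" physicalDirectory="..\Enroll" />
          <VirtualApplication name="admin" physicalDirectory="..\Admin" />
          <Bindings>
            <Binding name="Http" endpointName="HttpIn" hostHeader="www.mydomain.net" />
          </Bindings>
        </Site>

        <!-- Option B: a separate site per host header -->
        <Site name="Enroll" physicalDirectory="..\Enroll">
          <Bindings>
            <Binding name="Http" endpointName="HttpIn" hostHeader="enroll.mydomain.net" />
          </Bindings>
        </Site>
        <Site name="Admin" physicalDirectory="..\Admin">
          <Bindings>
            <Binding name="Http" endpointName="HttpIn" hostHeader="admin.mydomain.net" />
          </Bindings>
        </Site>
      </Sites>
      <Endpoints>
        <InputEndpoint name="HttpIn" protocol="http" port="80" />
      </Endpoints>
    </WebRole>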

Since I intend to install SSL on my site, I plan on buying a certificate with a SAN of "www.mydomain.net" or "*.mydomain.net". My question is: which approach is more secure?

Is there any guidance on what the cookie domain or the federation URI should be? My concern with the VirtualApplication approach is that a cookie set on the root path ("/") is visible to all three applications, which may open me up to various attacks.
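
For reference, ASP.NET scopes its cookies via web.config; a minimal sketch of the locked-down settings I have in mind (attribute values are illustrative, not my actual config):

    <system.web>
      <!-- An empty domain keeps cookies host-only, so a cookie issued by
           admin.mydomain.net is never sent to enroll.mydomain.net. Under the
           VirtualApplication layout this does not help: all three apps share
           the www.mydomain.net host and therefore share its cookies. -->
      <httpCookies domain="" httpOnlyCookies="true" requireSSL="true" />
    </system.web>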

I'm similarly wary of a wildcard certificate: it's more expensive, being a wildcard rules out an EV certificate, and I believe wildcard certs carry a weaker warranty (in case of SSL tampering).

– makerofthings7

2 Answers

There is at least one security benefit of sub-domains. If there is an XSS vulnerability on enroll.mydomain.net it can't be used to hijack a session on admin.mydomain.net. This is due to the Same-Origin Policy. It would also make it easier to move that application to a different server if need be. Isolation of failure is a good Defense in Depth approach.

So yes, sub-domains are safer than directories.

– rook

  • The SOP is pretty neat when it works! – atdre Apr 13 '11 at 21:20
  • @atdre and when it doesn't, Mozilla and Google pay you :). – rook Apr 13 '11 at 21:22
  • @Rook: No, they don't pay me because I don't report it to them. I believe in the No More Free Bugs campaign -- http://nomorefreebugs.com -- and I prefer payment/agreement-to-pay BEFORE I begin testing. If I run across them by accident, then I try to forget I ever found it. – atdre Apr 13 '11 at 21:26
  • @atdre yeah, that movement is pretty popular. I guess it depends why you do it. I reported 20+ vulns in Mozilla web apps and was never paid (and then I hit the 3k payment cap on just 2 of the vulns :). I also write exploits and release them for free. But I do it for street cred and the challenge. – rook Apr 13 '11 at 21:32
  • @Rook: Some people do it because they have a "hacker addiction". I recommend a therapist, not bug bounty programs ;> Is that what you mean by "street cred and the challenge"? – atdre Apr 13 '11 at 21:43
  • @atdre I have come to appreciate my addictions. Thank you for prying. – rook Apr 13 '11 at 21:45
  • @Rook: Hacking people is my addiction. Sorry to take it out on you. – atdre Apr 13 '11 at 21:49
  • @atdre No worries. – rook Apr 13 '11 at 22:12
  • An attacker cannot know which subdirectories exist, but they can learn which subdomains exist from DNS. How should we weigh this point? – luochen1990 May 21 '22 at 07:15

Neither is necessarily more secure, but I will give you some opinions and perhaps facts.

Host-extract can determine hostnames, and plain virtual host enumeration can do the same.

DirBuster, skipfish, and fuzzdb rely on forced browsing, directory indexes, and predictable resource locations to find vulnerable directory structures and other issues. Spiders and crawlers can traverse directories, and certain ones can also traverse hostnames (I know that skipfish is capable of this).

In some ways, virtual hosting is more difficult to manage (you mentioned SSL, which is a great point). If it is easy for you to manage, though, it may end up being more secure, because directories tend to leak much more information than hostnames and are more easily exposed to attacks such as XSS (as @Rook describes in his answer).

I prefer to at least separate out sites (by hostname) that contain behavior (e.g. Flash, Ajax, JavaScript libraries, RIA frameworks, dynamic pages) from ones that contain formatting or static content (HTML, CSS, XHTML, non-dynamic pages). I think it's also good practice to separate SSL domains from non-SSL domains. The reason is that you can enforce more stringent controls around contextual escaping on SSL/TLS hostnames than on non-SSL/TLS hostnames. If you separate JavaScript from CSS from HTML, then you also know which contextual encoding must take place on which hostnames. I have more suggestions about how to handle XSS in this post, where I mention not building HTML on the server, among other tidbits.
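
In the asker's Azure setup, that split might look roughly like the following .csdef fragment (the hostnames, paths, endpoint names, and certificate name are hypothetical):

    <Sites>
      <!-- Dynamic, script-heavy application: HTTPS only -->
      <Site name="App" physicalDirectory="..\App">
        <Bindings>
          <Binding name="Https" endpointName="HttpsIn" hostHeader="app.mydomain.net" />
        </Bindings>
      </Site>
      <!-- Static formatting and content: plain HTTP is tolerable here -->
      <Site name="Static" physicalDirectory="..\Static">
        <Bindings>
          <Binding name="Http" endpointName="HttpIn" hostHeader="static.mydomain.net" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <!-- "TlsCert" must also be declared under the role's Certificates element -->
      <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="TlsCert" />
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>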

In practice, this is rarely up to a security professional or security team. It's up to the developers, marketers, and SEO experts; they decide what gets a hostname versus a URI structure. I will research this topic further and update my answer as I get new information. It's an interesting question.

– atdre

  • +1 for the behavioural/risk-based segmentation. From a governance perspective this makes business risk so much easier to deal with at board level and for regulatory compliance! – Rory Alsop Apr 13 '11 at 21:40
  • @Rory And accepted for the same reason! Good idea, though I wonder how that will complicate authentication and single sign-on among the various domains. – makerofthings7 Nov 02 '11 at 02:36