Neither approach is necessarily more secure, but I'll give you some opinions and perhaps a few facts.
Tools such as host-extract can determine hostnames, and virtual-host enumeration (probing a single IP with different Host headers) can accomplish the same thing.
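As a rough illustration, here is a minimal Python sketch of virtual-host enumeration: send the same request to one IP with different `Host` headers and flag responses that differ from a baseline. The IP address, candidate hostnames, and thresholds are made up for the example; dedicated tools do this far more thoroughly.

```python
# Minimal virtual-host enumeration sketch (illustrative only).
# The target IP and candidate hostnames below are hypothetical.
import requests

TARGET_IP = "203.0.113.10"          # hypothetical server IP (TEST-NET range)
CANDIDATES = ["www.example.com", "mail.example.com",
              "dev.example.com", "admin.example.com"]

def probe_vhosts(ip, hostnames):
    """Request the same IP with different Host headers and note which
    hostnames return a distinct response (a likely virtual host)."""
    baseline = requests.get(f"http://{ip}/",
                            headers={"Host": "nonexistent.invalid"},
                            timeout=5)
    hits = []
    for name in hostnames:
        resp = requests.get(f"http://{ip}/", headers={"Host": name},
                            timeout=5)
        # A different status code or a noticeably different body length
        # suggests the server treats this name as a separate virtual host.
        if (resp.status_code != baseline.status_code
                or abs(len(resp.content) - len(baseline.content)) > 100):
            hits.append((name, resp.status_code, len(resp.content)))
    return hits

if __name__ == "__main__":
    for name, status, size in probe_vhosts(TARGET_IP, CANDIDATES):
        print(f"{name}: status={status}, body={size} bytes")
```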
DirBuster and skipfish (often paired with wordlists such as fuzzdb) rely on forced browsing, directory indexes, and predictable resource locations to find exposed directory structures and other issues. Spiders and crawlers can traverse directories, and some can also traverse hostnames (I know that skipfish is capable of this).
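For comparison, a minimal forced-browsing sketch might look like the following: request a handful of predictable paths and report anything that doesn't 404. The base URL and wordlist are hypothetical; real scanners use much larger wordlists (fuzzdb, for instance) and smarter response analysis.

```python
# Minimal forced-browsing sketch (illustrative only).
# The base URL and path list below are hypothetical.
import requests

BASE_URL = "http://www.example.com"     # hypothetical target
COMMON_PATHS = ["admin/", "backup/", "old/", ".git/", "config.php.bak"]

def forced_browse(base_url, paths):
    """Request predictable resource locations and report any that
    respond with something other than 404."""
    found = []
    for path in paths:
        url = f"{base_url}/{path}"
        resp = requests.get(url, timeout=5, allow_redirects=False)
        if resp.status_code != 404:
            found.append((url, resp.status_code))
    return found

if __name__ == "__main__":
    for url, status in forced_browse(BASE_URL, COMMON_PATHS):
        print(f"{status}  {url}")
```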
In some ways, virtual hosting is more difficult to manage (you mentioned SSL, which is a great point). If it is easy for you to manage, though, it may end up being more secure, because directories tend to leak far more information than hostnames do and are far more exposed to attacks such as XSS (as @Rook describes in his answer).
I prefer to at least separate out sites (by hostname) that contain behavior (e.g. Flash, Ajax, Javascript libraries, RIA frameworks, dynamic pages) from those that contain formatting or static content (HTML, CSS, XHTML, non-dynamic pages). I think it's also good practice to separate SSL domains from non-SSL domains. The reason is that you can enforce more stringent controls around contextual escaping on SSL/TLS hostnames than on non-SSL/TLS hostnames. If you separate Javascript from CSS from HTML, then you also know which contextual encoding must take place on which hostnames (see the sketch below). I have more suggestions about how to handle XSS in this post, where I mention not building HTML on the server, among other tidbits.
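To make the "which encoding on which hostname" idea concrete, here is a small sketch that picks an output encoding based on the serving host. The hostnames and the mapping are hypothetical, and in practice you would lean on your templating framework's encoders rather than hand-rolling this.

```python
# Sketch of per-hostname output-encoding rules (illustrative only).
# The hostnames and the "behavior" vs. "static" split are hypothetical;
# the point is that once content types are separated by hostname, you
# know which contextual encoding applies where.
import html
import json

# Hypothetical mapping: which encoding a host's templates must apply.
ENCODING_RULES = {
    "static.example.com": "html",    # static HTML/CSS host: HTML-escape
    "app.example.com":    "js",      # dynamic/Ajax host: JS-string encode
}

def encode_for_host(hostname, untrusted_value):
    """Apply the output encoding required for the given hostname."""
    context = ENCODING_RULES.get(hostname, "html")
    if context == "js":
        # json.dumps yields a quoted string suitable for a JavaScript
        # context; real apps should use their template engine's encoders.
        return json.dumps(untrusted_value)
    return html.escape(untrusted_value, quote=True)

if __name__ == "__main__":
    payload = '<script>alert("xss")</script>'
    print(encode_for_host("static.example.com", payload))
    print(encode_for_host("app.example.com", payload))
```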
Usually this isn't up to a security professional or security team. It's up to the developers, marketers, and SEO experts; they decide what gets its own hostname versus a URI structure. I will research this topic further and update my answer as I get new information. It's an interesting question.