
How to find out what programming language a website is built in?

How much of a Django application could be reverse-engineered if the owner forgot to turn debug mode off?

And other questions like these.

In short: it would seem that, at least in terms of web app development, we want to disclose as little information to the attacker as possible.

  • Attackers want to determine the platform our web app is running on, but we want to trick them into believing it's a different platform than it actually is;
  • We are advised to switch debug mode off because detailed exception info might leak portions of our app's source code (not the platform's); a minimal sketch of what switching it off looks like in Django follows this list.
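
For concreteness, a minimal sketch of that second point for a Django project (the setting names are Django's own; the hostname is just a placeholder):

# settings.py (production) -- keep detailed tracebacks and settings away from visitors
DEBUG = False                     # detailed error pages are for development only
ALLOWED_HOSTS = ["example.com"]   # required by Django once DEBUG is off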

If we open-source our web app's server code, we willingly hand everyone the very pieces of information that the questions I linked to discuss how to hide, and even more information than that.

It would seem, therefore, that open-sourcing the app is one of the last things one would want to do.

This is surprising to me because:

  • Some think open-sourced apps are safer because more friendly eyes may look at the code, finding exploitable bugs and submitting patches;
  • Not open-sourcing the app because of the aforementioned issues is security through obscurity, which is bad.

However, according to @Mason Wheeler's comment on this site:

I think that, if even a security tester can't figure out what language the site is built in, that makes it more secure because then no one will know which exploits to try. (Yes, there are occasionally valid use cases for security through obscurity.)

Therefore, is it agreed upon that open-sourcing the server-side code of a web app is a horrible idea?

gaazkam
  • 12
    **"if even a security tester can't figure out what language the site is built in, that makes it more secure because then no one will know which exploits to try."** Lots of security vulnerabilities are common across many languages. Of the OWASP Top 10, only one (Using Components with Known Vulnerabilities) is particularly language specific. – Lie Ryan Jun 04 '19 at 00:21
  • 1
    @LieRyan 1 in 10 is a lot. Defense in depth is all about plugging as many holes as you can, in as many ways as you can. – Jon Bentley Jun 04 '19 at 11:30
  • 7
    There's also a difference between open-sourcing the source code of the app and open-sourcing the _production configuration_ of the app. No matter where you land on the spectrum of open- vs. closed-source, you never want an outsider to find out your database credentials or 3rd-party API tokens. Leaving debug mode on in production is a good way to have those secrets exfiltrated. – smitelli Jun 04 '19 at 21:39

5 Answers

70

It's a complex matter because there are several aspects to consider, with pros and cons, and there might not be a definite answer.

The security advantage of open source software is supposed to come from a "law" that Wikipedia calls "Linus's law", which says that "given enough eyeballs, all bugs are shallow". To start with, you'd have to ask yourself how many eyeballs you are going to have. For example, is your project going to be shared, forked, used extensively by lots of users and reviewed by a large community? Or is your software only going to be used on your website and no one else will care about it? Or maybe no one else will be able to reuse it because it doesn't come with a free-software license? In the end there are going to be white-hat eyeballs and black-hat eyeballs, so you need to be willing to accept that on one hand you will get some security improvement from ethical hackers, but on the other hand you will also be attacked by black hats. Will attackers be especially interested in targeting your project, or is it just going to be subject to non-targeted attacks? These are all things you should consider, and it might not be easy to draw a conclusion. It's also worth remembering that in several open source projects there were security vulnerabilities that had been there for a long time despite all the eyeballs of the community (see Linus's law on Wikipedia).

Security by obscurity is another concept that is often misunderstood, because its name makes it sound like it's all about keeping something secret. It's not so. Security through obscurity is when a significant part of your security comes from the secrecy of the methods (the implementation). Here's an example of security by obscurity:

// Login without password if the request has parameter dev=debug
// I'm a stupid dev, so I believe this is secure because nobody knows about it!
// But this source code can't be published or I'll be hacked at once
if ($login_password === $password || $_POST['dev'] === 'debug') {
    login_ok();
}
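
For contrast, here is a minimal sketch (in Python rather than PHP, and not tied to any framework) of the same kind of check done by design: the only secrets are the per-user salt and password hash, there is no hidden branch, and publishing this code gains an attacker nothing.

# A login check that stays secure even if this source is public
import hashlib
import hmac
import os

def hash_password(password, salt):
    # PBKDF2 with a per-user random salt; the iteration count is illustrative
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def make_credentials(password):
    salt = os.urandom(16)
    return salt, hash_password(password, salt)

def check_password(password, salt, stored_hash):
    # constant-time comparison, and no debug back door
    return hmac.compare_digest(hash_password(password, salt), stored_hash)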

Anyway, even if your code is correct and you rely on security by design, nothing stops you from using a layer of obfuscation on top of it. That is, keeping the source code private can help you because it will slow down a potential attacker. The important thing to remember is that obscurity is ok and can be considered a great asset only if it's just a layer on top of good design.

In conclusion, I'd say you'd better not publish the source code unless you have a reason to do so (for example because you want your software to be free/libre and you want to create a community around your project). If your only goal is to improve the security of the application, you won't gain anything from just publishing it on GitHub, for example. If you are really worried that your software might contain mistakes and you'd like someone else to help you by providing more "eyeballs", you might consider paying for a professional security audit.

reed
  • 8
    Reasons I'd like to publish it: 1) It's a hobby project, so a bit of personal pride is involved - I'd just like to present the results of my work; 2) I'd like to ask more experienced devs to look at the source code and do some rudimentary review - not necessarily / only security-wise, I mean general code quality; 3) AFAIK it just looks good to be able to link a github account with something bigger-than-trivial in a CV. Oops, now I read that publishing source code of a webapp might be a horrible idea, were reasons 1-3 fallacious? I'm worried. I wrote this comment to explain my thinking – gaazkam Jun 02 '19 at 21:52
  • 8
    Linus' Law is a myth in general. Also see Peter Gutmann's [Engineering Security](http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf). –  Jun 03 '19 at 00:22
  • 16
    There can be _some_ derived security benefits of publishing a project on GitHub -- in that some security analysis services such as LGTM.com (which, for full disclosure, pays my salary) are free to use for public GitHub projects. Whether the benefit of this outweighs the risk that would-be attackers will notice a problem before you do, is a different and thornier issue ... – hmakholm left over Monica Jun 03 '19 at 01:31
  • 51
    @jww It is not a myth. The salient qualifier is _with enough eyeballs_. Many people take that to mean that open source automatically makes something more secure, which of course is not true. – forest Jun 03 '19 at 06:36
  • 1
    @gaazkam, if that's the scenario and those are your goals, then go ahead and publish the code like everyone else does in your situation. The pros are more than the cons in your case. Security will probably not be an issue at all, unless the project becomes popular enough or you are a specific target for some attackers. – reed Jun 03 '19 at 08:57
  • 4
    @forest, I think the law should actually be "given enough eyeballs, *enough* bugs are shallow", or "given enough *high-quality* eyeballs ever since the *beginning* of a project, all bugs are hopefully shallow". IMO the problem is that open-source software often starts as simple projects managed by a couple of devs, and the good eyeballs may only come (much) later. Then it also depends on how big the codebase has become at that point, if it relies on third party code, etc. – reed Jun 03 '19 at 09:09
  • 22
    Linus' law is not a myth, but it's also not that useful because it's a tautology. Given enough eyeballs to make all bugs shallow, all bugs are shallow. – barbecue Jun 03 '19 at 15:23
  • 2
    Unless you've done something particularly interesting, you're unlikely to attract developers to review your code simply by putting it on github. There's a TON of projects there, and publishing it as OSS is a bit like screaming in the wind in the middle of the Atlantic Ocean. If there's some community of people that'll do code review just for fun, I don't know, but open sourcing it and code review are two different things. Realistically the security risks of publishing a hobby project are incredibly small. Nobody is going to all the trouble of hax0ring your hobby project. – Steve Sether Jun 03 '19 at 18:26
  • 7
    @barbecue Sometimes tautologies are still useful, if they serve to draw attention to something which, while tautological, is not intuitively obvious or frequently thought about or discussed. Linus's Law is valuable for the way it points out that there's a real, pragmatic advantage to going against the default of secrecy. – Mason Wheeler Jun 03 '19 at 19:57
  • @SteveSether - There's the [CodeReview.SE] stack for that, although they don't take entire codebases. – Bobson Jun 03 '19 at 22:07
  • 3
    "For example, is your project going to be shared, forked, used extensively by lots of users and reviewed by a large community?" - Like OpenSSL? My Heart Bleeds for anyone that thinks there's much credibility in "all open source bugs are shallow" after that. Unless the code is explicitly being audited for security by those people, it doesn't matter how many coders visit your github page or the stackoverflow answer and select your code block with ctrl-c and paste it into their editor of choice with ctrl-v. – Rob Moir Jun 04 '19 at 09:11
  • 4
    @RobMoir OpenSSL is the exception that proves the rule. It could be a textbook example of how not to run an open-source project. Heartbleed happened largely because nobody was paying attention to the quality of the code until Heartbleed happened. But afterwards, when the community realized just how bad it had gotten, they started putting some real effort into auditing OpenSSL. Several bugs were found and fixed very quickly, and the code quality improved dramatically. In other words, yes, this is proof that Linus's Law works *when it is applicable.* – Mason Wheeler Jun 04 '19 at 14:28
  • 1
    "Laws" that only work when they're applicable (that's a tautology, btw!) aren't laws! The implied phrase is _universal_ law! Exceptions don't prove rules! They disprove them! It's a silly phrase that has dubious origins. Linus's "law" is more of an approximate guideline, a maxim. That particular exception proved it's not a law that can be relied on, so you need to make your own mind up. – Greenaum Sep 13 '19 at 10:26
  • 1
  • @Greenaum Most laws have requirements. If you're not on Earth - close enough to a gravity source - the law of gravity will do shit to your apple. Yes, yes, the general law / gravity still applies, it just has no visible effect, but the everyday law that the apple falls down doesn't happen. Doesn't make the law that apples fall down when thrown up invalid, it requires "up" and "down" to exist^^ The eyeballs need to look - i.e. the law is very abbreviated. Plus it doesn't say the bugs go away, just that they can be seen when enough eyes actually look. – Frank Hopkins Jun 08 '20 at 17:34
  • @Greenaum So if anything, Heartbleed would be an indication that the law holds, because, once people started looking, they found the bug. ^^ (Now that being said, yes, it's describing a general effect in an abbreviated and perhaps overstated way, a more to the point and less edgy way to say it might be "the more people vet the code the more likely issues are found - all else being equal", but that does sound way less catchy^^) – Frank Hopkins Jun 08 '20 at 17:40
11

It's important to read what I wrote in context.

Yes, there are occasionally valid use cases for security through obscurity.

This was written in response to a question about penetration testing a black-box system, where the idea of the source being available wasn't even on the table. In such a case, security by obscurity can have a certain amount of utility. As a general rule, though, it's not particularly useful, and when it is, it's only useful as a single part of a larger defense-in-depth strategy. You can't attack a target if you don't know where it is, but once the bad guy knows what he's aiming at, you'd better have other defenses in place as well or you're going to be up the proverbial creek.

Also--and more applicable to this question, which does consider the possibility of opening up the source code--whatever benefits you might gain from hiding the sources are heavily outweighed by Kerckhoffs's principle: "[you should always assume that] the adversary knows the system." In other words, if you can't consider your system secure if the adversary has the full source code, you can't consider it secure, period. (Which is just common sense; how much trust would you place in a physical lock where the hardware store told you it's important to hide it so people don't see what kind of lock it is?)
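
A minimal sketch of what that looks like in practice (the environment-variable name is illustrative): everything below could be published, because the only secret is the key, which lives outside the source.

# Kerckhoffs's principle in practice: the code is public, only the key is secret
import hashlib
import hmac
import os

SIGNING_KEY = os.environ["APP_SIGNING_KEY"].encode()  # supplied by the deployment, never committed

def sign(message):
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify(message, signature):
    # constant-time comparison; the secrecy of this code adds nothing
    return hmac.compare_digest(sign(message), signature)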

If you begin with the assumption that the adversary knows the system--which, in the real world, means he may have gained this knowledge through other hacking, by some form of espionage, or various other compromises--then it logically follows that publishing the code doesn't teach him anything new. What it does do is give the good guys the opportunity to catch up: honest people who would never try to hack you or compromise your security are now invited to have a look and point out places where it could be better.

From a strictly cybersecurity perspective, publishing the source is a clear winner over obscurity. There may be other reasons not to want to do this (protection of business methods or other sensitive information), but being better at keeping hackers out is not one of them.

Mason Wheeler
  • 8
    I think this is a case where sensible advice got repeated until it turned into a meaningless mantra. Security THROUGH obscurity means security which is obtained solely by use of obscurity, and that's fairly weak. But it got turned into a buzzphrase, **security-through-obscurity**, and over the years it gradually became a knee-jerk reaction to say "Security-through-obscurity is bad, mmmkay?" instead of actually discussing the topic. Sort of like how RAID5 is now the devil, and anyone who mentions using it will be called an idiot. – barbecue Jun 03 '19 at 21:06
  • 4
    @barbecue I'd agree, but I'd compare it more to Dijkstra's "goto considered harmful". In 1968 when Dijkstra said it, gotos were a massive problem. It took at least 20-25 years to almost entirely eliminate gotos in (almost) all languages and replace them with better structures. There are almost, but not quite, zero instances where a goto should be used. Security through obscurity is similar. It was long over-used and abused, and there's probably a few places where it might be still appropriate, but like a goto, consider using it carefully as there might be better alternatives. – Steve Sether Jun 06 '19 at 18:06
  • Sure, a system should stand up even if the enemy has your source code. But you don't have to give it to him! And "even if" is as unrealistic as "secure". It's like an arms race, or a battle. You do everything you can to secure your system, while a cracker uses every tool they have to try to get into it. Trying to reduce the information your enemy has is just one more thing. Things like encryption standards are studied and tested by mathematicians because that's their job (and interest). It's nobody's job to investigate this guy's system, or nobody who's helpful at least. – Greenaum Jun 22 '19 at 02:41
  • @Greenaum No, you are completely missing the point of what I wrote. Please re-read the next-to-last paragraph of my answer. You only speak of giving it to the bad guys, but opening up the source means you are *also giving it to the good guys.* And there's a lot more of them. It means you give the people who want to help you the opportunity to do so, and give yourself a massive advantage in the process. – Mason Wheeler Jun 22 '19 at 13:11
  • Mason, please re-read "Things like encryption standards are studied and tested by mathematicians because that's their job (and interest). It's nobody's job to investigate this guy's system, or nobody who's helpful at least." from my answer. There aren't necessarily any good guys at all, as others have mentioned. It's not many people who'd spend time checking somebody else's mundane and boring code. If a good guy has no use for or interest in any of the gigabytes of code out there, he won't put hard work and time into a security analysis of it. Work is tiring enough when you're paid! – Greenaum Jun 23 '19 at 09:24
  • @Mason-Wheeler Just to clarify, I agree with Kerckhoffs's principle. Sure, even if the enemy has the source code, the security wall should still stay up. My point was, the common trope "the good guys will audit it for you free of charge" doesn't stand for every, or most, occasions. As a programmer, and apparently security geek, how much time do you spend white-hat pentesting strangers' systems? And as an imaginary black hat, how useful would the source code be to you? It shouldn't BEAT the security, but it certainly tells you where to start. Saves all that wiretapping and reverse engineering protocol. – Greenaum Nov 01 '21 at 23:16
6

There is an important difference between publishing your code, and failing to protect your code from being accessed by an exploit.

If you publish your code, there will be blackhats, greyhats, and whitehats with access to your code. The whitehats will disclose any vulnerabilities they discover, and greyhats may disclose them if the bug bounty is big enough. Blackhats will try to hack you no matter what, and the published code will help them, so there is a trade-off between the benefit you stand to get from vulnerabilities being disclosed, and the danger of vulnerabilities being discovered by attackers. But that trade-off often favours publishing.

If your code is instead exposed through an exploit, then it will mostly be blackhats who have access to it. If vulnerabilities are discovered this way, there is much less chance of them being responsibly disclosed, so it is very unlikely this trade-off will work in your favour.

James_pic
6

Security through obscurity isn't a bad thing; it just isn't a reliable one. It should never be your main or only method of protecting something. It can be misleading and illusory. But it still has some value, on top of other methods. Not knowing your platform might slow down an attacker, or cause him to miss possible avenues of attack. Conversely, once an attacker does know your platform, there are pre-written pen-testing tools which apply the common known vulns one after the other. Why risk exposing yourself to them?
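
As one small example of that kind of obscurity layered on top of proper controls, here is a sketch of a WSGI middleware (framework-agnostic Python; the header names are just common examples) that drops response headers advertising the platform. A front-end server such as nginx typically adds its own Server header, which has to be silenced in that server's own configuration.

# Strip response headers that fingerprint the application stack
REVEALING = {"server", "x-powered-by", "x-runtime", "x-aspnet-version"}

class StripFingerprintHeaders:
    def __init__(self, app):
        self.app = app  # the wrapped WSGI application

    def __call__(self, environ, start_response):
        def filtered_start(status, headers, exc_info=None):
            headers = [(name, value) for name, value in headers
                       if name.lower() not in REVEALING]
            return start_response(status, headers, exc_info)
        return self.app(environ, filtered_start)

# Usage in a WSGI entry point: application = StripFingerprintHeaders(application)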

The "friendly eyes" approach works for Linux because lots of people actually use Linux, and have an interest in it's continued and improved functioning. Same with Apache and whatever else. That is unlikely to be the case for your web app. You're presumably developing your web app for your own company's specific use rather than a more general framework that anyone might incorporate into their site. So you're unlikely to benefit from contributors.

Obscure 'em up! And also all the other, proper stuff too.

Greenaum
  • 14
    Obscurity won't protect against a determined attacker, but it does help against lazy, casual attackers, and there are lots of those. – barbecue Jun 03 '19 at 21:16
  • *Security* through obscurity can often be a bad thing. Obscurity on its own may not be – Dev Jun 04 '19 at 16:50
  • 1
    @Dev What's the benefit of obscurity if it's not security? – Alex Jun 05 '19 at 02:12
  • @Alex I'm not sure. – Dev Jun 05 '19 at 06:15
  • 1
    Beware, „Security by obscurity“ does not mean „it is more secure because we keep information about the system under locks“, it generally stands for „it is only secure because nobody knows the inner workings“, and as such it is a very dangerous security strategy. – eckes Jun 05 '19 at 10:12
  • @Alex Security theater? – Mason Wheeler Jun 23 '19 at 10:25
  • @eckes, I know! That's why I clearly said he should use other methods of security as well. All the proper stuff. He should try his best to write code with no holes in it. There are tools and methods to help with this. But when he's written his code, according to best practice, and battened down the hatches, as one should, there's still the question, should he publish his code? Obscurity can help, because an attacker who has the code has an advantage compared to not having it. The code should ideally be impregnable. But keeping it to yourself withholds a useful head-start from your enemy. – Greenaum Aug 29 '19 at 05:22
  • I think there's a problem with "if and only if" here. Necessary implication. This is years later of course. That my argument was obscurity as just one method among others. A small "booster", one answer, but not all or only. Where my opponents I think were arguing the other way. Cross purposes. "Proper security, plus obscurity" vs "Security through only obscurity and nothing else". Ah, logic. Instead of English, I'm gonna start communicating in C from now on. What could possibly go wrong speaking a perfect language? – Greenaum Nov 01 '21 at 23:28
2

I have developed several open source projects, including Digitalus CMS (2007-2012). Now that we have build tools like NPM, I prefer an approach where I share the more generic modules, which would be next to impossible to claim intellectual property for, and keep the final implementation private.

This approach is easier to adopt for a wider range of projects, and gives you the benefits of having thousands of eyes on your core libraries while enabling you to protect your core application.

ForrestLyman
  • 1
    Writing re-usable generic modules, upon which you build more specific implementations, seems to be one of the core ideas of open source software anyway. Glad that you decided to share! –  Jun 04 '19 at 07:51
  • Best of both worlds! And it makes perfect sense. No white hats are gonna care about your specific implementation, but if there's community scrutiny of the modules it's made from, right up to the last couple of layers of your app which you keep private, you're doing it just right. Taking advantage of where public interest is, but closing up the bits that only a hacker would care about. I shouldn't even be posting, I can't improve on what you said. Except "Bravo!" – Greenaum Nov 01 '21 at 23:24