75

Recently, I discovered a security flaw in a business website. The site has a password-protected "Partners Area" and, like many websites, provides a form to reset a user's password.

When a user requests a password reset for their nickname, a new password is sent to their email address and takes effect immediately. The problem (if that weren't already a problem) is that the new password is a fixed one, the same for all users. So an attacker can easily gain access to any account.

Now, the only operations a user can do within their Partners Area are:

  • View/change email address
  • Change password
  • Download some manuals and utilities (it's definitely not classified stuff)
  • Fill out a repair form (then the process will continue by email)
  • Download logos and images for marketing purposes

The only things I see for a malicious attacker to exploit are:

  • Prevent future access by a legitimate user (who will probably be able to regain it with a phone call)
  • Discover who the company's customers are (by guessing nicknames and looking at the associated email addresses). In any case, that's not something anyone would keep secret.

Even though things like this always disturb me, in this case I must admit it might not be a big deal. Are flaws like this acceptable compromises, in a context where not much harm can be caused?


Since I think someone misunderstood a detail: that website belongs to an external company. I have no role in its development and no control over any decision about it.

danieleds
  • 749
  • 1
  • 5
  • 8
  • In general, they *can* be. – user253751 Jul 25 '16 at 01:02
  • 15
    Security holes might not look important in isolation; the problem comes when your application is Swiss cheese, riddled with these holes. – Braiam Jul 25 '16 at 01:51
  • 7
    Is privacy irrelevant for your question? Someone who knows/guesses someone’s nickname can then see their email address, which ought to be private information. – unor Jul 25 '16 at 01:52
  • @unor it is important, just not _that_ important. Guessing nicknames to get random email addresses not associated with any sensitive information? There are easier ways. – danieleds Jul 25 '16 at 07:23
  • 5
    If this also works for an admin account it could be a much more severe vulnerability. – domen Jul 25 '16 at 10:05
  • If the repair form that transitions to email can have arbitrary files attached, an attacker can use that for additional attack purposes. – Nzall Jul 25 '16 at 12:52
  • 12
    Usually if that area was under a password then someone wanted that area hidden. The fact that anyone can get access to it is a vulnerability, no matter whether sensitive content is shown (also, something that's looking benign to you may be considered sensitive by someone else). And what if later on they decide to implement more important stuff in there without knowing their "security" is broken? – André Borie Jul 25 '16 at 13:05
  • 8
    If it's "not a big deal" then why not just make the partners area public? Because that's effectively the situation you have right now. – Ajedi32 Jul 25 '16 at 13:10
  • @Ajedi32 While I agree with you, the website in question belongs to another company so I don't know their policies. – danieleds Jul 25 '16 at 15:28
  • You should notify them, then they can evaluate whether it's a concern, and hopefully explain a negative decision. – OrangeDog Jul 25 '16 at 16:03
  • Beware. People responsible for allocating development resources often have an interest in underestimating the impact of a security problem. Often it is easier to fix a problem than to accurately assess its security impact. – kasperd Jul 25 '16 at 16:17
  • @OrangeDog I've notified them as soon as possible (but still got no reply). Anyway, my question was a general one, and I used this event just as an example. – danieleds Jul 25 '16 at 16:46
  • 3
    @danieleds - if you have proof that you've notified them (e.g., you've logged a ticket or sent an email) and proof that you've asked how to proceed, then this _should_ protect you if the security risk ever became a problem. Make sure it's "official" and traceable, so if it ever came back to you, you could point and say "I did ask, but they never responded". It'd be good to log repeated queries - e.g., send a follow-up email or two, put comments in an issue-tracking system. – VLAZ Jul 25 '16 at 17:53
  • Do note that if you can change the email in this manner, then you could create repair forms and then approve them, generating significant cost. Also if any info in this Partners Area is used to validate a caller's identity, then an attacker can now compromise anything that can be done via phone as well. – Nanban Jim Jul 26 '16 at 14:56
  • 1
    Acceptable to whom? – Kevin Krumwiede Jul 27 '16 at 06:37
  • @KevinKrumwiede to both the company and the users. Actually, in a perfect world the interests of both parties about security should coincide. – danieleds Jul 27 '16 at 10:57
  • 1
    If it's "not a big deal" then what makes it a **security** flaw? Except perhaps in the "backwards" sense that the system has **too much security**, as Ajedi32 suggests. – Luis Casillas Jul 27 '16 at 17:55
  • @danieleds The interests of companies and users "coincide" in the same way as the interests of buyers and sellers. Both parties want everything for nothing, and the result is a grudging compromise. – Kevin Krumwiede Jul 28 '16 at 05:36
  • @LuisCasillas Agreed; if not much harm can derive from it, then it's not *really* a security flaw to the business, is it? You need to balance cost against return. – Thomas Jul 29 '16 at 07:37

10 Answers

86

Yes. This is a problem - a big problem. Recently I found a design flaw in a business's webshop that allowed me to insert innocent notes into other visitors' carts.

It seemed innocent, and only annoying, until I looked further and found that I was also able to inject JavaScript code (XSS) into those notes. In other words, I could exploit XSS against every visitor's cart. I made a quick PoC showing them how I could easily hack the computer of any visitor (in this case myself, as it was a PoC) using that design flaw, XSS, BeEF, and Metasploit.
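The underlying fix for this class of stored XSS is output encoding; as a minimal sketch (hypothetical, since the shop's actual stack is unknown), escaping user-supplied note text before rendering it:

```python
import html

def render_note(note_text: str) -> str:
    """Escape user-supplied note text before embedding it in an HTML page.

    Without this step, a note containing <script>...</script> stored in a
    visitor's cart would execute in every browser that renders the cart.
    """
    return html.escape(note_text)

payload = '<script>alert("stored XSS")</script>'
print(render_note(payload))  # the markup comes out as inert escaped entities
```

A real application would also validate input server-side and set a Content-Security-Policy, but consistent output encoding is what removes the injection itself.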

So even the smallest flaw may result in a big risk after all.

Besides that, who says the error you found is the only one the developer of that website made? Maybe there are tons of other mistakes too.

Reporting it would be the best thing you could do - even if it looks unnecessary.

O'Niel
  • 2,740
  • 3
  • 17
  • 28
  • 3
    Additionally, you can never be fully accurate about the threat agents once they're mapped to the business assets you're putting at risk. If an organization already has a threat model, I don't see the point of neutralizing a threat or taking proactive measures against it unless, where a risk exists, its acceptance is chalked out in a verifiable way. – Shritam Bhowmick Jul 24 '16 at 23:17
  • 5
    This answer is flawed. XSS is certainly a harmful exploit, but OP is asking about non-harmful (or "mildly" harmful) exploits. – Kenneth K. Jul 25 '16 at 02:58
  • 21
    @KennethK. I'm not talking solely about XSS. I'm also talking about 'seemingly innocent' design-flaws (like the one I described), and how those small flaws can result into a big error with additional small flaws,... – O'Niel Jul 25 '16 at 06:19
  • 3
    XSS doesn't allow "hacking the computer" of people browsing that website. XSS only affects data on that one website. – D.W. Jul 25 '16 at 11:33
  • 6
    @D.W. It does allow it. XSS > BeEF > Metasploit > Reverse_TCP_meterpreter. Not with pure XSS only, but by using frameworks and several exploits it is possible. – O'Niel Jul 25 '16 at 12:19
  • 5
    @O'Niel, if you're assuming you have an exploit that works against people's browsers, then those visitors have far worse problems: you can hack their browsers even if the website had no flaw and no XSS. It's bogus to put any of the blame on the design flaw or the XSS. The reasoning in this answer is flawed; you try to argue that this design flaw is a big problem, by giving an example of a design flaw that, by all evidence, wasn't actually a big problem. – D.W. Jul 25 '16 at 16:59
  • @D.W. What are you trying to say? I don't have an exploit against people's browsers specifically (where did I even say that?). BeEF needs XSS (or at least a malicious page) in order to work and to be combined with Metasploit, so where do you get things like "no flaw and no XSS"? I never said that. But it is possible to hack people's computers by using a combination of BeEF and Metasploit. Did you bother searching it up? Maybe some beginner tutorials? – O'Niel Jul 25 '16 at 19:50
  • 2
    I'm trying to say your answer doesn't make sense and is using faulty logic. If you have a working exploit against visitors' browsers, then it makes no sense to blame the "small design flaw in the business's website" as the cause of your ability to hack the computers of people who visit that website; the real cause is that you know a browser exploit. If you don't have a working exploit against visitors' browsers, then you can't "hack their computers" and the logic of your answer makes no sense. – D.W. Jul 25 '16 at 20:05
  • 2
    Basically, your first 3 paragraphs are not a valid instance of where a small flaw results in a big risk. It's either an example of a big flaw resulting in a big risk (where here the big flaw is the browser vuln you know how to exploit), or an example of a small flaw resulting in a small risk (if you don't have a working browser exploit). – D.W. Jul 25 '16 at 20:05
  • 2
    @D.W. It is basically a small flaw (being able to post comments on others' carts - which isn't necessarily a security issue, just an annoying one) that becomes a big flaw because of the XSS issue. The point is, if I hadn't found the XSS, it'd just be a small annoying issue; but because of the XSS, it becomes a big one. And why is XSS a big issue? Because it allows attackers to hack other people's PCs using several frameworks. – O'Niel Jul 25 '16 at 20:14
  • 5
    https://xkcd.com/386/ – Alexander Jul 26 '16 at 10:49
  • 2
    @O'Niel So your answer to the question in the heading really is "No" and not "Yes", correct? – Dubu Jul 26 '16 at 16:03
  • 2
    @D.W. Small flaw: [can post XSS on another site visitor's cart]. Big risk: [skilled attacker discovers the small flaw]. – Dan Henderson Jul 26 '16 at 20:37
  • @Dubu Indeed. "No", it's not acceptable. – O'Niel Jul 26 '16 at 22:03
  • @DanHenderson, the "big risk" [visitors computers getting hacked] wasn't caused by XSS; it was caused by the browser vulnerability that O'Niel was exploiting. Given knowledge of a browser vulnerability, the visitors could have been compromised just as readily even if there was no XSS flaw at all. The "small flaw" didn't cause the "big risk"; rather, a "big flaw" (an unpatched exploitable browser vulnerability) caused the "big risk" (visitors computers get hacked). P.S. The risk that someone discovers a small flaw is, almost by definition, a small risk. – D.W. Jul 27 '16 at 01:40
  • 3
    @D.W. 'The "small flaw" didn't cause the "big risk" ' no, but it **enabled it**. Without the XSS, O'Niel would have no vector to initiate the browser vulnerability targeting the site's users, specifically. He could certainly compromise visitors to a site he controls, but it's only thanks to the XSS flaw on this site that he has a path to this site's users. – Dan Henderson Jul 27 '16 at 06:26
  • 1
    @D.W. those users could theoretically mitigate their risk by only browsing to sites they trust. Now, due to this site's XSS vulnerability, a site they trust is now running malicious code that exploits their browser vulnerability. Seems clear enough to me. Also, blaming the users for an attack against their vulnerable browser made possible by a "small flaw" on the site isn't going to fly in any court (of law or public image). If the site was notified of the flaw and chose to do nothing, that's on THEM and no one else -- even if their users all run IE8. – Doktor J Jul 27 '16 at 22:24
61

Your question is: Are security flaws acceptable if not much harm can derive from them?

The answer is yes, if decided by business while understanding the consequences.

What you are doing is called a risk assessment. For each risk you must highlight the consequences for your company should it materialize. Based on that assessment, you (you = someone with the power to make the business decision) have three choices:

  • you can accept it - judging that the cost of fixing it outweighs the expected consequences
  • you can mitigate it: fix it to the point where you can accept the consequences
  • you can insure against it - effectively offloading the risk to someone else.
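The accept/mitigate/insure decision is often backed by a rough expected-loss figure; as a toy sketch with invented numbers, the classic ALE = SLE × ARO formula from quantitative risk analysis:

```python
def annual_loss_expectancy(single_loss_expectancy: float,
                           annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: the expected yearly cost of a risk materializing."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical numbers: a breach costing 50,000 expected once every four years.
ale = annual_loss_expectancy(50_000, 0.25)   # 12500.0 per year
fix_cost = 3_000
# If a one-off fix costs less than one year's expected loss, mitigation is
# easy to justify; otherwise acceptance or insurance may win.
decision = "mitigate" if fix_cost < ale else "accept or insure"
```

In practice, as discussed below, the inputs are hand-waved estimates; the value of the exercise is forcing the comparison, not the precision of the numbers.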

As you can imagine, there are several hot areas in a risk assessment.

The first is the assessment of the consequences and the probability. There are numerous books and articles about how to do that; at the end of the day, it comes down to vigorous hand-waving and experience. The output is never like the one in the books

we have a 76% probability of this happening, which will cost us 126,653 €

but rather

well, I feel that this is a risk we should take care of

Note that the "consequences" part is sometimes quantifiable (loss of profit for online commerce, for instance) but usually is not (loss of image for your company, for instance).

Besides the dubious theoretical aspects of risk assessments, there is one huge advantage you should always exploit: you put a risk on the table, and it must be dealt with somehow.

This is not only a place-where-the-back-loses-its-noble-name--coverer, it is the right tool for highlighting where information-security efforts should go. It also raises your visibility (there are not many proactive opportunities to do that) and forces you to take a hard, deep, pragmatic look at what is important and what is not.

WoJ
  • 8,957
  • 2
  • 32
  • 51
  • 9
    Good answer - it answers the question without dwelling on the particular example. And yes, that's how security risks should be handled: make it clear what the impact is, and somebody (manager, product owner, etc.) decides how to handle it. _Accepting_ the risk is a valid approach. Offloading can be, too. It's all about what you can afford. However, it always starts with _knowing_ what the risk is. – VLAZ Jul 25 '16 at 16:22
  • I'm accepting this answer, even if the accepted answer should be a mix of most of the provided answers. I believe the "false sense of security" (@Falco) is an important aspect to keep in mind. – danieleds Jul 26 '16 at 06:29
  • The slashes in your third-from-last paragraph are a bit confusing. I'm not sure if they are supposed to emphasize part of your text, or if they're meant to indicate an and/or-type relationship between certain words. If your intent is emphasis, you can use \*italics\* and \*\*bold\*\* instead; if they represent multiple-choice pieces of your sentence, then they need to be spaced differently. As it is now, I can't even make sense of that sentence. – Dan Henderson Jul 26 '16 at 19:34
  • @DanHenderson: I updated it to remove extra parenthesis. It is hopefully more readable now. – WoJ Jul 26 '16 at 20:05
  • Yes, that's much more clear. – Dan Henderson Jul 26 '16 at 20:39
  • What does "place-where-the-back-loses-its-noble-name--coverer" mean? – Wayne Conrad Jul 27 '16 at 19:01
  • 1
    @WayneConrad: it is a euphemism for ass-coverer (when you go down the back, the name suddenly changes to a less noble one). Source: an expression I heard many years ago. – WoJ Jul 27 '16 at 19:06
  • While you might accept some level of risk it's important that we are aware of blended risks, where one issue might appear trivial, but two trivial issues used together might be markedly more serious. – James Snell Jul 27 '16 at 21:43
  • A vital problem here is the phrase "business decision". Can we get acceptance from the budget owner who risks the real financial loss (accounting dept, marketing dept, etc.) - I mean the budget which would actually pay the loss? Most often I see technical people using phrase "business decision" as a code word for *anyone* non-technical who is willing to say "yes, yes, just make it done" (like a product owner, who will rarely pay a loss from their budget). This is because if we ask proper persons, we would always hear a firm NO, and need to escalate up to CEO almost any risk acceptance. – kubanczyk Jul 28 '16 at 08:35
  • The problem, of course, is that those evaluating the risk don't always have a clear view of what the risks are (because many security holes are unknown until exploited) - a minor vulnerability can be escalated into something much larger. My company went through this when we hired a pentester, the pentester got in through the minor hole that we all knew was there, but didn't worry about because it was harmless, but from there he was able to use an OS flaw to get more privileges, then to capture a user's credentials and eventually escalate to an admin's credentials and move on to other servers. – Johnny Jul 28 '16 at 15:32
31

The problem that I see with such a simple password reset scheme is that it suggests further vulnerabilities in the platform. A flawed concept of security is rarely so isolated as to only happen once, since such flaws are usually related to a developer's practices regarding security. At minimum, I'd suspect that their internal login procedures might also be susceptible to the same flaw, potentially allowing attackers to access databases, code, and processes they shouldn't normally have access to.

From there, it might be possible to modify the server's code to report cleartext passwords, or glean additional private information, and possibly allow attacks on further systems. After all, even though this is 2016, there are still many people out there who use the same password for their bank accounts as for their Facebook, despite the obvious risks of doing so. Even if not, being able to associate a nickname with an email address might put other accounts the user has at risk as well; the more information an attacker knows about a user account, the more they can leverage in trying to subvert other accounts owned by the same person.

At minimum, I'd suggest you contact the site owner and see if they'll fix the problem, and if not, consider not using their application unless absolutely vital. I'd also recommend changing your email on the user account to a throw-away account that's not connected to an email address that you care about. We're no longer in an age where we can assume apparently minor flaws won't come back to haunt us later.

phyrfox
  • 5,724
  • 20
  • 24
16

If I understand this scenario correctly, an attacker can change the email address and password of any account, then start a repair form and continue the repair process via email.

The support team will probably assume the email address is legitimate, so sensitive information may be exchanged with the recipient - and if it is a known customer, you might even start working on an order received via the website/email.

Another problem: can an attacker access a contact history or a history of repair orders? Maybe a customer has written confidential information into their repair orders, or even the number and type of orders could reveal problems in their business.

Another problem is massive spamming of customer email addresses. If I invoke your password reset a million times, it will send a million emails to your users, not only filling their inboxes but also landing your server on several spam-filter blacklists - and it can be quite a hassle to get removed from those lists afterwards.

DoS is of course trivial if I just have to enumerate nicknames to reset every account's password.
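Both the spam and the DoS angles can be blunted by rate-limiting reset requests per account; a minimal in-memory sketch (hypothetical - a real site would use a shared store and also throttle per source IP):

```python
import time
from collections import defaultdict
from typing import Optional

class ResetRateLimiter:
    """Allow at most `limit` reset requests per nickname per `window` seconds."""

    def __init__(self, limit: int = 3, window: float = 3600.0):
        self.limit = limit
        self.window = window
        self._requests = defaultdict(list)  # nickname -> request timestamps

    def allow(self, nickname: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        # Keep only the timestamps still inside the sliding window.
        recent = [t for t in self._requests[nickname] if now - t < self.window]
        self._requests[nickname] = recent
        if len(recent) >= self.limit:
            return False  # over the limit: drop or delay the reset email
        recent.append(now)
        return True
```

This caps both the mail volume an attacker can generate and the speed at which they can mass-reset accounts, without inconveniencing legitimate users.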

But the biggest problem is a false sense of security

We could be overlooking some angle or problem that exists right now. But even if there isn't any problem now - what if someone decides to implement new functionality on this page next year? Maybe letting customers order/pay online. You provide a context that is only accessible with a username and password, and people and developers will rely on that. Everyone will think, "this is a secure part of the application which can only be accessed by customers, so I can do X and rely on Y".

If an application is practically publicly accessible, it should look like it is. If it looks secure, it should be secure!

Falco
  • 1,493
  • 10
  • 14
  • 4
    False sense of security is exactly right: better to have no password system at all than to have a password system that is critically flawed. – Mark E. Haase Jul 25 '16 at 15:27
  • @mehaase: up to a point. A low hurdle is better than no hurdle *if* you can avoid the problem of giving a false sense of security. It seems to be a common idea that it's not worth implementing a security measure at all if it can't be perfect. Raising the effort level required for an attack can deter some attackers, and maybe stop some automated robot attacks. – Peter Cordes Jul 27 '16 at 09:21
  • @PeterCordes I disagree. There is a difference between setting a hurdle (for example, a link with an embedded id/password), which will make it harder for attackers to access the page but will not give the user the feeling of a secure area - the user is just visiting a bookmark/clicking a link, which feels like a public page that's simply not listed on Google. When you provide a login form and display the green SSL padlock, the user will feel like they are in a secure space, and this will do more harm than good! – Falco Jul 27 '16 at 10:40
  • 1
    That's not really a disagreement; I agree that example doesn't have enough real security to outweigh the false sense of security. You have to weigh the inconvenience to users and potential false sense of security against the real benefits. Of course some bad / weak security measures (especially ones that are user-visible and require extra work from users) aren't worth it. Anyway, I just wanted to argue against the fallacy that it's not worth doing anything if perfect security is impossible. – Peter Cordes Jul 27 '16 at 10:49
  • @PeterCordes I can agree to that :-) – Falco Jul 27 '16 at 10:52
  • Given that perfect security is *always* impossible, it's a hard point to disagree with! – see sharper Jul 29 '16 at 07:19
6

There are two perspectives here:

  • As a user, yeah, I'd be concerned, I'd let the owner know, and I'd refrain from sharing any sensitive information on that site.
  • As the site owner/developer, it's your responsibility to evaluate whether any potential security risk is serious enough to warrant effort. Not every risk is going to be severe enough to justify action, judged by likelihood of occurrence, impact of a breach, and effort required to control the risk.

In this case, you've got (at a guess):

  • severity: low
  • likelihood: moderate
  • effort: low

and so they probably should do something about it; there's a very good chance that they're just unaware of the problem.

In the general case, in response to your question "Are security flaws acceptable if not much harm can derive from them?" - yes, they can be. You need to determine whether the severity/likelihood/effort tradeoff makes it worthwhile to fix a problem. 'Accept the risk' is a perfectly reasonable response in many cases.


As an extreme example, "aliens who can break strong crypto visit Earth" is a risk that my business faces. I choose not to control that risk as the likelihood of it occurring is so low that it's not worthwhile.

Ian Howson
  • 169
  • 2
1

This is a good question and not easy to answer.

Every security risk is just that: a risk. Addressing it means weighing the cost, confusion, and dangers of the risk against the proposed fix.

Looking at your specific question, you have a "private" part of the site that has some information on it, but no real harm can come from someone accessing it. Exploiting your security hole also requires that the attacker know that every password is reset to the same value, and what that value is.

So right now, today, your largest risk is negligible, or at least low.

Tomorrow, your largest risk is that the private section may come to hold confidential information.

The cost to "fix" this seems pretty small, especially since you're already emailing out the new "fixed" password. Essentially, just change the password assignment to a random value and the issue is "fixed" for now. That may not be the best solution, but it is better.
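The random-assignment fix is only a few lines in most stacks; a sketch using Python's secrets module (hypothetical code - a single-use, expiring reset link would be better still, but even this removes the shared-password flaw):

```python
import secrets
import string

def temporary_password(length: int = 16) -> str:
    """Generate a cryptographically random one-time password.

    Unlike a fixed reset password, a fresh value per request means that
    knowing one user's reset reveals nothing about any other account.
    """
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```

The `secrets` module draws from the OS's CSPRNG, unlike `random`, which is predictable and unsuitable for security-sensitive values.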

So, you have a low cost to fix, but a low danger security risk. You need to weigh that against business needs and determine if it's worth it.

Keep in mind that the business may count on that fixed password. For example, the support staff may have been trained to reset the password, tell the user the new password over the phone, stay with them until they can get in, and then help them change it. You need to account for this when figuring out costs.

What I do:

When I find a bug or security issue I document it, and estimate a development cost to fix. Then I add it to a list, and let the right people know. It may never be taken off that list, but once a year (or every 6 months) I review that list with the site owners, and address the issues that I can.

With this risk, it would likely not be fixed very quickly. I could see a lot of business needs coming first, and that's ok. But at least it's documented, and when someone tells me they want to put "secret" information in that part of the site, I can tell them about the risk.

It's also important to note that this type of risk is likely to lead to other types of risks. When this was coded a bad security decision was made. The site should be checked for other bad decisions.

coteyr
  • 1,506
  • 8
  • 12
0

Remember that as the developer, you might have a better idea of the actual security risks than a user might. If a user discovers their account is hijacked, they might assume that the unauthorized user might have accessed more information than what they are actually able to. Even if no sensitive information is actually exposed, your company may still take a reputation hit.

Also consider that you might not know which information is harmless if leaked. For example, what if the user has a revolutionary new manufacturing process that a competing company is trying to figure out? If a special piece of equipment used in the new process needs repair and the account's email is changed, the repair order might give the other company an idea of what the new process is.

It sounds like a simple fix, and ultimately you are going to have to make that assessment to see whether the risk is worth the cost of fixing it.

John Smith
  • 74
  • 3
0

Impersonating a legitimate user at a customer (who is treated as trusted to some extent) is a good way to start a social-engineering attack.

A repair-form -> email process is ideal for this, given that you control the email address. Perhaps you could buy a similar domain and change the email address from "someone@domain.com" to "someone@domain.ca", which fits nicely with "I've just transferred to our Canadian subsidiary so I don't have my records - could you remind me...?"

If you can gain a little information on the customer/supplier history you can further impersonate the supplier to the customer. Often this is as simple as asking "What was the name of the service engineer who visited a few months ago? They were apparently very helpful on a specific issue but I was out and my colleague dealt with them"

Chris H
  • 4,185
  • 1
  • 16
  • 22
0

Nope. If an attacker gets access to that site, they can obtain someone's email address from their nickname, which is confidential data; in some countries this is enough to force you to NOTIFY ALL USERS that someone was able to penetrate the DB and that there was an information leak. That damages credibility and opens you to lawsuits.

So in general, security flaws can be acceptable, but this might not be an acceptable one. It also opens your users to all kinds of spam: they could suddenly start receiving malware and spam via email.

Hackers are always creative about putting the flaws they find to good (for them) use. You assume the users' identities are not crucial information, but they could be if, for example, it were a cheating website. Also, what if a user was actively campaigning against a sect/cult and suddenly cult members could find out his identity? What if a stalking victim is exposed and, thanks to that, the stalker is able to find them again?

Leaking identities without warning users is a privacy violation, so be careful. Flaws should be fixed and risk-analyzed as soon as they are found.

For example, I would prefer that all my forum posts be deleted rather than allow someone access to my email address.

CoffeDeveloper
  • 516
  • 3
  • 12
  • 1
    as I stated this is not my website, the only thing I can do (and I already did) is to report the problem. – danieleds Jul 27 '16 at 17:38
  • Good for you this is not your website :D – CoffeDeveloper Jul 27 '16 at 17:45
  • Are you sure the password is really fixed? (For example, it could be a hash of your username, using a salt unique to you.) Did you test it with a second account? – CoffeDeveloper Jul 27 '16 at 17:46
  • The reset password is the same for all users. Anyway, it was just an example: as I explained, no sensitive information was exposed. That was the whole point of my question :) – danieleds Jul 27 '16 at 17:52
  • It should be considered case by case, unluckily. Even allowing the avatar of a random user to be changed could be harmful (for example, if you can replace it with offensive pictures and get them banned). – CoffeDeveloper Jul 27 '16 at 17:59
-2

It depends, I'll point to the extreme scenarios.

Scenario A - don't worry! From what you're saying, there's not much genuine concern about the data available. If it might just as well have been public and the whole security setup is merely a marketing/loyalty scheme, then there's not much to worry about. That may also explain the sloppy procedure, if it was put together by someone without the expertise, just for the sake of having it. As long as that person does their usual job well, it should be fine.

Scenario B - worry! If this is a hint about how things go in other parts of the organisation, then you may well find even worse security holes. If the server is vital to the business, first check that proper backups are in place, and then start a thorough audit.

Then of course there's the rest of the alphabet.

lucian
  • 187
  • 2