
About 2 weeks ago, I stumbled across a web application that gyms can use to manage information about their members. This includes data like names, billing addresses, birth dates, and medical history. The gym I go to (in Europe) also uses this application, so I took a closer look at it. I didn't dig very deep to avoid legal issues, but these are some of the "problems" I found:

  • The login allows infinite tries
  • The JSON response from the backend indicates whether the username or the password was incorrect
  • The user password is stored in the local storage in plain text
  • There is an unrestricted file upload for profile pictures
  • An old PHP version is used
  • There are multiple backends that throw exceptions (this way I could find out which PHP framework they are using)
  • Session IDs can be overwritten (Session fixation)
  • It seems like there is no input validation. They are using React, so XSS is not as easy, but it is still possible
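To illustrate why the first item matters: a common mitigation for unlimited login tries is to throttle repeated failures per account. The sketch below is hypothetical (all names and limits are made up, and a real deployment would use a shared store rather than process memory); it is not this application's code:

```javascript
// Minimal in-memory login throttle: after MAX_ATTEMPTS failures within
// WINDOW_MS, further tries for that username are rejected until the
// window expires. (Hypothetical sketch; production systems would keep
// this state in a shared store such as Redis.)
const MAX_ATTEMPTS = 5;
const WINDOW_MS = 15 * 60 * 1000; // 15 minutes

const failures = new Map(); // username -> { count, firstFailure }

function allowAttempt(username, now = Date.now()) {
  const entry = failures.get(username);
  if (!entry) return true;
  if (now - entry.firstFailure > WINDOW_MS) {
    failures.delete(username); // window expired, reset the counter
    return true;
  }
  return entry.count < MAX_ATTEMPTS;
}

function recordFailure(username, now = Date.now()) {
  const entry = failures.get(username);
  if (!entry || now - entry.firstFailure > WINDOW_MS) {
    failures.set(username, { count: 1, firstFailure: now });
  } else {
    entry.count += 1;
  }
}
```

With this in place, an online brute-force attempt is capped at a handful of guesses per window instead of being unlimited.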

None of these seem super-critical to me, unless someone really takes their time and tries to exploit these potential vulnerabilities. From what I can tell, there are at least 20,000 customers stored in their database. It also seems like all the customer data for the different gyms using this application is stored in one big table.

The kind of data that is stored about the customers seems to be very personal and shouldn't end up in the wrong hands, I guess. So I contacted the company anonymously and told them about my concerns. They responded a few days ago and said that they fixed everything - however, I checked, and basically nothing has changed in the web application (still the same vulnerabilities).

So here is my question: how should I proceed? Should I give them a second chance, or contact some kind of data protection authority? And would you consider these problems/vulnerabilities critical? (Like I already said: I didn't dig too deep, but even with my limited security knowledge I think I could get most of the user data into my hands within a few days.)

schroeder
Moritz W.
  • Why do you ask "And would you consider these problems/vulnerabilities critical?" Why does it matter if we think these problems are critical or not? What does "critical" mean for you? – Sjoerd Dec 24 '19 at 13:01
  • For me to get a feeling how serious this issue is. Maybe this web app would be a valuable target to an attacker. – Moritz W. Dec 24 '19 at 13:14
  • Does the company accept credit card payments at this site? If so, they are required to be PCI compliant, and the site should display an indication that they are PCI compliant - usually in the form of a badge bearing the name of a third-party PCI verification company, such as Trustwave, TRUSTe, McAfee, etc. Does the site accept credit card payments, and if so, does it bear such a badge? – mti2935 Dec 24 '19 at 15:58
  • Is this a self-hosted system - or a SaaS system? Vulnerabilities in self-hosted systems are more difficult to deal-with - but that also limits the total amount of damage that can be done (assuming each install of the self-hosted system might be using a recent PHP version, a more recent release of the software, etc). – Dai Dec 25 '19 at 00:39
  • "20,000 customers stored in ... one big table" - how have you been able to determine this? "I could get most of the user data into my hands within a few days" - that would seem to suggest a far more serious vulnerability than what you have listed? – MrWhite Dec 25 '19 at 01:09
  • How do you know the passwords are stored in clear text? – Pedro Lobito Dec 25 '19 at 01:34
  • There's a good chance that when they declared the bugs "fixed", they really meant that the bugs are fixed in source control, and the next time they redeploy, these fixes will be applied. If you assume that the fixes got merged a few days back, it could easily be a month before you see any differences. While an immediate fix of security issues would be nice, it's nearly Christmas, and these are not obviously critical. – user3757614 Dec 25 '19 at 02:25
  • `The user password is stored in the local storage in plain text` - that's a huge red flag, and it almost guarantees that they're also vulnerable to timing attacks. Combine that timing vulnerability with infinite retries and you can probably get into any account. (Does their password verification scheme use `$input != $password`? Then they're vulnerable. Do they use `hash_equals(hash("sha256", $input, true), hash("sha256", $password, true))`? Then they're not vulnerable.) – user1067003 Dec 25 '19 at 10:44
  • No, there are no credit card payments accepted. I think that all the data is stored in one table because the user ID is an auto-increment value, and I could inspect the user IDs for different gyms, which suggested that these auto-increment values are all within the same range. Apparently the application is hosted on AWS, but they are maintaining the PHP install themselves. Since they are using the Zend Framework, the passwords are probably hashed. But they are storing the user password in the browser, and every time you reload the page a login request is sent with the stored password. – Moritz W. Dec 25 '19 at 11:31
  • The difficulty (as I see it) with that is that although it's terrible to store and re-use the password like that, it's not stone-cold exploitable, because in some sense it's not categorically worse than having a plaintext password briefly in RAM on the user's machine. RAM goes to swap, and swap is theoretically recoverable by someone who gets hold of the hardware later, just as data in local storage which you make some feeble effort to delete at the end of a session is theoretically recoverable. It's easier than recovering it from swap, but it's not like I can read your local storage any time I like. – Steve Jessop Dec 26 '19 at 02:31
  • So they're going to think "this problem is really niche, because for practical purposes local storage is considered secure. After all, we use cookies for session tokens, which also can be used to hijack accounts". Whereas they should be thinking, "just don't store plaintext passwords: it's dumb regardless of the fact that you do store other sensitive data in the same place". – Steve Jessop Dec 26 '19 at 02:36
  • A possible exploitation I see here is XSS, right? With no input validation I could maybe get some kind of script injected, that will simply read the password from the local storage. Then I could drop it off to the unrestricted file upload on their server and wouldn't even have to worry about CORS. So it wouldn't even be necessary to recover something from someones hardware to get access to the stored password... – Moritz W. Dec 26 '19 at 09:19
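The exfiltration path described in the last comment only works because the plaintext password sits in localStorage at all. The contrast with the usual token-based alternative can be sketched as follows (all key names and values are made up, and the stand-in object only simulates the browser's localStorage API so the snippet runs under Node.js):

```javascript
// A tiny stand-in for the browser's window.localStorage so this sketch
// runs outside a browser (hypothetical key names throughout):
const fakeLocalStorage = {
  store: {},
  setItem(k, v) { this.store[k] = String(v); },
  getItem(k) { return k in this.store ? this.store[k] : null; },
  removeItem(k) { delete this.store[k]; },
};

// What the comments describe: the plaintext password is persisted,
// so any script injected via XSS can simply read it back.
fakeLocalStorage.setItem('password', 'hunter2');
const stolen = fakeLocalStorage.getItem('password'); // trivially readable

// The usual alternative: persist only an opaque, server-issued session
// token that can be expired or revoked; the password itself is never stored.
fakeLocalStorage.removeItem('password');
fakeLocalStorage.setItem('sessionToken', 'opaque-random-token');
```

A stolen session token is still bad, but the server can revoke it; a stolen password cannot be revoked and is often reused across sites.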

3 Answers


Should I give them a second chance

Yes. It is typical to wait several months and communicate several times with the developing company before taking any further action.

If the company has shown that it is not willing to fix the issue, a possible next step is to publicly disclose the issue.

or contact some kind of data protection authority?

This is a good idea. I don't have experience with this, but you could at least inform such an authority about what you found and tell them that you are discussing next steps with the company.

And would you consider these problems/vulnerabilities critical?

No, but it shows that they haven't done anything to secure their systems, so it is likely that there are more serious vulnerabilities.

Sjoerd
  • I'd disagree that these problems/vulnerabilities are not critical. `The user password is stored in the local storage in plain text` - if that doesn't have klaxons going off loud enough to wake the dead, something's wrong. Local password storage in plain text is an unforgivable sin. – dgnuff Dec 25 '19 at 09:05
  • Infinite retries + plaintext passwords = timing vulnerability; anyone with technical know-how can probably get into any account on that system via a timing attack on the password. *THAT* is a critical vulnerability in my book. (I don't know for a fact that they don't check the password in a constant-time manner, but it really sounds like the programmer who made their login system didn't know what he was doing, so it's extremely likely that they are vulnerable.) – user1067003 Dec 25 '19 at 10:49

While these are not super-critical, I'd personally go for a responsible disclosure.

In a nutshell, that means you inform them about the vulnerabilities and also tell them that you will publish them after x days - regardless of whether they fixed them or not.

Google has a 90-day disclosure policy, which seems pretty standard nowadays.

The idea of this is that:

  • You give the company a reasonable timeframe to fix things
  • You also make them responsible and put the pressure on for a timely fix

You should obviously try to contact their security people directly (if they have any) and assist them if possible. However, if they don't react and don't fix the issues in time, go public. Instead of, or in addition to, publishing, you can contact an appropriate authority - especially if you don't get any reaction.

If this is in Europe, they would be in violation of the GDPR for not appropriately securing personal data, and if you contact the supervisory authority, it would probably move in with fines and some unpleasant questions.

If you wish to remain anonymous, you could also try to contact an established infosec professional and see if they would go public or advise you.

Publishing (even by tweeting) will also have the side effect that you can build a name for yourself.

Can I get in trouble for this?

Of course companies may not be happy about disclosure, and may try to retaliate legally against researchers or journalists.

If you stay within the limits of the law, you can successfully defend against this kind of lawsuit, but that doesn't mean they can't cause major trouble for you.

As far as the law goes: What is allowed or not can be very different in different parts of the world; you need to check what your local law is. Most western countries allow security research, but do not allow you to actually access confidential data or disrupt systems (not even as a proof of concept).

Some options are:

  • Remain anonymous when you publish (though you then need to know how to protect your identity)
  • Tip off a journalist. They will protect you as a source, but there is no guarantee they're interested in your case
  • Tip off the authorities, though there is no guarantee they'll follow up on the case
  • Tip off a researcher (or team) who does this professionally. They will have experience and a legal department on their side
  • Stick with companies that offer a "safe haven" for security researchers in the first place.

That said, the majority of companies these days seem to appreciate good-faith reviews and many will even give public kudos or bounties.

Note: Some companies offer bounties but in return want you to agree not to publish without their permission. It is not uncommon for researchers to refuse the bounty rather than be bound by such terms.

averell
  • Yeah, it's really hard to communicate with them as they don't seem to be interested at all. But yes, it's in Europe, so they probably don't want any trouble with the authorities about the GDPR. – Moritz W. Dec 25 '19 at 11:37
  • Honestly, the GDPR is new enough that I think the relevant authorities (the ICO here in the UK) have lower-hanging fruit to look at than some probable weaknesses in a webapp without even a demonstrated exploit. I'm not saying it's not a GDPR violation, I'm saying don't hold your breath waiting for the authorities to force them to fix it. – Steve Jessop Dec 26 '19 at 02:06

You cannot just perform a security audit without them asking for it, because it could be interpreted as a hacking attempt (which is something else than accidentally finding out about it). The comparison to Google's policies is an extremely bad one, because Google offers bounties for that - they literally ask for it - and they pay for identifying possible attack vectors.

If you feel uncomfortable with the way they handle your personally identifiable information, just cancel your membership and demand erasure of your data, which is your right under the General Data Protection Regulation (GDPR). Under that legal framework, they most likely had to appoint a Data Protection Officer (DPO), who is in charge of ensuring compliance with the law.

Unless you are able to browse their customer database and take screenshots as proof, you'd better invest your time in something meaningful ... like jogging anonymously through the forest, free of charge and free of privacy concerns.

  • Like I already said, I didn't dig too deep for exactly that reason ("hacking attempt"). But inspecting the JSON response from the server or the data that is saved in the browser should not be a problem at all. Also, I am not too worried about myself here (my own customer record is not present in this app), but more about the privacy of the other customers. – Moritz W. Dec 25 '19 at 20:21
  • Google's Project Zero and other infosec professionals regularly perform security audits of third parties without "being asked for it". This is standard practice. Of course you have to stay within legal limits, and **must not** actually access data or manipulate the system. – averell Dec 25 '19 at 21:09
  • Additionally, the notification policy does not apply here at all. Art. 33 is a requirement for the controller (that is, the company handling the data) to inform the authorities in case of a breach. However, [Article 25](https://gdpr-info.eu/art-25-gdpr/) specifically requires that data is properly protected. The supervisory authority has [the power to investigate](https://gdpr-info.eu/art-58-gdpr/) and take steps to rectify such problems. Even if you don't have a customer database (which you should never have), the authority can take action if they know data is not properly secured. – averell Dec 25 '19 at 21:19
  • @averell it is impossible to prove the possibility of a breach without accessing sensitive information. That equals "unauthorized computer access" (otherwise known as hacking), unless it happens by accident. The end does not justify the means - and comparing corporations to individuals is questionable. Canceling the contract is the easiest way to secure your own information. –  Dec 26 '19 at 20:30
  • The legal wording begins like this: `Whoever — having knowingly accessed a computer without authorization or exceeding authorized access, and by means of such conduct having obtained information ...` and, depending on which computer that was, this can result in a warrant for the confiscation of whatever computer one owns. When this is not your own computer - say you research a security system installed on it - this can be applied. And it is probably well known where the seed money for founding Google came from ... a direct comparison of an individual with them is megalomaniac. –  Dec 26 '19 at 21:11
  • Um, no. You do not need to "prove" anything for the authority to do something. They can investigate mere suspicions if they want (they don't have to, but they can). Moreover, the OP never accessed _any_ confidential information. They made it quite clear that they are not concerned about their **own** information but want to do the public a service. I'm not sure what your issue with Google is; there are plenty of individual infosec contributors who do responsible disclosure. I just mentioned the Google team because they are well-known enough to set certain standards. I'll amend my answer when I have time. – averell Dec 27 '19 at 07:22
  • @averell Well, I don't share the view that the customers of a gym represent the "general public". It would be different if this were a government website or the website of a bank, which the general public at large uses. The question is which "responsible disclosure policy" applies when the gym has not published any such policy of its own; for [example](https://www.basf.com/global/en/legal/responsible-disclosure-statement.html) (in this case, they also ask for it). As long as nothing can be proven, that's almost like saying "I don't like the color of that button". –  Dec 28 '19 at 05:47
  • You seem to be under the mistaken impression that responsible disclosure is something that a company "allows" somehow. The whole idea is to disclose regardless of consent - I have already linked the Wikipedia page above. – averell Dec 28 '19 at 08:47
  • @averell it is still something of a legal gray area where ethics and business interests may collide - and as in the Keeper example, that had not been researched on a remote host. The topic here is still some gym, not a major tech company, and it may have entirely different business interests - if it is interested at all. Their website service provider might care more than they do, since this is also a matter of liability. –  Dec 28 '19 at 09:46