3

I am working with a company and have found the following:

When a user registers, they make a "random" hash. This "random" hash often has collisions and has to be regenerated to keep the hash unique in the database.

There's also a feature that lets you log in automatically. This works by having the hash be part of the URL. This means the hash is also stored in access logs. This can happen with full access to regular users and even some admin users.

Links are actively used throughout the site to switch between sections, and these links are also sent out in emails all the time.

The hash never changes, and once you are in, you can view a person's name, e-mail, DOB, payment history (but not credit card number) and make any change you want to the account. Again, hashes are never invalidated and can be reused an infinite number of times.

Many of the engineers feel this is fine and don't want to change the system because it's too much work, and think that it's secure enough.

I am arguing this is a 10-alarm fire that has to be addressed as soon as possible.

What are your thoughts? Am I overreacting to this?

Joe
  • 2,734
  • 2
  • 12
  • 22
  • I suppose the better question is, is there any situation in which this is considered secure? And a follow-up question: should this be fixed? I know opinions aren't an SE thing, but I would love to hear some. – Jack Hamerston Feb 08 '19 at 11:15
  • Also, if you think it should be fixed, how much of a budget would you put towards fixing this for a $250 Million company? – Jack Hamerston Feb 08 '19 at 11:26
  • Possible duplicate of [Understanding Session Fixation Vulnerability](https://security.stackexchange.com/questions/55876/understanding-session-fixation-vulnerability) – Tobi Nary Feb 08 '19 at 11:58
  • Clarification: I'm guessing the hash is not really a hash, but rather a random string of fixed length? – Conor Mancone Feb 08 '19 at 18:58
  • Gitlab did almost this exact same thing. They eventually changed the behavior, but didn't consider it a vulnerability for a long time. Of course you guys sound like you spend a bit more time shouting these access tokens from the roof tops... https://www.incapsula.com/blog/blocking-session-hijacking-on-gitlab.html https://sensorstechforum.com/gitlab-hijacking-bug-users-cyberattacks/ – Conor Mancone Feb 08 '19 at 19:01

3 Answers

4

First things first: That sounds horrible. Seriously.

When a user registers, they make a "random" hash. This "random" hash often has collisions and has to be regenerated to keep the hash unique in the database.

How can this even happen? If the hash collides on a regular basis, you simply don't have the right hash for the job. What kind of hash is it? And why not use any available hash that is considered secure, e.g. SHA-256?
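For what it's worth, frequent collisions strongly suggest the token is too short or drawn from a weak RNG, and the fix is small. A minimal sketch in Python (the function name and the in-memory uniqueness check are illustrative, not your actual schema): draw 256 bits from the OS CSPRNG via the `secrets` module, at which size the collision retry below should essentially never run.

```python
import secrets

def generate_account_token(existing_tokens: set) -> str:
    """Generate a URL-safe token with 256 bits of entropy.

    With a CSPRNG and a token this large, collisions are
    astronomically unlikely, so the loop is pure belt-and-braces.
    """
    while True:
        token = secrets.token_urlsafe(32)  # 32 random bytes ≈ 43 URL-safe chars
        if token not in existing_tokens:
            return token
```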

There is a login link that, if you access it with the hash placed in the URL, auto-logs you into the account. This means the hash is also stored in access logs. This can happen with full access to regular users and even some admin users.

This means that you are actively storing plaintext credentials in logs and in emails, and sending them (likely in plaintext, because email) over the network. This is a terrible idea for a number of reasons. And, at least partially, it sounds as if you were recreating some sort of SSO, or reinventing sessions using information stored in the user's browser.
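One common mitigation for credential-equivalents at rest (an illustrative sketch, not your site's actual code): persist only a one-way digest of the token server-side, so a leaked database dump or backup cannot be replayed as a login credential. Note this does not help with tokens that reach access logs via the URL; those simply must be kept out of URLs.

```python
import hashlib
import secrets

def token_digest(token: str) -> str:
    # Persist only this digest and compare digests on lookup.
    # The raw token exists only in transit to the user, never at rest.
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

token = secrets.token_urlsafe(32)   # handed to the user once
stored = token_digest(token)        # the only thing the database keeps
```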

If it really is as it sounds, the answer to your question is: yes, it should be fixed, better now than later. Also: you might want to think about security design. This clearly sounds as if there has been no proper design of authentication and session handling. Those topics should be addressed before the first line of code is written.

Ben
  • 2,024
  • 8
  • 17
3

The reason we don't like to give opinions is because "is this secure?" and "should this be fixed?" vary wildly from circumstance to circumstance and some security steps are not worthwhile for every company. My stereotypical example is that an anonymous site to vote for favorite cat pictures does not need anywhere near as much security as the web portal for launching nuclear missiles. Your security needs probably fall somewhere in the middle.

Putting it in context

Neither can be answered without understanding the actual risk. Effectively what your team has developed is a permanent session identifier. Gitlab, for instance, did something similar, and were aware of it and left it unaddressed for quite some time before fixing it. The reason why it may not be a high priority is because a permanent session identifier is not a vulnerability on its own. Every system has session identifiers, or API keys, or something else that the client uses so that the server will remember who it is and let it back in. To some extent, this is just more of the same. If an attacker were to get a hold of a standard session identifier (perhaps by stealing cookies in an XSS attack) then the exact same things that you are concerned about would result: the attacker would have full access to the account and would be able to do anything that doesn't require password confirmation.

Additional risks

However there are three differences between your permanent access code and a typical session identifier:

  1. Your access code is permanent.
  2. Your access code is widely shared.
  3. Your access codes are included in the URL itself.

These can be dangerous in some circumstances. The permanency is an issue. A normal access code can at least be invalidated automatically or on logout, giving an attacker who obtains one a limited window in which to act. An attacker who steals a permanent access code gains access to the account permanently. That is a bit of a problem.
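The contrast is easiest to see in code. A toy in-memory session store, assuming nothing about your stack (class and method names are hypothetical): every token carries an expiry and can be explicitly invalidated, which is exactly what a permanent access code lacks.

```python
import secrets
import time

class SessionStore:
    """Toy session store with expiry and explicit logout."""

    def __init__(self, ttl_seconds: int = 3600):
        self.ttl = ttl_seconds
        self._sessions = {}  # token -> (user_id, expires_at)

    def create(self, user_id: str) -> str:
        token = secrets.token_urlsafe(32)
        self._sessions[token] = (user_id, time.time() + self.ttl)
        return token

    def lookup(self, token: str):
        entry = self._sessions.get(token)
        if entry is None:
            return None
        user_id, expires_at = entry
        if time.time() > expires_at:
            del self._sessions[token]  # expired: invalidate lazily
            return None
        return user_id

    def invalidate(self, token: str) -> None:
        """Logout: the stolen-token window closes here."""
        self._sessions.pop(token, None)
```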

The fact that the access code is routinely shared also increases the risk. Normally, access credentials never leave the browser's cookie store. Emailing them around makes it easier for them to leak accidentally: sent to the wrong address, sent via an email provider that doesn't use encryption, and so on.

Finally, allowing the access code to appear in the URL gives it more avenues to be stolen, since URL data may be cached by intermediate servers or written to logs, depending on HTTPS usage. If you are using HTTPS, this may be less of a concern (since the exact URL is also encrypted while the request transits the internet).

Weighing your options

Ultimately, your team needs to understand the actual risks; if you consider this dangerous, you won't have any success changing people's minds otherwise. Saying "Guys, this is dangerous and we need to change it!" will likely get shrugs. Explaining why it is dangerous and how attackers may take advantage of it will be much more convincing. At a minimum, your team should treat the code for what it is (i.e. full account access) and protect it accordingly. You also need to make sure it is long enough that it can't be brute-forced.
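"Long enough" is just arithmetic: a random code over an alphabet of size *A* and length *L* costs on the order of 2^(L·log2(A) − 1) guesses. A quick back-of-the-envelope helper (illustrative, not part of any library):

```python
import math

def guess_work_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy in a uniformly random code; an attacker needs
    on the order of 2**(bits - 1) guesses on average."""
    return length * math.log2(alphabet_size)

# An 8-character hex code: 32 bits -- brute-forceable in hours online,
# seconds offline. A 43-character URL-safe token (64-char alphabet):
# ~258 bits -- infeasible for any attacker.
```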

Even so, in some cases a system like this may make perfect sense. As an example, this is no different from Google Drive's "Share URL" scheme, which generates a URL you can share with anyone, thereby granting them full access to a document. Perhaps your team (and your customers) find the convenience of a simple login URL to be worth the potential security concerns. Then again, if this is an e-commerce store and someone can break in and order things for themselves, or if you are storing anything privacy-focused, or any number of other scenarios, the potential risk of an attacker using this to get into someone's account may not be worth the convenience. Ultimately that's a decision for your team and your customers to make (keep in mind that your customers may make their opinion known by leaving if this results in a widely publicized data breach).

Conor Mancone
  • 29,899
  • 13
  • 91
  • 96
0

No secrets should be stored in a URL. There are lots of ways it can leak.

  • The Referer header (may be sent to another server if content from a third party is included in a page, or if a user clicks a link to another website)
  • Access logs or error logs
  • Shoulder surfing
  • The client's clipboard (if a person or script copies the URL)
  • Addons, (anti)-virus software, anti-phishing services, or other client-side software
  • Server and web app misconfigurations (Including when multiple applications run on the same domain. 3rd party software might not be careful with same site referers, logs, or internal links.)
  • Social engineering ("Yeah, I can help you with that. Just tell me what page you were looking at.")
  • Web browser history (and caches)
  • Proxies or network eavesdroppers (if not end-to-end encrypted)
  • All the ways cookies or passwords can leak. (Browser bugs, HeartBleed, spyware)

And for this specific case you also have

  • Network eavesdroppers reading emails sent between hosts
  • Email providers snooping on stored or incoming emails
  • A user that bookmarks their URL and syncs bookmarks between devices would be in trouble if any one of those devices were compromised (even if the client they actually use to visit the website is secure.)
  • "Hashes" might be short enough to brute force. (Which could be the case if collisions are frequent.)
  • Insecure RNGs could allow someone with knowledge of a few secret URLs to predict other secret URLs
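The insecure-RNG point above can be made concrete. A non-cryptographic generator such as Python's Mersenne-Twister-based `random` module is fully determined by its internal state: an attacker who recovers the state (or the seed) can replay every "secret" it will ever produce, whereas `secrets` draws from the OS CSPRNG. A small demonstration (the seed value is arbitrary):

```python
import random
import secrets

# Attacker who learns the seed reproduces the victim's entire stream:
victim = random.Random(1234)
tokens_issued = [victim.getrandbits(32) for _ in range(3)]

attacker = random.Random(1234)
predicted = [attacker.getrandbits(32) for _ in range(3)]
# predicted == tokens_issued: every "random" URL is now known.

# By contrast, `secrets` has no seed to recover:
safe_token = secrets.token_hex(16)
```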

Additionally, if a URL is ever leaked then robots could scrape or index all the personal information they find. (robots.txt is just a suggestion.) It could be a big problem if Google's web crawler decides to index a link it found somehow.

And even anti-spam or anti-virus email scanners could increase the risk of leaks. You don't have control over or knowledge of how they handle data. They could end up disclosing the link, failing to securely delete data, or allowing a bug to leak page content.

Future Security
  • 1,701
  • 6
  • 13