
In a recent pen test of a web application, one of the issues found was a 'backup file': a JavaScript file that had been renamed to filename.js1 when an updated version of filename.js was uploaded.

The 'backup file' lives in a directory with directory listing disabled and is not referenced or used anywhere in the application.

How did they find this file?

schroeder
Alfie

  • The obvious guess is that for every `file.js` that _is_ referenced, they try things like `file.bak`, `file.js1`, `file.bak.js`, `file.js.safe` etc. – TripeHound Aug 27 '19 at 11:14
  • @TripeHound thank you - this was my initial suspicion but I wanted to check there weren't any tricks I was missing. More effort will be made in the future not to leave these files around, but being a less-predictable-renamer is probably not a bad idea either. – Alfie Aug 27 '19 at 11:58
  • @Alfie Yes, it is. It masks the problem rather than solving it. Yes, a file named `filename.js-{7bb744e7-b923-459b-a787-93ffe3b55ff0}` is likely not guessable, but that's like saying "I can leave my door unlocked because I have a tripmine right behind the door". Effective, but completely the wrong approach. – MechMK1 Aug 27 '19 at 12:03
  • @MechMK1 I understand security through obscurity is never a good thing, but should something like this get overlooked again, it would at least offer another degree of protection, albeit a small one. No point making it easy for them, right? – Alfie Aug 27 '19 at 12:13
  • @Alfie Yes, but again, this is fundamentally the wrong approach. Automate your deployment. Ensure that there is *no way* for old files to remain in your webroot. This should not be something that is done manually. – MechMK1 Aug 27 '19 at 12:15
  • @MechMK1 Understood; appreciate the advice, thanks. – Alfie Aug 27 '19 at 12:18
  • Better use of version control would alleviate the temptation to leave old files lying around. At least that's what I see at my current company :-) – George M Reinstate Monica Aug 27 '19 at 20:08
  • Um... if it was a pen test, then you should have a report that describes how they found the file. Or was it an unauthorized attack calling itself a pen test? – atk Aug 28 '19 at 06:13
  • @atk That might be in another vulnerability report, pure luck, brute force, or the result of an earlier test round. It is possible that the vulnerability revealing the file is out of scope of the test (though the information so gained is not), so there is no such description. – Chieron Aug 28 '19 at 13:11
  • @Chieron In any of those cases, a professional pen test report should still indicate how the file was discovered. If it's not in there, the client should go back to them and ask to fix/supplement the report. (And if it's not in the contract that all details include steps to reproduce - even if it's "${scanner} discovered it, but it was out of scope so we didn't explore" - then it should be in the contract.) – atk Aug 28 '19 at 15:49
  • @atk The pen test was conducted by a client for their own benefit. We were only provided with the results in a very limited format. – Alfie Aug 29 '19 at 13:59
  • @Alfie, if you are providing a cloud solution, the EULA, terms of service, and customer contracts should explicitly state whether clients are allowed to pen test and specify how they provide results. This should include what they need to provide with regard to authorized pen test results. – atk Aug 29 '19 at 16:06
  • @Alfie, if you have an on-prem solution, you have to handle it like any other bug. Customers can change the configuration, and even the software, and you wouldn't be at fault. Ask for the steps to reproduce. If they won't provide them, then - just like any other bug - make a best effort to identify possible product flaws, and if you find none, tell the customer: reproduction steps or no fix. – atk Aug 29 '19 at 16:07
  • @atk This was an authorized test, but including details in our T&Cs regarding how results are to be provided is a good idea that I am not sure we currently implement. I will follow this up, thanks. – Alfie Aug 29 '19 at 16:27

5 Answers

75

Brute force scanners

Many automated scanners get around disabled directory listings by "brute force" searching for files. This means that they will check for additional files with names similar to files that do exist (e.g. filename.js1), as well as files that aren't referenced at all (e.g. secret.txt). If you happen to have a file whose name is on the brute-forced list in an accessible directory, it will be found regardless of whether or not directory listing is enabled.

It's worth pointing out that hackers do this same thing, so this is a real issue. In general, if something is in a publicly accessible directory, then you should assume it will be found. So if you don't want it to be public then you need to keep it out of public directories - disabling directory listing provides very little security.
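To make the idea concrete, here is a minimal sketch of a wordlist-based scanner's core step. The wordlist entries and the target URL are purely illustrative (not taken from the actual test), and the HTTP requests themselves are left as a comment:

```python
# Sketch of wordlist-based brute forcing: build candidate URLs for names
# that were never linked anywhere. Wordlist and base URL are made up.
import urllib.parse

WORDLIST = ["admin/", "backup.zip", "secret.txt", ".git/config"]

def candidate_urls(base_url, wordlist):
    """Build the URLs a scanner would request, one per wordlist entry."""
    return [urllib.parse.urljoin(base_url, name) for name in wordlist]

# A real scanner would now issue a GET for each candidate (e.g. via
# urllib.request.urlopen) and record any response that is not a 404.
for url in candidate_urls("https://example.com/assets/", WORDLIST):
    print(url)
```

Real tools use wordlists with tens of thousands of entries, which is why "nobody will guess this name" is a weak defense.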

Real vulnerabilities

Finally, this may not seem like a large issue (and it probably isn't), but leaving backups of JavaScript files in public directories is a bad idea in general. When it comes to XSS, an attacker will generally have the most success if they are able to exploit a JavaScript file hosted on the same domain, because doing so gives opportunities to bypass a CSP or other security "firewalls". As a result, if an older JavaScript file happened to contain a security vulnerability that was fixed in a later version, and an attacker found a way to force the user's browser to load the older file, they may chain their way to a more damaging vulnerability. This may seem far-fetched, but chaining together many small vulnerabilities into a larger one is how many of the worst breaches happen.

tl;dr: If something is hosted by your website but doesn't have a reason to be there, then it is a liability. Kill it with prejudice.

Conor Mancone
12

There are many tools available which brute-force filenames. Some of these are more intelligent than others.

For instance, a "dumb" tool may just have a word list, containing probable names for files and directories, such as

  • /admin/
  • wp-admin.php
  • login.php

A more intelligent tool may look at the files it already knows about (e.g. by crawling the application) and try to find similarly-named files. In your case, there was a file named filename.js, so the tool likely tried to mangle the name, as TripeHound pointed out in a comment:

  • filename.js1
  • filename.js.bak
  • filename.bak.js
  • .filename.js

Why are these files a problem?

One might be tempted to think that an unreferenced file is "safe", because it's not a part of the application. However, the file is still accessible, and depending on the contents of the file, this may allow an attacker to do various things:

  • An attacker might be able to circumvent a URL filter and include a JavaScript file that still contains vulnerabilities from an older version.
  • Unreferenced files may be archives left over from deployment that still contain source code, thus allowing an attacker to gain access to it.
  • Unreferenced files may contain credentials or other sensitive configuration data.

In general, it's best to avoid having unreferenced files in your webroot. As the name implies, they are not used by the application and thus are only a source of problems.

4

The real problem here is that you have a deployment / production environment that is not controlled (and thus replicable) through an automated source control and deployment system.

This means that, if you find some new file in your system, you don't know if that's some kind of backdoor dropped by a root kit, or some innocuous renamed file your colleague left behind.

In general, a best security practice is to only ever have files on a server that are put there by an automated script that clones some kind of build artifacts, and to have that automated process also delete files that should no longer be there. Then you can run audits for "are the files in production what the build system says they should be?"
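One way to sketch such an audit, assuming the build system emits a manifest of expected files and their hashes (the manifest format and paths here are hypothetical):

```python
# Sketch of a production-files audit: compare files on disk against a
# build manifest mapping relative path -> expected sha256 hex digest.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def audit(webroot, manifest):
    """Return (unexpected, modified, missing) relative paths."""
    root = Path(webroot)
    on_disk = {p.relative_to(root).as_posix(): sha256_of(p)
               for p in root.rglob("*") if p.is_file()}
    unexpected = sorted(set(on_disk) - set(manifest))   # e.g. filename.js1
    missing = sorted(set(manifest) - set(on_disk))
    modified = sorted(p for p in set(on_disk) & set(manifest)
                      if on_disk[p] != manifest[p])
    return unexpected, modified, missing
```

Run on a schedule, this would have flagged the leftover filename.js1 the moment it appeared.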

And if you think that "bad deployment practices can't possibly be a life threatening problem for my business," then I invite you to google "Knight Capital Group."

Jon Watte
1

The same way an attacker would: by guessing.

That's why you have pen testers: to test things that you may not have thought of.

Remove the backup file from your application so that it is not accessible.

Lightness Races in Orbit
0

How did they find this file?

By very simple "brute force" guesswork, as others have already mentioned.

You will be able to see this as it happened in your web server's logs: the request for this file, along with the other guesses that were made. Unless, of course, the testers managed to find a hole that allowed them to reset your logs (which would be listed in your test report), or you didn't have sufficient logging enabled.
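As a sketch of what to look for, the scanner's guesses show up as a burst of requests for variations on a known filename. This example assumes the common/combined access log format; the sample log lines are fabricated, so adapt the parsing to your server:

```python
# Sketch: pull a scanner's guesses out of access-log lines by matching
# requests whose path contains a known filename stem. Sample lines are
# fabricated; the regex targets the common/combined log format.
import re

LOG_LINE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def probe_paths(log_lines, stem="filename"):
    """Return (path, status) pairs for requests mentioning `stem`."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and stem in m.group("path"):
            hits.append((m.group("path"), int(m.group("status"))))
    return hits

sample = [
    '1.2.3.4 - - [27/Aug/2019:10:00:01] "GET /js/filename.js HTTP/1.1" 200 512',
    '1.2.3.4 - - [27/Aug/2019:10:00:02] "GET /js/filename.js.bak HTTP/1.1" 404 162',
    '1.2.3.4 - - [27/Aug/2019:10:00:03] "GET /js/filename.js1 HTTP/1.1" 200 498',
]
print(probe_paths(sample))
```

The 404s are the failed guesses; a 200 on a name you never deployed deliberately, like filename.js1, is the hit.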

but being a less-predictable-renamer is probably not a bad idea either

It is good practice to avoid having old files in your application directory at all, and even in your source directories, especially if your deployment model is "copy from source".

Instead of keeping old versions of files around for reference or other reasons, make use of the features offered by your version control arrangement. All VCSs will let you retrieve older versions of files, some will let you shelve intermediate versions without properly checking them in, you can use branching to separate out experimental work, etc.