
I am currently struggling to make nginx secure just the folder name, regardless of what the file name inside it is. Say I'm accessing a file in the folder /one/two/three; the URL would look like this:

http://example.com/one/two/35d6d33467aae9a2e3dccb4b6b027878/file.mp3

So the folder "three" would be accessible only via its directory MD5, and the real path would return 403. I have thousands of such folders, so I need to keep them hidden, but still allow static access to them from remote clients which know only the MD5 at runtime.

Meanwhile, links like this should work too:

http://example.com/one/two/35d6d33467aae9a2e3dccb4b6b027878/four/file.mp3

So only a specific directory level is hidden.
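
For illustration, such a token could simply be derived from the real directory name; a minimal sketch (the secret and the HMAC variant are only assumptions, any deterministic mapping would do):

<?php
// Hypothetical token generation for the hidden directory level.
// A plain md5() matches the URL style above; an HMAC with a secret
// prevents the real folder name from being brute-forced from the token.
$folder = 'three';
$secret = 'change-me';                        // assumed server-side secret

$token = md5($folder);                        // 32-char hex token used in the URL
$safer = hash_hmac('md5', $folder, $secret);  // keyed variant

echo "http://example.com/one/two/$token/file.mp3\n";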

Uwe Keim
  • Your question is unclear. Please edit it to make it clearer what your problem is. AWS S3 signed URLs look like they might address your problem, but I doubt Nginx will do something this specialized. http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html – Tim Dec 04 '16 at 20:32
  • What I'm trying to do is to hide the real names of the folders, and keep the same folder name regardless of the file I'm trying to access in it (secure_link_secret in nginx makes the link depend on the file name). I could simply make a script to rename all folders to their md5 hashes, but it would be a nightmare to navigate through them if I need to find a specific one on my server. – Gallardo994 Dec 04 '16 at 20:41
  • Nginx is a web server. It can't do this. – Tim Dec 04 '16 at 20:48
  • It would solve the problem if secure_link could be customized to use my own expression. – Gallardo994 Dec 04 '16 at 21:01
  • Perhaps you should've described your use case and asked how to secure your files, rather than suggesting an implementation method. – Tim Dec 04 '16 at 21:10
  • I need to make the files accessible only via a specific string which I can generate from the original folder name, which I can't reveal to other people. – Gallardo994 Dec 04 '16 at 23:08

2 Answers


You can achieve this by using internal locations for the hidden folders or files you'd like to protect, plus a script that checks whether the hashed code allows access to your files or not.

Direct access to your hidden files (e.g. /protected/folder1/folder2/file.pdf) is not allowed by Nginx because this location is marked as internal. But your script can redirect to this location with the special X-Accel-Redirect header.

So you can let Nginx do what it does best, delivering data, while your script only checks whether access is allowed.

Below you can see a simple example for this.

The folder /data contains public content (e.g. public images). Non-public images are stored in a different folder (outside htdocs) and served via the location /protected_data. This location has an alias pointing to the folder containing the protected images, plus the internal directive, so it is not accessible from outside.

In the PHP script I first check whether the protected file exists. This may be a security issue, but checking user rights is usually more expensive (time consuming) than a simple file_exists. So if security is more important than performance, you can swap the order of the checks.

Nginx server config:

...

root /var/www/test/htdocs;

location / {
    index index.php index.htm index.html;
}

location /data {
    expires 30d;
    try_files $uri /grant-access.php;
}

location /protected_data {
    expires off;
    internal;
    alias /var/www/test/protected_data;
}

location ~ \.php$ {
    if (!-e $request_filename) {
        rewrite     /       /index.php last;
    }
    expires                 off;
    include                 fastcgi_params;
    fastcgi_pass            unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout    300;
    fastcgi_param           SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    access_log              /var/log/nginx/access.log combined;
}
...

PHP script:

<?php
// this is the folder where protected files are stored (see Nginx config alias directive of the internal location)
define('PROTECTED_FOLDER_FILESYSTEM', '/var/www/test/protected_data');

// this is the url path we have to replace (see Nginx config with the try_files directive)
define('PROTECTED_PUBLIC_URL', '/data');

// this is the url path replacement (see Nginx config with the internal directive)
define('PROTECTED_INTERNAL_URL', '/protected_data');

// check if file exists
$filename = str_replace(
    PROTECTED_PUBLIC_URL .'/',
    '/',
    parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH)
);
if (!file_exists(PROTECTED_FOLDER_FILESYSTEM . $filename)) {
    http_response_code(404);
    exit;
}

// check if access is allowed (here we will use a random check)
if (rand(1,2)==1) {
    // grant access
    header('X-Accel-Redirect: ' . PROTECTED_INTERNAL_URL . $filename);
} else {
    // deny access
    http_response_code(403);
}
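
For the original question, the random check above could be replaced by comparing the hash segment of the requested URL against a hash computed from the real folder names. A rough sketch, assuming the hidden level is the third path segment (as in the question's example URLs) and the token is an HMAC of the folder name with a server-side secret; the secret, paths and the scandir() lookup are placeholders, not part of the answer above:

<?php
// Hypothetical adaptation of the access check: the client requests
// /one/two/<token>/file.mp3 and we map <token> back to the real folder.
define('SECRET', 'change-me');
define('PROTECTED_FS', '/var/www/test/protected_data');
define('PROTECTED_INTERNAL', '/protected_data');

$path  = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$parts = explode('/', trim($path, '/'));       // e.g. ['one', 'two', '<token>', 'file.mp3']
$token = $parts[2] ?? '';
$rest  = implode('/', array_slice($parts, 3)); // 'file.mp3' or 'four/file.mp3'

// Reject path traversal in the remainder of the URL.
if ($token === '' || strpos($rest, '..') !== false) {
    http_response_code(403);
    exit;
}

// Find the folder whose hashed name matches the token.
// With thousands of folders you would cache this mapping instead of scanning.
foreach (scandir(PROTECTED_FS) as $dir) {
    if ($dir === '.' || $dir === '..' || !is_dir(PROTECTED_FS . '/' . $dir)) {
        continue;
    }
    if (hash_equals(hash_hmac('md5', $dir, SECRET), $token)) {
        header('X-Accel-Redirect: ' . PROTECTED_INTERNAL . '/' . $dir . '/' . $rest);
        exit;
    }
}

http_response_code(403);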
Jens Bradler

In order to make this possible, nginx would need to be able to tell that the hash 35d6d33467aae9a2e3dccb4b6b027878 corresponds to three. Nginx is not able to do this as of today (and I don't think it's on the todo list).

The only way I can imagine achieving something similar would be to host the files in another location and to create symlinks, named after the hash of the target directory, in your location's root directory at the time the files are created/uploaded.

For example, your webserver location http://example.com/one/two/ points to a directory (say /var/www/html/), in which the symlink 35d6d33467aae9a2e3dccb4b6b027878 points to the directory three/, which is located elsewhere (e.g. /var/www/protected/).

The upload would need to trigger a script or something similar to create the folder three/ in /var/www/protected/, hash 'three' and then create the symlink /var/www/html/35d6d33467aae9a2e3dccb4b6b027878.
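
A minimal sketch of such an upload hook in PHP, assuming the example paths above (the folder name and the plain-MD5 scheme are placeholders):

<?php
// Hypothetical upload hook: create the real folder outside the web root
// and expose it only through a symlink named after the MD5 of its name.
$name      = 'three';
$protected = '/var/www/protected/' . $name;
$webRoot   = '/var/www/html/';

if (!is_dir($protected)) {
    mkdir($protected, 0750, true);   // real folder, never served under its own name
}

$link = $webRoot . md5($name);       // symlink named after the hash
if (!is_link($link)) {
    symlink($protected, $link);      // nginx serves the content via the hash only
}

Nginx follows symlinks by default (disable_symlinks off), so no extra configuration should be needed beyond the existing root.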

This is the only way I could think of.

randomnickname
  • This would put a huge overhead on the server IO because the number of folders is above 6000 and constantly increasing. – Gallardo994 Dec 04 '16 at 21:19
  • @Gallardo994 6000 symlinks is not a huge overhead – Alexey Ten Dec 05 '16 at 05:22
  • It requires checking every change in every folder each time, and they always change and get modified while the number of folders keeps increasing. Still not an overhead for a simple HDD? – Gallardo994 Dec 09 '16 at 18:03