24

Imagine a server setup of a shared webhosting company where multiple (~100) customers have shell access to a single server.

A lot of web "software" recommends chmodding files to 0777. I'm nervous about our customers unwisely following these tutorials, opening up their files to our other customers. (I'm certainly not using chmod 0777 needlessly myself!) Is there a method to make sure that customers can only access their own files and prevent them from accessing world-readable files of other users?

I looked into AppArmor, but that is very tightly coupled to a process, which seems to fail in that environment.

200_success
Phillipp
  • You've mentioned AppArmor. Are you limited to Ubuntu or can SELinux be used? – Cristian Ciupitu Jul 11 '14 at 12:00
  • 12
    I would actually consider whether the recommendations of the "web software" to `chmod files 0777` is strictly necessary, i.e. address the root cause of the problem, rather than the symptom that, by doing so, anyone can read anyone else's files. Many times the _allow all access_ recommendation is simply a cheap way of avoiding support calls, or lack of technical prowess in being able to set permissions up correctly. In almost no cases have I had to set files `0777` or grant applications full root access when requested. Education of the users and/or vendors helps massively here. – Cosmic Ossifrage Jul 11 '14 at 12:16
  • 3
    @CosmicOssifrage, users can't be educated that easily, they don't want to read instructions or manuals. – Cristian Ciupitu Jul 11 '14 at 12:45
  • @CristianCiupitu, respectfully, I disagree. While users are unlikely to be proactive about these things and would rather the lazy `0777` approach, reactive, notification-based systems which e.g. ping the user an email when they have world-readable files or some other potentially insecure configuration work wonders. Target the education precisely when and for whom it is required; you don't need to tell the pro who takes these precautions already, but you might need to tell the newbie about specific configuration flaws if their setup is insecure simply through ignorance of good practice. – Cosmic Ossifrage Jul 11 '14 at 13:12
  • @CosmicOssifrage, so you're proposing to scan the home directories for any lax permissions and notify the users if so? – Cristian Ciupitu Jul 11 '14 at 13:15
  • @CosmicOssifrage a lot of people do not care about howtos, as Cristian already said. Just check the number of questions on askubuntu.com which could easily be solved by reading the manpage. Basically that's the issue we as sysadmins face: make it work, and make it simple enough that they understand what they should do. In other words: restrict everything possible and only allow what's absolutely necessary. To use the BOFH approach: instead of telling them they use insecure settings like 777, notify their boss about the possible security incident they provoke. – Dennis Nolte Jul 11 '14 at 13:38
  • @CristianCiupitu, yes, I've seen it done on two occasions, both in enterprise-scale deployments with a wide range of user expertise. One went as far as changing the permissions on what it considered "insecure" files. The user had to indicate understanding by listing files they explicitly want world-readable in a file in their home directory. In other words, the users had to _prove_ to the system they knew what they were doing, or those insecure permissions were going to get overwritten. I've seen similar for SSH keys with `from="*"` declarations being removed unless marked up in a special way. – Cosmic Ossifrage Jul 11 '14 at 14:06
  • 12
    Any "web software" that still recommends 777 permissions needs to be taken out and *shot*. Use `suexec` or `mpm_itk` or similar. – Shadur Jul 11 '14 at 15:27
  • @CosmicOssifrage I hear you, but we can't prevent users from doing this. Educating them would be a monstrous task because there are thousands of customers. – Phillipp Jul 11 '14 at 16:05
  • @CristianCiupitu We use Ubuntu, so SELinux would be possible, but a lot more complex. – Phillipp Jul 11 '14 at 16:08
  • @Phillipp, SELinux with its Multi Category Security (MCS) would have added another layer of security. See for example [Secure Virtualization Using SELinux (sVirt)](https://danwalsh.livejournal.com/30565.html). – Cristian Ciupitu Jul 11 '14 at 18:10
  • 3
    @CosmicOssifrage I don't think Phillipp is telling or forcing users to `chmod 0777` their files. I think he's nervous about them going to `loltoturialz.com/php_problems` and setting `chmod 0777` on their own while blindly following a poorly written article. There's really no way to prevent them from doing so, or to prevent them from being upset when someone steals their stuff. – Kevin Jul 11 '14 at 22:23
  • @Kevin You are absolutely right. But in the end the customers will complain that we messed up and "didn't keep them safe". Users think like this, and we must prevent that. – Phillipp Jul 12 '14 at 02:05
  • 2
    @kevin - which is precisely why warranty voiding was created. I have almost never seen a serious appliance (be it compiled software, a bunch of scripts or whatever) without such a clause. And believe it or not - in most corporate environments users are well aware of this – Dani_l Jul 12 '14 at 05:55

8 Answers

34

Put a restricted and immutable directory between the outside world and the protected files, e.g.

/
 ├─ bin
 ├─ home
 │  └─ joe <===== restricted and immutable
 │     └─ joe <== regular home directory

or /home/joe/restricted/public_html.

Restricted means that only the user and perhaps the web server can read it (e.g. modes 0700/0750 or some ACLs).

Immutability can be done with chattr +i or by changing the ownership to something like root:joe.

An easy way to create that hierarchy on Ubuntu would be to edit /etc/adduser.conf and set GROUPHOMES to yes.
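A minimal sketch of that layout, using a scratch directory since the real commands need root (the user name joe and all paths are just examples):

```shell
# Demonstrate the wrapper layout in a throwaway directory (no root needed).
base=$(mktemp -d)            # stands in for /home on the real server
mkdir -p "$base/joe/joe"     # outer restricted wrapper + inner real home
chmod 750 "$base/joe"        # wrapper: owner and group only
chmod 755 "$base/joe/joe"    # inner: the user's actual home directory
stat -c %a "$base/joe"       # -> 750

# On the real system, as root, you would additionally make the wrapper
# immutable for the user and point the account at the inner directory:
#   chown root:joe /home/joe          # or: chattr +i /home/joe
#   chown joe:joe  /home/joe/joe
#   usermod -d /home/joe/joe joe
```

With the wrapper owned by root and mode 750, joe cannot loosen it, and other customers never get past it no matter what permissions joe puts on the files inside.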

Cristian Ciupitu
15

There is an option which you might want to consider (depending on how much work you want to put into it).

As others already posted, "normally" you cannot prevent someone with shell access from reading world-readable files.

However, you could chroot them into their own homes, basically limiting shell access to, first, only the root directory you want (i.e. the home directory) and, second, preventing the users from executing anything you do not want them to execute.

I took a similar approach when I had one user who needed access to the web files, but whom I did not want to see other files outside the web folder.

This had a lot of overhead, was a mess to set up, and every time I updated something, it broke.

But today I think you could achieve it pretty easily with the OpenSSH chroot option:

WikiBooks OpenSSH
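For the SFTP-only case this can be sketched in sshd_config roughly as follows (the group name customers is an assumption; note that OpenSSH requires the chroot target to be owned by root and not writable by group or others):

```
Match Group customers
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
```

A full interactive shell inside the chroot is harder: the shell and every binary and library the users need have to be copied or bind-mounted into each jail.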

Peter Mortensen
Dennis Nolte
  • chroot for SFTP is easy to implement, but I'm not sure it's that easy for shell access. You'd have to set up a chroot with all the binaries and libraries for each user. – Cristian Ciupitu Jul 11 '14 at 12:30
  • 2
    That's implementation specific. Arch Linux has a specific arch-chroot command that takes care of all the extra bind mounts etc.: https://wiki.archlinux.org/index.php/Change_Root#Change_root – Dani_l Jul 11 '14 at 12:41
  • @CristianCiupitu that's what I did, allowing only a specific subset of commands and linking all necessary libraries; that's why I said it was a mess :) @Dani_l true, my setup was a Debian server; I never got the time to check on Gentoo, sadly. – Dennis Nolte Jul 11 '14 at 12:49
  • @Dani_l: what about the installed packages? The `arch-chroot` command doesn't seem to cover that. And then there's also the issue of wasted disk space with all the duplicates. I'm not saying it's impossible to do it, just that it might be a bit more complicated currently. – Cristian Ciupitu Jul 11 '14 at 13:13
  • arch-chroot doesn't automagically handle packages. It just sets up a chroot with binds to /proc, /sys, etc. – Dani_l Jul 11 '14 at 14:55
  • 1
    Something to make this a -lot- easier is to use UnionFS to chroot users into a special union of the rootfs in read-only mode and a read-write home directory. This means they see all the system packages and binaries, but writes are automatically done in their home folder. This -must- be coupled with making all of the home directories permission 700, or else users could read files from other users anyway. – Vality Jul 11 '14 at 23:31
  • @Vality Is UnionFS stable enough? I would assume a better choice is aufs aufs.sourceforge.net – Dani_l Jul 12 '14 at 05:20
  • @Dani_l Their code bases are really very similar, it seems the differences are mostly politics. I personally find UnionFS V2 is the most stable for my systems but many seem to like both, frankly I think either would be a fair choice so by all means use aufs if you find it to work better, I think only testing on your own machine can really determine stability. – Vality Jul 12 '14 at 20:04
  • @Vality I just saw http://www.unionfs.org/ and my main concern was the /sys/ vs. /proc/mount argument. I couldn't figure out if there really is such a limitation, aside from superblock numbers – Dani_l Jul 12 '14 at 21:19
11

I have found POSIX Access Control Lists allow you, as the system administrator, to protect your users from the worst of their own ignorance by overriding the regular user-group-other file system permissions, without much of a chance of breaking anything crucial.

They can be especially useful if, for instance, you need home directories to be world-accessible because web content in ~/public_html/ needs to be accessible to Apache. (Although with ACLs you can now do the reverse: remove access for all and set a specific effective ACL for the apache user.)

Yes, a knowledgeable user can remove or override them again, but ACLs are just uncommon enough that that's unlikely, and the users who can are typically not the ones to conveniently chmod -R 777 ~/ anyway, right?

You need to mount the filesystem with the acl mount option:

 mount -o remount,acl /home

Many distributions create user groups by default, where each user has their own primary group; additionally, I have put all users in a secondary group with the unimaginative name users.

Using ACLs, it is now trivial to prevent other users from accessing the home directories:

Before:

 chmod 0777 /home/user* 

 ls -l /home/user*
 drwxrwxrwx.  2 user1  user1  4096 Jul 11 15:40 user1
 drwxrwxrwx.  2 user2  user2  4096 Jul 11 15:24 user2

Now set the effective directory permissions for members of the users group to 0, i.e. no read, write or execute access:

 setfacl -m g:users:0 /home/user*

 ls -l 
 drwxrwxrwx+  2 user1  user1  4096 Jul 11 15:40 user1
 drwxrwxrwx+  2 user2  user2  4096 Jul 11 15:24 user2

The + sign denotes the presence of ACL settings, and getfacl can confirm that:

getfacl /home/user1
getfacl: Removing leading '/' from absolute path names
# file: home/user1
# owner: user1
# group: user1
user::rwx
group::rwx
group:users:---
mask::rwx
other::rwx

The group:users:--- line shows that group effectively has no access rights, despite the regular permission for other being other::rwx.

And testing as user1:

[user1@access ~]$ ls -la /home/user2
ls: cannot open directory /home/user2: Permission denied

A second common solution on shared systems is to have the automounter mount home directories on demand on a server dedicated to shell access. That's far from foolproof, but typically only a handful of users are logged in concurrently, meaning only the home directories of those users are visible and accessible.
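That automounter setup can be sketched with autofs; the map syntax below is standard, but the server and export names are made up:

```
# /etc/auto.master — mount home directories under /home on demand
/home  /etc/auto.home  --timeout=60

# /etc/auto.home — wildcard map: each user's home comes from the file server
*  -fstype=nfs,rw  homeserver:/export/home/&
```

With the 60-second idle timeout, a home directory disappears from /home shortly after its owner logs out, so it isn't even visible to browse.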

HBruijn
  • 5
    What is *"f.i."*? I wouldn't recommend using acronyms or abbreviations unless they're a classic one like "e.g.", "i.e.", "etc" and perhaps OP. – Cristian Ciupitu Jul 11 '14 at 14:13
3

For example, if you want a user to have access only to their own home directory, you can do:

cd /home
sudo chmod 700 *

Now /home/username is only visible to its owner. To make this the default for all new users, edit /etc/adduser.conf and set DIR_MODE to 0700 instead of the 0755 default.

Of course, how to alter the default DIR_MODE depends on your distribution; the instructions above work on Ubuntu.
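As a sketch, the DIR_MODE change can be done with a one-line sed; since editing /etc/adduser.conf needs root, this demonstrates it on a throwaway copy:

```shell
# Work on a stand-in for /etc/adduser.conf (the real edit is the same sed call).
conf=$(mktemp)
echo 'DIR_MODE=0755' > "$conf"
sed -i 's/^DIR_MODE=.*/DIR_MODE=0700/' "$conf"
grep DIR_MODE "$conf"        # -> DIR_MODE=0700
```

Remember this only affects users created afterwards; existing homes still need the one-off `chmod 700 /home/*`.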

Edit:

As @Dani_l correctly mentioned, this answer is correct in making them NOT world readable.

Marek
3

Linux Containers (LXC) could be the best combination of chroot and separate system.

  1. They are more like an advanced chroot, not full virtualization, but you can combine different Linux distributions on one server (they all share the host's kernel).

  2. You can give a user a complete userland and confine him there, so when the user logs in, he goes into his container. You can also limit processor and memory usage there.

Stéphane Graber, one of the LXC maintainers, has a nice tutorial series to help you get started.
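As a sketch with the LXC 1.x command-line tools (the container name, distribution and memory cap are examples; all of this needs root and an installed LXC):

```shell
# Create an Ubuntu 14.04 container from the public image server
lxc-create -t download -n customer1 -- -d ubuntu -r trusty -a amd64
lxc-start -n customer1 -d         # boot it in the background
lxc-attach -n customer1           # get a shell inside the container
lxc-cgroup -n customer1 memory.limit_in_bytes 256M   # cap its memory
```

Each customer then sees only their own container's filesystem, and resource limits keep one customer from starving the others.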

Cristian Ciupitu
maniaque
  • You can't really combine different operating systems, because all of them need to use the *Linux kernel*, but you can use different *distributions*. – Cristian Ciupitu Jul 11 '14 at 14:11
  • 1
    Thanks :) Yes, different linux kernel based operating systems. – maniaque Jul 11 '14 at 14:43
  • @CristianCiupitu do you mean the same identical Linux kernel? or do you mean that each container can have a different version of the kernel? – agks mehx Jul 12 '14 at 07:53
  • @agksmehx, *all the LXC containers share the kernel of the host*. Only their applications and libraries are used. So for example if you have a RHEL 7 host with an Ubuntu 14.04 container, the RHEL kernel (3.10.0-123) will be used, while the Ubuntu one (3.13.0-24.46) will not be used; read also [this comment](https://www.stgraber.org/2013/12/20/lxc-1-0-your-first-ubuntu-container/#comment-174569) from the tutorial. By the way, since the kernels of the containers are not used, it might be a good idea to remove them in order to save some disk space. – Cristian Ciupitu Jul 12 '14 at 13:53
  • @CristianCiupitu that's what i thought. it wasn't clear from the answer or comment, so i wanted to clarify. – agks mehx Jul 12 '14 at 21:27
2

Just to be pedantic - No, there isn't.
@Marek gave a correct answer, but your question is incorrect - you can't prevent anyone from accessing "world readable" files.
Either they are world readable, or they are not. @Marek's answer is correct in making them NOT world readable.

Dani_l
  • 2
    Wrong: chroot/jail the user into a subfolder and he's unable to read "normally" world-readable files. – Dennis Nolte Jul 11 '14 at 12:13
  • 1
    -1 I think you're being needlessly critical of the OP's question. He wants to give his customers a safety net in case they aren't smart about their permissions. But it doesn't look to me like the OP is unaware of how Unix file permissions work or basic security principles. – Kevin Jul 11 '14 at 22:06
  • Also, you can put the files into a directory inside a 000 permissions directory, then nobody can access them even if the files are world readable. – Vality Jul 11 '14 at 23:32
  • nobody? not even root? ;-) – Dani_l Jul 12 '14 at 18:46
  • @Kevin agreed, my comment is close to being unnecessarily critical. However, Dani_l should not write that he's being pedantic and then be wrong. Not saying that I disagree with the rest of his answer. – Dennis Nolte Jul 14 '14 at 07:17
  • @DennisNolte I believe Kevin was referring to my answer, not your comment. – Dani_l Jul 14 '14 at 13:40
  • ah, that makes sense. – Dennis Nolte Jul 14 '14 at 13:56
0

I see no mention of the 'restricted shell' in the answers given so far.

ln /bin/bash /bin/rbash

Set this as their login shell.
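rbash is just bash started in restricted mode (no cd, no changing PATH, no output redirection, no command names containing /). The restriction itself can be checked without root; the account-setup lines assume a hypothetical user alice:

```shell
# Restricted bash refuses to change directory:
bash -r -c 'cd /tmp' && echo allowed || echo blocked   # -> blocked

# On the server, as root:
#   ln /bin/bash /bin/rbash
#   echo /bin/rbash >> /etc/shells
#   chsh -s /bin/rbash alice
```

Note that a restricted shell is only a speed bump on its own; it is usually combined with a locked-down PATH pointing at a directory of approved commands.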

bbaassssiiee
0

If the web server is running as the same user and group for every domain hosted, it is difficult (if not impossible) to make the setup secure.

You want certain files to be accessible to the user as well as the web server, but not to other users. But as soon as the web server can access them, another user could read them by putting a symlink to the file inside their own web site.

If you can get each web site to run as a separate user, then it becomes fairly simple. Each customer will now have two users on the system, one for the web server and one for shell access.

Create a group containing these two users. Now create a directory owned by user root and that group. That directory should have permissions 750, which means root has full access and the group has read and execute access. Inside that directory you can create home directories for each of the two users. This means the user's home directory will no longer have the form /home/username, but rather something with at least one more directory component. That is not a problem; nothing requires home directories to follow that specific naming convention.

Getting web sites running with different users and groups may be tricky if you are using name-based vhosts. Should it turn out that you can only make the separation work with IP-based vhosts, and you don't have enough IPs for each site, you can host each web site on an IPv6 address and put a reverse proxy for all of them on an IPv4 address.
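A sketch of that layout for one customer (the names acme, acme-shell, acme-web and the /srv prefix are invented); the directory part is demonstrated in a scratch directory, and the root-only account commands are shown as comments:

```shell
# Stand-in for the real top-level directory (e.g. /srv) — no root needed here.
base=$(mktemp -d)
mkdir -p "$base/acme/shell" "$base/acme/web"   # homes for the two users
chmod 750 "$base/acme"       # on the real box: root full access, group rx, others none
stat -c %a "$base/acme"      # -> 750

# On the real system, as root:
#   groupadd acme
#   useradd -g acme -d /srv/acme/shell acme-shell
#   useradd -g acme -d /srv/acme/web   acme-web
#   chown root:acme /srv/acme
```

Other customers are in neither the acme group nor owners of anything under /srv/acme, so the 750 wrapper stops them at the top regardless of what permissions the files inside carry.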

kasperd