102

Coming from the comments on this question, "Why is it bad to log in as root?":

The sudo mechanism is in place so that non-administrative tools "cannot harm your system." I agree that it would be pretty bad if some GitHub project I cloned were able to inject malicious code into /bin. However, what is the reasoning on a desktop PC? The same GitHub code, once executed and without any sudo rights, can wipe out my entire home folder, put a keylogger in my autostart session, or do whatever it pleases in ~.

Unless you have backups, the home folder is usually unique and contains precious, if not sensitive, data. The root directories, however, make up the system and can often be recovered by simply reinstalling it. There are configurations saved in /var and so on, but they tend to have less significance to the user than the holiday pictures from 2011. The root permissions system makes sense, but on desktop systems it feels like it protects the wrong data.

Is there no way to prevent malicious code from doing damage in $HOME? And why does nobody care about it?

phil294
  • 112
    [Obligatory xkcd](https://xkcd.com/1200/) – JoL Feb 26 '18 at 17:09
  • 8
    The real issue is that people rarely use mandatory access controls like AppArmor to protect their home directory. When they do, then protecting root protects AppArmor, which in turn protects your home. On Ubuntu for example, your browser is not necessarily allowed to access your holiday pictures, despite running as your user in your home. – forest Feb 27 '18 at 03:43
  • 7
    The OS's job is to protect itself from *you*, the untrusted user and, by proxy, the programs you (perhaps foolishly) run. If you run a program that deletes all your stuff, well, then it sucks to be you. But the OS needs to protect itself, and so you running a rogue program – intentionally or unintentionally – should not be able to disable the system. It makes no difference whether it is a desktop system or a server. – Christopher Schultz Feb 27 '18 at 14:52
  • 3
    User [error/stupidity/???] can completely prevent _that user_ using the system but shouldn't impact other users, nor the system as a whole. – Basic Feb 27 '18 at 15:26

13 Answers

101

I'm going to disagree with the answers that say the age of the Unix security model or the environment in which it was developed are at fault. I don't think that's the case because there are mechanisms in place to handle this.

The root permissions system makes sense, but on desktop systems, it feels like it protects the wrong data.

The superuser's permissions exist to protect the system from its users. The permissions on user accounts are there to protect the account from other non-root accounts.

By executing a program, you give it permissions to do things with your UID. Since your UID has full access to your home directory, you've transitively given the program the same access. Just as the superuser has the access to make changes to the system files that need protection from malicious behavior (passwords, configuration, binaries), you may have data in your home directory that needs the same kind of protection.

The principle of least privilege says that you shouldn't give any more access than is absolutely necessary. The decision process for running any program should be the same with respect to your files as it is to system files. If you wouldn't give a piece of code you don't trust unrestricted use of the superuser account in the interest of protecting the system, it shouldn't be given unrestricted use of your account in the interest of protecting your data.

Is there no way to prevent malicious code from doing damage in $HOME? And why does nobody care about it?

Unix doesn't offer permissions that granular for the same reason there isn't a blade guard around the rm command: the permissions aren't there to protect users from themselves.

The way to prevent malicious code from damaging files in your home directory is to not run it using your account. Create a separate user that doesn't have any special permissions and run code under that UID until you've determined whether or not you can trust it.
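
For instance, a minimal sketch of that approach (the account name and path are just placeholders):

```
# Create a throwaway account with its own home directory and no extra group memberships
sudo useradd --create-home sandbox

# Run the untrusted program as that account; it can only touch sandbox's own files
sudo -u sandbox /home/sandbox/untrusted-tool

# Remove the account and its home directory once you've finished evaluating the code
sudo userdel --remove sandbox
```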

There are other ways to do this, such as chrooted jails, but setting those up takes more work, and escaping them is no longer the challenge it once was.

Blrfl
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/73860/discussion-on-answer-by-blrfl-why-is-root-security-enforced-but-home-typically). – Rory Alsop Mar 01 '18 at 00:04
  • 4
    it is disturbing how people around here simply upvote the first answer they see. Before this was the accepted one, the previously highest-voted one got 45 upvotes. Since then, only one, while this one suddenly gained 50. I really wish everybody voted according to content instead of order. Sorry, off-topic comment. – phil294 Mar 02 '18 at 12:30
  • @Blauhirn That answer is actually accumulating a significant number of downvotes, which is why the score hasn't changed much. You do have a point though; even accounting for that, there has been a _lot_ more voting activity on the top-sorted answer than the one below it. – Ajedi32 Mar 02 '18 at 18:08
  • 2
    @Blauhirn if it makes you feel better, two days ago when I first stumbled on this question, I read all the answers and only upvoted this one, based on content (it wasn't even the most voted by then). (My point is that perhaps people find this answer better than the others - I do.) – Pedro A Mar 02 '18 at 19:46
  • @Hamsterrific There should be a badge for those of us who take the effort to read through every, or almost every, available answer and who hesitate to vote solely on the topmost one. – can-ned_food Mar 04 '18 at 20:12
  • @Blauhirn, there's also the possibility that the first answer brings an opinion that most people agree with, but hadn't necessarily thought of before. You then compare both and choose the one you prefer. There is also a reason this answer has risen to the top. – everyone Mar 05 '18 at 10:52
  • @everyone yes, because I accepted it. Beforehand, it was not upvoted especially much – phil294 Mar 24 '18 at 12:21
55

Because the UNIX-based security model is 50 years old.

UNIX underlies most widespread OSs, and even the big exception Windows has been influenced by it more than is apparent. It stems from a time when computers were big, expensive, slow machines exclusively used by arcane specialists.

At that time, users simply didn't have extensive personal data collections on any computer, not their university server, not their personal computer (and certainly not their mobile phone). The data that varied from user to user were typically input and output data of scientific computing processes - losing them might constitute a loss, but largely one that could be compensated by re-computing them, certainly nothing like the consequences of today's data leaks.

Nobody would have had their diary, banking information or nude pictures on a computer, so protecting them from malicious access wasn't something that had a high priority - in fact, most undergraduates in the 70s would probably have been thrilled if others showed an interest in their research data. Therefore, preventing data loss was considered the top priority in computer security, and that is adequately ensured by regular back-ups rather than access control.

Kilian Foth
  • 10
    People did have personal data at the time, mostly in the form of email. Not all of this was simply communication between colleagues. It was still protected by user permissions. I think the main difference between then and now is that people generally didn't connect their computers and download code from poorly trusted, or even malicious sources. This happens routinely now. – Steve Sether Feb 26 '18 at 18:21
  • 1
    @SteveSether even if the "people then didn't do **X** that we do now" explanation fails, the age of the security model is a valid reason. Indeed the attack surface is bigger now, as you accurately point out. – Mindwin Feb 26 '18 at 18:50
  • 2
    While this is partially correct, the real reason is that malicious root access allows you to (usually) compromise the kernel, which **allows you to bypass any protection mechanisms you may have for your home**, such as AppArmor. Also, the security model is more geared towards servers and mainframes, which actually do a lot of UID-based separation. – forest Feb 27 '18 at 03:28
  • 34
    The age is fundamentally not the problem. The problem is that the system *cannot* distinguish between a user intentionally executing a script that wipes their home directory and unintentionally executing one. It's the same old problem of, "You can't tell the user and the attacker apart." If it was that easy to come up with an answer, someone would already be pushing it. -1 for a terribly off-point answer. – jpmc26 Feb 27 '18 at 04:15
  • 2
    Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/73763/discussion-between-forest-and-jpmc26). – forest Feb 27 '18 at 05:30
  • 5
    @jpmc26 Except newer systems (iOS, Android, etc) _have_ already come up with an answer: fine-grained permissions for every app on your system. The reason Linux hasn't adopted that model is _because of its age_; it has decades of software built around this legacy security model that it has to support. (Including system software.) Windows has the same problem, for the same reason. Newer operating systems that have had a chance to start from scratch do not. – Ajedi32 Feb 28 '18 at 16:27
  • @Ajedi32 Previously addressed in the chat. – jpmc26 Feb 28 '18 at 16:48
31

This is a highly astute observation. Yes, malware running as your user can damage/destroy/modify data in your home directory. Yes, user separation on single user systems is less useful than on servers. However, there are still some things only the root user (or equivalent) can do:

  • Install a rootkit in the kernel.
  • Modify the bootloader to contain an early backdoor for persistence.
  • Erase all blocks of the hard disk, rendering your data irretrievable.

Honestly, I find the privilege separation on workstations most useful to protect the workstation from its biggest enemy: me. It makes it harder to screw up and break my system.

Additionally, you could always set up a cron job as root that makes a backup of your home directory (with, e.g., rsnapshot) and stores it such that it's not writable by your user. That would be some level of protection in the situation you describe.
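
A rough sketch of that idea (the schedule, paths, and rsnapshot configuration are assumptions; adjust to your distro):

```
# /etc/rsnapshot.conf (excerpt; fields are TAB-separated)
#   snapshot_root   /var/backups/snapshots/
#   backup          /home/          localhost/

# root's crontab (crontab -e as root): take a "daily" snapshot every night at 02:30
30 2 * * * /usr/bin/rsnapshot daily

# Keep the snapshot directory out of reach of ordinary users
# chmod 700 /var/backups/snapshots
```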

Obligatory xkcd

David
  • 3
    "It makes it harder to screw up and break my system" makes it sound like Windows is a much better OS in that regard - its hard to accidentally make a recent windows unbootable/broken, even with a priveledged account. Maybe because the OS software is so strongly separated from utility? – data Feb 27 '18 at 08:32
25

The original design of Unix/Linux security was to protect a user from other users, and system files from users. Remember that 30-40 years ago, most Unix systems were multi-user setups with many people logging into the same machine at the same time. These systems cost tens of thousands of dollars, and it was extremely rare to have your own personal machine, so the machine was shared in a multi-user login environment.

The design was never intended to protect a user or a user's files from malicious code, only to protect users from other users, users from modifying the underlying system, and users from using too many system resources. In our current era, where everyone has their own computer, the design has (mostly) translated into single-user machines that protect one process from hogging too many system resources.

For this reason, a user-executed program has access to any file the user owns. There's no concept of any further access control on a user's own files. In other words, a process executed as user A has access to read, modify, and delete all the files that belong to user A. This includes (as you note) autostart files.

A more modern approach may entail some form of further control on certain files. Something like "re-authentication required" to access these files, or perhaps some form of further protection of one program's files from another program's files. AFAIK there isn't (currently) anything like this in the Linux desktop world. Correct me if I'm wrong?

Steve Sether
  • 10
    _"AFAIK there isn't (currently) anything like this in the Linux world."_ - not counting Android of course. – user11153 Feb 26 '18 at 16:54
  • 3
    Not Linux, but OS X has "sandboxing" that can restrict the files that some applications can access. – Barmar Feb 26 '18 at 18:21
  • 4
    You can use snap or Qubes os which both offer their unique app isolations. – eckes Feb 26 '18 at 22:07
  • 5
    @Barmar Linux has that as well in the form of AppArmor (on Ubuntu) or SELinux (on Fedora). – forest Feb 27 '18 at 03:31
10

Is there no way to prevent malicious code from doing damage in $HOME?

To answer this question, what some installations do is make use of the existing security framework by creating a user specifically to run the program. Programs will have a configuration option to specify which user they should run as. For example, my installation of PostgreSQL has the database files owned by the user postgres, and the database server runs as postgres. For administrative commands of PostgreSQL, I would change users to postgres. OpenVPN also has the option to change to an unprivileged user after it's done using the administrative powers of root (to add network interfaces, etc.). Installations may have a user named nobody specifically for this purpose. This way, exploits of PostgreSQL or OpenVPN would not necessarily lead to the compromise of $HOME.
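
As a small illustration (the exact account and group names vary by distro):

```
# Administer PostgreSQL by switching to its dedicated account instead of using root
sudo -u postgres psql -c 'SELECT version();'

# OpenVPN server config: drop root privileges once the tunnel interfaces are set up
#   user  nobody
#   group nogroup
```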

Another option is to use something like SELinux and specify exactly what files and other resources each program has access to. This way, you can even deny a program running as root from touching your files in $HOME. Writing a detailed SELinux policy that specifies each program is tedious, but I believe that some distros like Fedora go halfway and have policies defined that only add additional restrictions to network-facing programs.
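
For instance, on a Fedora-style system you can inspect what the shipped policy already says about a confined program's access to home-directory content (the domain and type names below come from the reference policy and may differ on your system):

```
# Show the SELinux label (user:role:type) on your files
ls -Z ~/Pictures

# Ask the policy which file accesses the Firefox domain is allowed on user home content
sesearch --allow -s mozilla_t -t user_home_t -c file
```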

JoL
8

To answer the second part of your question: there are sandbox mechanisms, but they are not enabled by default on most Linux distributions.

A very old and complicated one is SELinux. A more recent and easier-to-use approach is AppArmor. The most useful for personal use (AppArmor and similar systems are mostly used to protect daemons) is Firejail, which isolates processes in their own jail.

Firefox, for example, can then only write to its profile directory and the Downloads directory. On the other hand, you will not be able to upload images unless you put them into the Downloads directory, but that is by design of such a sandbox: a program could delete your images or upload them to random sites, so the jail prevents this.

Using firejail is easy. You install it, and for programs which already have a profile (look into /etc/firejail) you can just run (as root) `ln -s /usr/bin/firejail /usr/local/bin/firefox`. If you are not root, or you want to pass command-line arguments to firejail (e.g. a custom path to the profile files), you can run `firejail firefox` instead.
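
A couple of illustrative invocations (the flags are standard firejail options; directory names are examples):

```
# Run Firefox under its stock firejail profile
firejail firefox

# Tighten things further: expose only ~/Downloads from your home directory
firejail --whitelist=~/Downloads firefox
```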

Software distribution systems like Snap and Flatpak add sandboxing mechanisms as well, so you can run an untrusted program installed from a random repository without too many consequences. With all these mechanisms, keep in mind that untrusted programs can still do things like sending spam, taking part in a DDoS attack, or messing with the data you process using the program itself.

allo
  • sounds like a purpose-oriented containerization (like OpenVZ, Docker..) – phil294 Feb 27 '18 at 15:23
  • It uses some of the techniques which are also used by docker. It has nothing to do with OpenVZ but is similar to LXC containers which are similar to OpenVZ. – allo Feb 28 '18 at 08:44
3

The presumption that the wrong data is being protected is false.

Protecting root activities does protect your vacation pictures from 2011. And mine, and your brothers', and everyone else's who uses the computer.

Even if you implemented an OS with a scheme that protected the home account by requesting a password every time an app tried to access a file, and removed root password protection, I would not use it because that would be worse for those vacation pictures.

If my brother compromises core system functionality on our home computer, then my vacation pics are deleted, ransom-wared, or whatever else despite your home directory protections, because the system itself is now compromised and can get around whatever user-level restrictions you implemented.

And most people would be very annoyed if they had to enter a password every time they chose File -> Open in their word processor.

Also, we have already seen the issue of access-control prompts appearing too often on home computers. When Microsoft first rolled out their UAC thing (for which you don't even need to enter a password if using the main account... all you need to do is press a button), it came up a lot, and people complained enough about the 0.5 seconds of their life wasted 20 times per day that Microsoft changed it. Now, this was not the kind of protection you're talking about, but it does show us that if people are unwilling to click a security button a few dozen times per day for Microsoft's system security, they're not going to want to click (or worse, type a password) for whatever gets implemented to protect their pics from that random app they just ran.

So the basic answer is:

  1. Protecting root does protect your personal pics.
  2. People complain about that type of authentication being asked too often.
galoget
Aaron
  • Microsoft is still trying to perfect the art of a user only affecting user-created documents (and not other users, installed programs, or the operating system). In Win10, "Installed Programs" now have their own protected directory for shared data as well: "\ProgramData" – MichaelEvanchik Feb 26 '18 at 22:08
  • 2) Yup, but Linux users are not Microsoft users. It says a lot about the community that they accept frequently entering an admin password for system changes. UAC is a good call actually, also see https://superuser.com/questions/242903/windows-uac-vs-linux-sudo (thanks) 1) root protects ~ data from other users and system misbehaviours, I agree. But it does not protect the data from malware run with the user's rights, which is admittedly an inconvenient thing to do – phil294 Feb 26 '18 at 23:44
  • 1
    @MichaelEvanchik Erm. WinNT has always had multi-user privileges, going back to the 90s. XP brought that into the consumer world, except the default account had admin privs. UAC only added more privilege levels within one user account (i.e. more granular). Specifically, `ProgramData` has existed in its current form since 2007 (Vista) and in previous incarnations (protected `All Users` subdirs) since at least 2002 (XP), probably earlier. If one wishes to have a more unix-like security model (including passworded UAC), one only needs to create a new non-Admin user ... which few people want. – Bob Feb 27 '18 at 01:12
  • Anyone who gives me an M$ computer, I make them a non-admin account, create an admin account with a password on a sticky note, and keep it for my records. Usually works out, except for the hopeless – MichaelEvanchik Feb 27 '18 at 14:56
  • @Bob NT 4 had separate Start menu folders for "All Users" (which could only be modified by a user belonging to the Administrators or Domain Administrators groups, as I recall) and per-user (which could be modified by each user, but were only accessible to that user). They were also visually separated. Here's a screenshot: http://toastytech.com/guis/nt4start.png from http://toastytech.com/guis/nt4.html. It looks like at least NT 3.51 had the same type of separation, and it's possible that it goes back even further, but NT4 is the first version of Windows NT that I have personal experience with. – user Feb 28 '18 at 07:23
  • Well, rather than ask for password for each file sounds like the common but irritating Security by Admonition, whereas Security by Designation is preferable, see e.g. [1]. With security by designation, the application can access the document if the user has selected it (in a trusted File open dialog or File manager). Likewise the application can write to the clipboard if the user has e.g. pressed Ctrl-C. Plash was an old attempt to implement this idea for the linux command line. [1] http://sid.toolness.org/ch13yee.pdf – gmatht Mar 02 '18 at 09:25
2

Other answers look at why *nix is as it is.

But it's worth noting that there is scope to do a little more than the "out of box" config for protecting the user's home directory, scripts, and files.

Most modern *nixes support POSIX ACLs or a variant, which can be configured to add the kind of granular access control the OP is looking for. You do have to set them up manually, and they don't try to distinguish access on any basis other than which user/group account is acting. But once set up, you can be very specific about which accounts can perform which actions on files, and gain at least some extra control by forcing yourself to use limited accounts for certain commands or files rather than one user account for everything. However, it will have a fairly tight practicality limit.
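
As a sketch of what that can look like (account names and paths are examples), suppose the photo archive is owned by a dedicated account and your everyday account only gets read access:

```
# Grant the everyday account read-only access (capital X adds traversal on directories)
sudo setfacl -R -m u:alice:rX /srv/photos

# Default ACL so newly created files inherit the same restriction
sudo setfacl -R -d -m u:alice:rX /srv/photos

# Inspect the resulting ACL
getfacl /srv/photos
```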

Stilez
2

One key point of securing root/kernel is forensic integrity. In the event that the domain containing your valuable data (desktop user with private documents and web authentication tokens, development server with confidential code/resources, webapp server with user database, etc.) is compromised, you still have an uncompromised domain from which to evaluate the compromise, determine what happened, develop a plan to defend against the exact same thing happening again, etc.

  • I was going to make a point about not being able to enter that environment, because as soon as you do `sudo su`, the malware will be able to get root as well, but then I realized you can just reboot into another user (among which root). Thought I'd post this for future readers who might think the same; I think you make a good point. – Luc Feb 27 '18 at 22:15
  • 1
    THIS. Of course something can mess up the data in $HOME - but it will have a hard time wiping its tracks. And in many applications (business more than home) surreptitious, undetected compromises are far worse than obviously destroyed or stolen data. – rackandboneman Feb 27 '18 at 22:26
1

Unix is not really a desktop system. It's a system running on a large computer which costs about as much as a house located somewhere in your university's basement. You, as someone who cannot afford his own computer, have to share the computer with two thousand others, and with several dozen users simultaneously for that matter.

Incidentally, you can nowadays also run a Unix-like system on your desktop computer or on a credit-card-sized SoC which costs $20.

In principle, however, Unix isn't designed for single users. The single user isn't important. What's in your home directory is your problem, but what root can do is everybody's problem. Therefore, only the few tasks that really require you to work as root should be done with that user, and preferably (to limit the time window during which you can do harm) not by logging in with that account, but by explicitly using sudo for the single commands that require it. There is a lot of religion in that as well, which is why some distributors are so darn arrogant as to threaten you when you type su rather than sudo for every single one of the 10 different apt commands you have to run to install some petty thing.

So you can erase all your personal photos without being root. That's right. Malware can erase all the stuff in your home directory, that's right. It can deny service by filling your disk until your user quota is reached, that is right. But from the system's point of view, that's just your problem, and nobody else cares. No other user is (in principle) affected.

Now, the issue with a modern single (or few) user system is that the bivalent logic security model is quite inapplicable, just like the "there's hundreds of users" idea.

Unluckily, it is very hard to come up with something better. Look at Windows if you want to see how to not steal an idea (they really managed to make a bad approach even worse).

Some web browsers and phone (or smart TV) operating systems attempt (and fail) at providing something better, and modern Linux has a more fine-grained system, too (but I wouldn't know how to properly set it up without spending weeks of my time).

The problem is that the bivalent security model assumes that normal applications do not require any privileges (which is wrong because some mostly-harmless things do require privileges) whereas non-normal applications require full access to the computer system (which is also wrong, almost no program needs full access, ever).

On the other hand, even finer-grained security models (which still are pretty coarse) make the wrong assumption that if an application requests a set of privileges, it really needs that complete set and the user is comfortable with granting it.

There is, to my knowledge, no system where an application can request the privileges A, B, and C, and the user can agree to granting A (but not B and C), and the application can then query what privileges it was given and decide whether it's able to perform the requested task or not.

Thus, you generally have the choice of granting XYZ-app "store data on permanent store" (which you're maybe OK with) and also allowing "access my location" and "access my personal data" or "install system driver" (which you're not OK with), or well, you can not run the program.
Or, you can allow XYZ-program to "make changes to your computer", whatever that means, or you can choose not to run it. And you have to confirm it again every single time, which, let's be honest, really sucks from a user's perspective.

schroeder
Damon
  • 2
    "Some phone operating systems attempt (and fail) at providing something better," Um what do you mean by that? Android did attempt at providing an application-wise isolation which works out really well. Also see other answers & comments – phil294 Feb 28 '18 at 14:17
1

Such privileges do not exist because they are inconvenient.

The goal of permissions, in general, is to prevent undesirable actions. The sorts of actions which root can do are far more insidious. A user-level ransomware app may be able to encrypt your files, but it can't hide the fact that it's doing it. When you find an encrypted file, it gets opened just like normal and reveals that it's been encrypted. Root-level ransomware, however, can hijack your entire filesystem and create the illusion that the files are not encrypted until the last moment, then forget the key, and bam! All your files become inaccessible at the same time.

Now obviously nowadays we don't log in as root. We use sudo. This is a form of role-based privilege. You don't have root privileges until you take on the role of "a user doing administrative tasks." Then you gain those privileges, until you finish the command.

One could create fine grained roles which have access to different folders. Perhaps you want "vacation photos" to be read only unless you enter the "adding/editing photos" role. This would be powerful, but taxing. As Aaron mentioned, Windows' UAC was widely panned for wasting precious seconds asking for permission instead of just doing things. Your computer would need to ask permission more often if it had to switch roles to protect your data. Users have generally not found this to be worthwhile, so it's not supported.

(If you were interested in such capabilities and willing to use sudo to do them, you could create a separate partition which could be mounted ro or rw, depending on what you want to do, and store your photos there).
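
A rough sketch of that partition trick (the mount point is an example):

```
# Normally keep the archive mounted read-only...
sudo mount -o remount,ro /mnt/photos

# ...and flip it to read-write only while deliberately adding or editing pictures
sudo mount -o remount,rw /mnt/photos
```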

One of the hardest tasks to deal with in these cases is the granularity of roles. If the user enters one role or another, that's pretty easy to handle. It is harder, however, to handle the case where a particular application needs to enter that role. Maybe Firefox isn't allowed to write to your photos, but GIMP is allowed to. This is tricky because where there are boundaries, you can't have coherent seamless integration. What if Firefox takes advantage of a GIMP plugin to do photo editing? The only way to prevent Firefox from doing so is to prevent it from talking to GIMP.

I'm assuming you have some experience with Windows. Did you ever wonder why the screen goes dim when the UAC comes up? It's actually not for visual confirmation that you're doing something special. It's much more important than that. The windows above the dimmed part are part of a different screen, isolated from the windows below. Why is this important? Well, it turns out that any window on a screen is allowed to manipulate any other window on that same screen. If the UAC came up on the same screen as the installer program asking for permissions, the installer could literally just get a handle to the UAC window and click OK for you! That would certainly defeat the purpose of such a prompt. The solution is that the UAC is provided on a different screen, so no other application can click OK on its own. The only way for OK to be clicked is if the user moves the mouse and clicks it. The darkening is really just there to show you that you can't interact with any of the windows below it while the UAC screen has control of the keyboard/mouse.

So that's the level of effort that has to be gone through for isolation. It's not easy. In fact, it's hard enough that it might make sense for you to protect your key data by having multiple users accounts, and giving each one different access to the data. Then you could use the switch-user capability to switch between them. This would provide the kind of isolation you need to do decent role based privileges.

Cort Ammon
0

Is there no way to prevent malicious code from doing damage in $HOME?

Assuming you have /home as an individual mount point on its own partition, you can simply edit /etc/fstab and add the noexec flag to the mount options. This disables all execution of code on that partition, with the downside that code within ~/bin (which some per-user installs create) also won't run anymore. This nevertheless won't stop an executable located elsewhere in the filesystem from wiping out /home.
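
A minimal sketch of such an fstab entry (device and filesystem type are assumptions):

```
# /etc/fstab
/dev/sda3   /home   ext4   defaults,noexec,nosuid,nodev   0   2
```

On most systems, `sudo mount -o remount /home` then re-reads the fstab options without a reboot.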

SELinux is a fully fledged security system which can isolate quite nicely, while AppArmor is just for application isolation. Android's underlying Linux has featured it since about version 5.0. On Windows, "protected folders" were recently introduced, which are much the same (a complete rip-off). The downside here is that when manually installing something new, one often has to label files and/or set the proper flags to permit it, since the context under which something runs matters most there. People often suggest disabling it simply because they do not understand how to handle it; that is not exactly the point of choosing an SELinux distribution for a deployment.

  • Note that `noexec` can be bypassed by using an interpreted language like Python or Bash. – forest Mar 02 '18 at 23:30
  • Those are not installed in /home... If one wants to go nuts, one can mount a network share without any access to the local FS; you might complain "one can bypass that with SSH"... SELinux ordinarily rejects script execution in the home directories by default, unless a flag is set... or simply run KVM for proper isolation. –  Mar 03 '18 at 17:38
0

I'm surprised that nobody has mentioned the following:

One of the core reasons to not login to a desktop machine as root is because many activities will change the ownership of files to become root. This commonly means that "normal" users will be unable to run many applications because they can't read the default configuration file that was created by root.

One of the core reasons for not logging into a server machine as root is that, from an audit/traceability perspective, it's better to have someone log in as themselves and then execute commands as root, so that there is some auditable log indicating that 'user1' logged in and then switched to root, rather than only knowing that 'root' logged in from IP address 10.2.1.2, which may be a generic terminal available to many people. On a server machine it's more common to have limited sudo access to many commands, to keep the actions executed by an administrator more auditable and traceable.
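
For illustration, a narrow sudo rule of that kind might look like this (the group and command are made up for the example; always edit with visudo):

```
# /etc/sudoers.d/webadmins
# Members of the webadmin group may restart the web server as root - and nothing
# else - and each invocation is logged under their own username.
%webadmin ALL=(root) /usr/bin/systemctl restart nginx
```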

As with all root activities, you can do what you want; just remember that the more you do, the bigger the gun pointed at your foot, and it only takes one wrong command to pull that trigger.

`rm -rf {uninitializedVariable}/*`
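
A hedged sketch of how to defuse exactly that footgun in a shell script (the variable name is an example):

```
#!/bin/bash
# Abort on any use of an unset variable instead of silently expanding it to ""
set -u

# Or fail loudly at the point of use, so this can never collapse into "rm -rf /*"
rm -rf "${backupDir:?backupDir is not set}"/*
```
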
millebi
  • "because many activities will change the ownership of files to become root...applications...can't read the default configuration" - Such as? `/etc` is world-readable by default. – AndrolGenhald Mar 02 '18 at 17:49
  • Anyone who runs `rm -rf` with a `*` anywhere in the argv deserves to lose their data. And @AndrolGenhald is right. Usually root-owned files are readable if created with a umask of 022 (the default). Btw, `set -u` is a thing. Use it. – forest Mar 02 '18 at 23:33
  • @AndrolGenhald many applications supply helper scripts that set "stupid" permissions (e.g. 700) which then make the files unreadable by other users. It's stupid, I agree, but I've seen it happen too many times to count. forest I agree! Some SA's change the default on secure systems to a different umask :( which usually bites them later. (Stupidity is billable) – millebi Mar 09 '18 at 03:33