2

Here is a question that I can't seem to find a good answer to.

Under Windows/DOS we have been "taught" that it is OK to launch programs from the current directory without having to prefix the path.

However, the default behavior on Linux is that you must prefix ./ to run applications/scripts in the current directory.

People say that it is a bad security practice to allow programs/scripts to execute without prefixing the directory. There are a lot of "bad security practices", but this one doesn't make sense to me. For example, I am not going to enter /some/unfamiliar/directory and start executing programs like ls there. I would cd (to get to my home directory) and then type ls -la /some/unfamiliar/directory. Of course, on a bad day I would.

I can see the "threat" if someone puts a program named ls in a directory, a bad sysadmin cds into that directory and runs ls as root, and that ls adds a backdoor user and does other evil things. Fine. But usually, in that case, a user would first complain "oh, ls isn't working, can you look into that?" — and I would sudo su to that user and try ls as him.

Unless I am completely missing something, is adding "." to your PATH really that bad?
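For anyone who has only seen the Windows behaviour, here is a quick sketch of the difference (the directory and file names below are made up for illustration):

```shell
# Create a throwaway executable in a scratch directory.
mkdir -p /tmp/dotpath-demo && cd /tmp/dotpath-demo
printf '#!/bin/sh\necho "local hello"\n' > hello
chmod +x hello

hello 2>/dev/null || echo "not found"   # default Linux: "." is not searched
./hello                                 # the explicit ./ prefix works

PATH="$PATH:."                          # the practice in question
hello                                   # now resolves via the "." entry
```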

Edit:

All the answers so far are fine for pointing out the security risks — however, everyone is missing the arguments for Windows. Under Windows the current directory is NOT in your path, yet programs there run anyway, so there is no way to disable just running program.exe, as far as I know. The command prompt does, however, accept prefixing the command with .\ (./ does seem to work too, but you should use \ or you could end up with an escape sequence of some kind). Under Windows, should we always prefix the command? Also, should we contact Microsoft and tell them they are bad for implementing this expected behavior?

My background is Windows, so I am biased toward expecting programs in the current directory to be able to run, even though I know that isn't how it works in the Unix world. That is why I posed the question here.

To expand on what I mean by "bad security practices" (and why they are in quotes): we could cite millions, if not billions, of security practices that people SHOULD be following... but we don't, and if we did, a person would certainly go insane. Should we mitigate every security risk, even at the potential cost of interfering with the user's experience? I say no, because I believe security should be transparent to the user(s), but I like the first sentence of haus's response: "As much as anything else it is a mindset." And I think we should leave it at that.

Natalie Adams
  • 745
  • 1
  • 6
  • 15

5 Answers

7

As much as anything else it is a mindset.

By having your current directory as part of your path you are indeed increasing your risk. Just because a trojan program has been executed does not mean it won't also do what you expect of it. In other words, to use your ls example: unless it was designed by a dunce, the trojan would indeed provide the directory listing you requested, so unless you saw the file sitting there, you would have no reason to suspect that something had gone wrong (perhaps it would be smart and remove itself from the listing it produces, to lower the chance of detection even further).

For this to occur, it would mean that someone has already infiltrated a given system to the point that it could place files onto your file system. Thus you are already in a bad way, but there is still room for things to be made worse.

As a general practice, I do not place my current directory in my path on my linux/unix systems, and it is not a great hardship. When I write a program or script that I wish to use frequently, I place it into a folder that is in my path, and restrict the write permissions to said file.
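What that practice looks like in shell terms (the `~/bin` location and the tool name are just examples, not a requirement):

```shell
# Keep personal tools in a directory that is on PATH and writable only by me.
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho "my tool"\n' > "$HOME/bin/mytool"
chmod 755 "$HOME/bin/mytool"    # rwxr-xr-x: others may run it, not modify it

# Add ~/bin to PATH only if it is not already there.
case ":$PATH:" in
  *":$HOME/bin:"*) ;;
  *) PATH="$HOME/bin:$PATH" ;;
esac

mytool    # found via PATH, no ./ needed
```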

Short answer: will adding the current directory to your path lead to certain doom for an unprivileged user? Likely no, but why take the chance?

For root, DO NOT DO IT.

haus
  • 104
  • 1
3

For root it's deadly. For other users, the reason I would recommend not adding "." to the path (besides security) is that it will lead you to weird and confusing problems: processes that search the path might start to take widely varying amounts of time, or might find a different program of the same name. /usr/ucb on Solaris, for example, contains the BSD version of the ps command.

Make a new filesystem, populate it with a huge number of files, and then, with "." in your path, try doing a bash completion for "grep". It will take longer, because . has to be searched and there are a million files in there. "." could also be on some painfully slow medium, like a very distant NFS mount or somesuch. You might get away with it and never see it cause a problem, but when it does cause an issue, the root cause might not be immediately obvious. I've seen many performance problems in the distant past caused by very long PATHs, PATHs not being the same across different environments, and so on.

My 2d: on your own box with your own user, go for it if that makes life easier. In production, keep dot out of the path.

gm3dmo
  • 9,632
  • 1
  • 40
  • 35
  • "on your own box with your own user, go for it if that makes life easier": It's better to develop good habits. Don't do this for the same reason you don't *ever* alias `rm` to `rm -i`. – Dennis Williamson Dec 31 '09 at 23:43
1

As mentioned, the most obvious security risk is just having some malicious code in the current directory which supplants an existing executable. Whether this is exploitable or not really depends on how strict you are on your "Don't run commands from an unvetted directory" policy.

However, a more subtle and potentially more serious problem that may occur is that the resolution of commands is no longer predictable. For example, if you're used to using runcommand by just typing runc<tab> in BASH, this will work in most directories but will fail if you just happen to be in a directory with an executable runcom. Again, this can be circumvented with just being aware of exactly what you're doing, but in my experience after typing runc<tab> fifty times without issue, the fifty-first time can easily slip under the radar.
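A contrived sketch of that shadowing effect (the directory name is invented; shadowing `ls` just makes the effect easy to see):

```shell
# A local file named "ls" shadows the real one when "." is first in PATH.
mkdir -p /tmp/shadow-demo && cd /tmp/shadow-demo
printf '#!/bin/sh\necho "not the real ls"\n' > ls
chmod +x ls

PATH=".:$PATH"
hash -r          # forget any cached lookup of "ls" (the shell remembers paths)
ls               # runs ./ls, not /bin/ls
command -v ls    # shows which file the shell will actually execute
```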

The annoying part about this is that it involves no malicious intent. There's no virus infecting your system, no trojans planted surreptitiously into your home directory. This is just two different commands, in two different directories, which do two different things. runcom may quite legitimately just need to delete half the files from your home directory, but that doesn't mean you want to run it accidentally.

This can be even more dangerous if you're dealing with scripts. If you run a typical shell script from the command line, it will adopt your $PATH variable. Similar to the above example, this means that script behaviour may be different if you run it from different directories. You may end up running a different version of any given command if you're in, say, /usr/bin instead of using the /usr/local/bin version. Or, alternatively, you can end up having commands available that normally would flag a legitimate error when missing -- one of the few things worse than an important script failing unexpectedly, is said script failing unexpectedly and thinking it's succeeded. Well-written scripts get around that by either setting their own $PATH, or by explicitly calling commands preceded with the full pathname.
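One common defensive pattern (just a sketch, not the only convention) is for a script to pin its own PATH at the top, or to call critical binaries by absolute path:

```shell
#!/bin/sh
# Refuse to inherit whatever PATH the caller had; use a known-safe one.
PATH=/usr/bin:/bin
export PATH

# Alternatively, call critical binaries by full pathname:
/bin/ls / >/dev/null || exit 1

echo "running with PATH=$PATH"
```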

Not all scripts are well-written.

A lot of these issues can be dealt with just by setting the current directory as the last point of resolution in the path instead of the first (PATH=$PATH:. rather than PATH=.:$PATH), but this is still not a perfect solution. Seeing as explicitly prefixing a command with ./ to run the local directory version is only an extra two keystrokes anyway, I'll stick with that.

goldPseudo
  • 1,106
  • 1
  • 9
  • 15
1

Clever attacks, although probably rare, can combine multiple techniques: starting from a non-privileged environment, a fake executable found via a "." entry in PATH inserts an alias into the environment (using a technique similar to my question/answer here), which then follows you into a privileged environment.

  1. The non-privileged user's path contains "." - the current directory
  2. A malicious executable with a common name is created in a user-writable directory
  3. Now someone enters that directory with cd and executes the malware thinking they are executing their own handy program
  4. The malicious executable creates a hidden alias to su by writing to the user's ~/.bashrc
  5. It also creates a malicious rc file with an unremarkable name (we'll call it "badrc")
  6. Someone logs in, thus executing the alias definition which does something like alias su='sudo bash --rcfile /path/to/badrc'
  7. The user does su - and gets a shell prompt and starts using commands that were aliased in "badrc" using hidden alias techniques
  8. Profit (for the attacker)

This is certainly a convoluted sequence and other variations are possible, but why leave yourself open? Omitting "dot" from your path (plus tight control over sudoers and maintaining good habits) raises the bar for attackers.
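A quick way to audit for the first precondition (a sketch; note that an empty PATH entry also means the current directory):

```shell
# Report whether a PATH value would search the current directory.
check_path() {
  case ":$1:" in
    *":.:"*|*"::"*) echo "unsafe" ;;   # "." entry, or an empty entry
    *)              echo "ok" ;;
  esac
}

check_path "/usr/bin:/bin"       # → ok
check_path "/usr/bin:.:/bin"     # → unsafe
check_path "/usr/bin::/bin"      # → unsafe (empty entry = ".")
```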

Dennis Williamson
  • 60,515
  • 14
  • 113
  • 148
-1

Along with the "current directory is on a slow remote filesystem" problem comes this one: if . no longer exists for some reason, you can no longer run any commands other than shell builtins, which is very confusing. The working directory might have been deleted, the filesystem containing it might have been force-unmounted, or it might live on a server that has shut down.

Andrew McGregor
  • 1,152
  • 7
  • 4