I have a very important file which an application in my workplace uses. I need to make sure it is never deleted, whatsoever. How can I do that?
-
Make a backup, so you can restore it ... Other than that, `chattr +i` might help, but it will make the file read-only as well (and can be overridden with `chattr -i`); you can also try to protect it with SELinux etc. – Sven Dec 02 '14 at 16:02
-
*Can root create a process that even root can't kill?* – Zaenille Dec 03 '14 at 00:23
-
It would probably be better to ask it as a new question. – Itai Ganot Dec 03 '14 at 01:02
-
After reading the answers and your responses I think you've found what you need, so I'll only add a pointer: a similar question over at Unix & Linux has some different answers, including SELinux (a Linux Security Module that can override root, in the mainline kernel since 2.6): http://unix.stackexchange.com/questions/73768/how-do-i-stop-the-root-user-from-deleting-a-file – ǝɲǝɲbρɯͽ Dec 03 '14 at 23:01
-
@MarkGabriel Yes. A fork bomb. :) – reirab Dec 04 '14 at 14:07
-
[God, Root, what is difference?](http://ars.userfriendly.org/cartoons/?id=19981111) – Dan Is Fiddling By Firelight Dec 04 '14 at 22:10
-
The HW admin may come and remove the disk, shred it, burn the remnants and feed them to hogs. Or, better, some C(++) programmer may induce some nasal demons. __Whatever is important to you, back it up. Twice.__ – Pavel Dec 05 '14 at 11:50
-
Can the file be stored on a different machine? – Nicolas Raoul Dec 06 '14 at 07:33
-
Why give root access to people who randomly remove important files? Why not give them user accounts with the rights to do anything that they need to be able to do? – Kasper van den Berg Dec 07 '14 at 09:39
-
The only real way I know of to prevent root from doing anything is going for SELinux and sudo. This is highly dangerous and requires complex configuration. You will also probably lose support for your OS in the process. I guess root will always be able to overcome any security put in place. The chattr method is alright, but root can undo it easily. If you need extreme security, SELinux is there for you; otherwise, I think there is no good reason to actually prevent root from doing anything. Just my personal opinion, not judging... – trox Dec 08 '14 at 15:46
-
It would be great if there were file dependencies for applications in `Linux`, the way installed packages depend on other packages: removing one such package also removes its dependents. Similarly, removing such a dependent file could prompt `root` to uninstall the related application. – Nitinkumar Ambekar Sep 07 '15 at 11:45
9 Answers
Yes, you can change the attributes of the file to make it immutable.
The command is:
chattr +i filename
And to disable it:
chattr -i filename
From `man chattr`:

A file with the `i` attribute cannot be modified: it cannot be deleted or renamed, no link can be created to this file and no data can be written to the file. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
-
Do note that a user with root access can unset that flag and then delete the file. That is unlikely to happen by accident, but it does not protect against intentional deletion. – Grant Dec 02 '14 at 16:42
-
@Grant, not if the [Securelevel](https://en.wikipedia.org/wiki/Securelevel) is set high enough. The boot process sets the securelevel to 2 before the network is enabled, so resetting the flag requires local machine access (but this means that files used in the boot process before that time need to be immutable as well). – Simon Richter Dec 02 '14 at 17:38
-
@SimonRichter that seems to only apply to BSD kernels. Is there an equivalent in linux? – Grant Dec 02 '14 at 18:10
-
@Grant If one wants to take it to the extreme, you can't prevent that the partition is deleted, or the disk is put into a furnace, or protons decay in 10^30 years ... – Hagen von Eitzen Dec 02 '14 at 21:27
-
Please note that whatever the rights on the file are, if someone has the "w" right on the directory that holds that file, that person can delete/rename the file (in a simplified way: a filename is just a link to an inode, written in the directory's entry. You can unlink (rm) it if you can edit that directory's entry (= if you have the w right on the directory itself). When a file doesn't have any remaining links to it, it's "deleted", but open fds to that file are still usable until all fds to it are closed; the filesystem usually only "frees" the space when all links & all fds to the file are gone) – Olivier Dulac Dec 03 '14 at 12:21
-
... I actually +1 this, I didn't know about `chattr +i file`. It's Linux-specific, but good to know. – Olivier Dulac Dec 03 '14 at 12:33
-
Please note that even setting securelevel does not prevent clobbering the file by opening the disk device and writing to it. – joshudson Dec 03 '14 at 17:30
-
@SimonRichter, no, there is no such thing in Linux as securelevel. There were some patches proposed a year or so back to add one, but that was about secure boot and had nothing to do with the immutable flag. – psusi Dec 03 '14 at 18:36
-
@Itai Ganot man I wish I had read it 4 days ago. It was a question in an examination I took =/ – vfbsilva Dec 03 '14 at 20:41
-
Given the comment by the OP on Kevin's reply, this answer looks ideal - the OP is asking how to prevent accidental deletion by a user with root access, not how to prevent a malicious superuser causing problems – Jon Story Dec 04 '14 at 16:03
-
@psusi, it appears that I am very old. [2.0.40](http://lxr.free-electrons.com/source/kernel/sched.c?v=2.0.40#L49) still had it. – Simon Richter Dec 05 '14 at 03:05
-
@SimonRichter, neat... I thought the subject tickled my old memory too but when I went digging I found it was not in the current sources and only found references to the EFI secure boot patches on the web. I'm tempted to check out the historic git repo and see why it was removed so long ago. – psusi Dec 05 '14 at 04:06
-
```chattr +a dirName``` will allow files to be added to the directory but not deleted. I found it useful for investigating a software bug, as the intermediary files were being deleted before I could analyse them – Bastion Oct 19 '21 at 06:38
Burn it to a CD. Put the CD in a CD-ROM drive and access it from there.
-
+1 for thinking out of the box. And, afaik, it has also been used before in some circumstances (black-box cdrom drive with cd in it shipped to its destination). It may not be appropriate if someone is able to disconnect the drive, anyway. – Alex Mazzariol Dec 04 '14 at 16:49
-
I think that's the correct answer to this question. Changing the file attribute (chattr -i) can't prevent malicious actions. – Bruno von Paris Dec 05 '14 at 08:23
-
These days a full-size SD card in a built-in card reader may be a better solution - lower power consumption, faster access in many cases and more durable in no-write use. – Chris H Dec 05 '14 at 09:53
-
For extra safety, a bit of concrete can be used to seal the drive and the rest of the computer case (on the edges for both). – haneefmubarak Dec 08 '14 at 02:21
-
@ChrisH write protect on SD is software based. The tab tells the reader it wants to be protected, but it can be overridden. – JamesRyan Dec 08 '14 at 13:31
-
@JamesRyan, I wonder who thought that was a good idea. However it may still be useful to protect against an ingenious fool (after all, someone could eject the CD and later throw it in the bin -- without any malice). – Chris H Dec 08 '14 at 13:42
-
@ChrisH Careful -- I went down that road, basically asking that general type of question and offering an alternative, and the whole thing *very strangely* and in what I thought was very silly fashion went up in flames. It's cool, whatever. The underlying idea is somewhat interesting despite its flaws, though, and I offered up a variation on the general theme [here.](http://serverfault.com/a/649736/169529) – Craig Tullis Dec 08 '14 at 23:46
-
@ChrisH you don't have to put the CD-ROM drive into an external 5.25" bay. Tape it to the case floor and your ingenious fool would have to open the case. If he got that far, he could just rip out every storage medium possible, or destroy the computer to get rid of the file... – Alexander Dec 09 '14 at 13:31
-
Anything is possible if you have physical access to the machine. This is not implied in being root. Please do not comment any more on what funny things are possible if the user with root rights can touch the hardware. Thank you. – Thorbjørn Ravn Andersen Dec 09 '14 at 13:41
-
@ThorbjørnRavnAndersen as originally suggested, your solution (which I like) doesn't preclude the user ejecting the drive even without physical access. Gluing the drive shut may or may not help. – Chris H Dec 09 '14 at 19:22
- Create a file system image.
- Mount the image.
- Copy the file to the mounted image.
- Unmount the image and remount it as read-only.
- Now you can't delete it.
Example:
# dd if=/dev/zero of=readonly.img bs=1024 count=1024
# mkfs.ext2 -F readonly.img
# mkdir readonlyfolder
# mount -o loop readonly.img readonlyfolder/
# echo "can't delete this" > readonlyfolder/permanent.txt
# umount readonlyfolder
# mount -o loop,ro readonly.img readonlyfolder
# cat readonlyfolder/permanent.txt
can't delete this
# rm readonlyfolder/permanent.txt
rm: cannot remove `readonlyfolder/permanent.txt': Read-only file system
-
`mount -o remount,rw readonlyfolder/ && rm readonlyfolder/permanent.txt` – Kaz Wolfe Dec 03 '14 at 23:08
-
Taking this a bit further, you can use `squashfs` or `cramfs`, which are compressed and read-only. It needs a special tool to build the filesystem. – Zan Lynx Dec 05 '14 at 17:44
-
This may be a stupid question but what's to stop someone deleting readonly.img when it is not mounted... (if necessary dismounting beforehand)? – mike rodent Jan 25 '20 at 11:23
You should create multiple hard links to the file as well. These should be in various locations that regular users can't access.
This way, even if they do manage to override your chattr protection, the data will remain and you can easily restore it where your application is looking for it.
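The idea above can be sketched with plain `ln` (all file and directory names here are made up for illustration; the hidden directory stands in for a location regular users can't access):

```shell
set -e
tmp=$(mktemp -d)
echo "critical data" > "$tmp/app.conf"        # the important file
mkdir -p "$tmp/.vault"
chmod 700 "$tmp/.vault"                        # a location only the owner can browse
ln "$tmp/app.conf" "$tmp/.vault/app.conf"      # second hard link to the same inode
rm "$tmp/app.conf"                             # someone "deletes" the visible name
cat "$tmp/.vault/app.conf"                     # the data is still reachable
```

Because both names point at the same inode, the data is only freed once the last link (and last open file descriptor) is gone, so restoring is just another `ln` back to the expected path.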
-
However they will provide additional protection from DELETION, which was the original question. – barbecue Dec 05 '14 at 20:28
-
@barbecue If the file is unlinked at the name an application looks for it at, it doesn't matter that the file's content exists under some other name. For anything looking for the file with the expected name, the file still has been deleted. – user Dec 08 '14 at 12:36
-
I think this is a rather good solution, possibly as one of several simultaneous safeguards. As barbecue says, the question was specifically about deletion. Secondly the consequence of someone discovering that the file is not present where it is expected to be bears no relationship to whether it is deleted. Scenario: "OMG, the file is gone!", response "Check the other hard link(s)" – mike rodent Jan 25 '20 at 11:27
Linux has a so-called bind-mount option, which is a rather powerful and useful feature to know:
% cd $TMP && mkdir usebindmountluke && cd usebindmountluke
% echo usebindmountluke > preciousfile
% sudo mount -B preciousfile preciousfile
% sudo mount -oremount,ro preciousfile
% echo sowhat > preciousfile
zsh: read-only file system: preciousfile
% rm preciousfile
rm: cannot remove ‘preciousfile’: Read-only file system
— what's being done here is bind-mounting the file to itself (yes, you can do that in Linux), then re-mounting it read-only. Of course this can be done to a directory as well.
Others have answered your question as you've asked it. As @Sven mentioned in a comment, the general solution to the question "How do I make sure I never lose a file?" is to create a backup of the file. Make a copy of the file and store it in multiple places. In addition, if the file is extremely important and your company has a policy of backing up important data with a backup service, you might look into having this file included in that service.
-
Well, of course the file is being backed-up regularly, I just wanted another layer of protection against users which are sometimes working on the box with root user permissions. – Dec 03 '14 at 08:37
On Linux the immutable flag is only supported on some types of file system (most of the native ones like ext4, xfs, btrfs...).
On filesystems where it's not supported, another option is to bind-mount the file over itself in read-only mode. That has to be done in two steps:
mount --bind file file
mount -o remount,bind,ro file
That has to be done at each boot though, for instance via /etc/fstab.
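A minimal sketch of such an fstab entry (the path is hypothetical; recent util-linux versions apply the `ro` to a bind mount in one line, while older ones need a separate remount after boot):

```
# /etc/fstab — bind the file over itself read-only at boot (sketch)
/srv/app/critical.conf  /srv/app/critical.conf  none  bind,ro  0  0
```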
In a comment to the answer by Kevin, Jerry mentions:
Well, of course the file is being backed-up regularly, I just wanted another layer of protection against users which are sometimes working on the box with root user permissions. –
I'm going to assume that you can't change this practice, as it's a really, really bad idea.
All of the suggestions about using a read-only device have the same problem -- it makes it a PITA for you to make legitimate changes when you need to. In the case of a lockable drive, such as an SD card, you run into the problem that you're suddenly vulnerable when you unlock it to make your changes.
What I would recommend instead is setting up another machine as an NFS server, and sharing the directory with the important files to the machine(s) that the users have root on. Share the mount as read-only, so that the machines with users you don't trust can't make any modifications. When you need to legitimately make changes, you can connect to the NFS server and make your changes there.
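As a rough sketch, the server-side export and the client mount for such a setup might look like this (host names and paths are made up):

```
# /etc/exports on the NFS server
/srv/critical  appserver.example.com(ro,root_squash,no_subtree_check)

# /etc/fstab on the machine where users have root
nfsserver:/srv/critical  /srv/critical  nfs  ro,nosuid  0  0
```

`ro` on the export means even local root on the client can't write through the mount, and `root_squash` (the default) maps remote root to an unprivileged user on the server.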
We use this for our webservers, so that a successful exploit against the webserver won't be able to insert or change any files that the server would then serve back out, or change the configuration.
Note that this can still be bypassed in the same way that all of the mount-point related ones can be:
- Make a copy of the protected directory
- Unmount the directory
- Move the copy in place of the mount, or symlink it in if that mount doesn't have sufficient space.
-
Why is it a "really, really bad idea" to back up an important file regularly and also make an effort to protect the original against accidental deletion? In the OP's original question, and from the OP's comment on the answer you referenced, it is clear that the concern is not malicious activity, but accidental/incompetent activity. – Craig Tullis Dec 07 '14 at 19:54
-
@Craig : it's a bad idea to have lots of users with root, especially if they aren't trusted not to mess with critical files. – Joe H. Dec 08 '14 at 07:36
-
Ah... well of course it is. :-) But that wasn't the crux of the OP's question. The OP asserted that there *are* users with root access who should be protected against accidentally deleting a file. – Craig Tullis Dec 08 '14 at 08:36
-
@Craig : it might not be the crux of the question, but it *is* the crux of the problem (XY problem?) ... but I have no idea what they're doing as root, so I don't know whether they could make use of setuid and/or limited sudo privileges. And you should re-read the question, as I see no mention by Jerry that he's only trying to protect against unintentional removal ("i need to make sure it is not delete whatsoever"), and he only gave one follow-up that I see (which triggered my response). – Joe H. Dec 08 '14 at 08:46
-
[See the OP's response to this answer](http://serverfault.com/a/648758/169529) – Craig Tullis Dec 08 '14 at 09:06
-
I don't disagree that having everybody running around as root is a big issue, just not the issue the OP is asking about. ;-) And for the record, I really like the notion of putting the file(s) on a separate NFS server that a root/sudo user on the local host doesn't have root privileges to monkey around with. I'm mostly on your side, here. – Craig Tullis Dec 08 '14 at 09:07
-
@Craig : you mean the respone that I quoted in my own answer? He said he was looking for an extra level of protection ... he never said if he was trying to protect against accidental deletion or malicious users. And my answer still stands for the generic case. It's also a good idea to run something to monitor critical files (puppet, cobweb, etc.) You seem awfully eager to put down my answer when your response is one that specifically has the problems that I'm trying to make sure people avoid. – Joe H. Dec 09 '14 at 15:06
-
For crying out loud, I UPVOTED your answer. So out of the 2 upvotes you've received, one of 'em was from me. What part of *'And for the record, I really like the notion of putting the file(s) on a separate NFS server that a root/sudo user on the local host doesn't have root privileges to monkey around with. I'm mostly on your side, here.'* sounds like I'm eager to put down your answer? – Craig Tullis Dec 09 '14 at 16:58
-
I meant the OP's comment: *'Well, of course the file is being backed-up regularly, I just wanted another layer of protection against users which are sometimes working on the box with root user permissions'*, which clearly indicates the issue being addressed is not malicious mischief, but an extra little layer of protection against, essentially, incompetence. Nice ignition, though. ;-) – Craig Tullis Dec 09 '14 at 17:00
Why not create an ISO 9660 image, which is read-only by design?
Mount the ISO image, and it'll look like a CD-ROM, but with the performance of a hard drive, and files on the mounted image will be just as safe from deletion as files on a physical CD-ROM.
The idea of burning the sensitive file to a CD and running it from a CD-ROM is interesting, assuming that setting the immutable bit on the file isn't deemed sufficient.
There are potential negative issues with running it off a physical CD, including performance (CD-ROM drives are much, much slower than hard drives or SSDs). There's the likelihood of the CD-ROM being removed by a well-meaning individual and replaced with a different disc that they need access to. There's the likelihood of a malicious party just taking the disc out and tossing it in a microwave (or the trash), thus "deleting" your file. There's the inconvenience of having to have a dedicated hardware CD-ROM drive just for that one file, and other factors.
But the OP made it clear that the primary intent is to protect against accidental deletion, not against malicious acts, and that the file(s) in question is backed up and recoverable should an accident occur, but it is highly desirable that the file never be accidentally deleted.
It seems that running the file from a mounted ISO image would satisfy the requirement.
-
Root can still delete a file by manipulating the image directly. It is just a normal file which happens to be mounted. – Thorbjørn Ravn Andersen Dec 09 '14 at 00:24
-
@ThorbjørnRavnAndersen How so? ISO 9660 by design is immutable. The party making that change would have to delete and replace the entire ISO file. Not that they couldn't do that. But they couldn't go in and surgically delete one file without tremendous expertise, if even then. It would be much easier to remove a physical CD-ROM from a drive and toss it in a dumpster. ;-) – Craig Tullis Dec 09 '14 at 00:29
-
No need to be sophisticated - just overwrite the image file with zeros. – Thorbjørn Ravn Andersen Dec 09 '14 at 00:51
-
@ThorbjørnRavnAndersen I'll concede that point easily enough. The caveat is that it would require intentionally dismounting the image and overwriting it. A thorough perp would just `shred` it at that point. But unless you are denying physical access to the machine, it still seems easier to just pop a physical CD out of the drive and toss it in the dumpster than to dismount and overwrite the ISO file, although either is easy. And the OP has stated that the important file is backed up on a regular basis, so this is just an extra measure against accidental damage, not against malicious mischief. – Craig Tullis Dec 09 '14 at 01:15
-
I've pointed out how to change an ISO9660 image even if it is supposed to be unchangeable. My point is that if a bit is writable at all, root can write it. – Thorbjørn Ravn Andersen Dec 09 '14 at 12:43
-
And you do not have to dismount an image to change its content... – Thorbjørn Ravn Andersen Dec 09 '14 at 15:50
-
ISO 9660 IS IMMUTABLE. If you're going to insist that you can make changes willy-nilly to a mounted ISO 9660 image, please provide documentation/proof of the same. – Craig Tullis Dec 09 '14 at 16:51
-
To get a CD-ROM (physical *or* virtual) that you can change, you need something like ISO 13490 (multi-session) or ISO 17341 (rewritable). – Craig Tullis Dec 09 '14 at 17:12
-
Here is a recipe for Linux. Download an iso-file (e.g c.iso) and mount it to see the files inside. Now run "shred -z c.iso" as root to zero out the iso file while it is still mounted. Then run "free && sync && echo 3 > /proc/sys/vm/drop_caches && free" to clear caches (http://unix.stackexchange.com/a/87909/4869). Now look again... – Thorbjørn Ravn Andersen Dec 09 '14 at 17:13
-
Yeah, *that's* easier than just popping a physical disc out of the drive and throwing it away. Or replacing it with a version that does something nasty. :-) Seriously, though, this is still the equivalent of removing the physical disc. In no way does this enable surgical deletion of a single file out of the image. – Craig Tullis Dec 09 '14 at 17:16
-
By the way @ThorbjørnRavnAndersen, you do realize that I upvoted your answer just a minute or after you originally posted it, right? There isn't one magical, universally right answer, to this or essentially any other problem. Peace. ;-) – Craig Tullis Dec 09 '14 at 17:18
-
You asked for documentation/proof. You got it. End of discussion. – Thorbjørn Ravn Andersen Dec 09 '14 at 17:21
-
@ThorbjørnRavnAndersen No, you sidestepped the question. The actual request was to document how you would go in surgically and alter or delete a file from the ISO 9660 image. You failed to do that. *Of course* you can overwrite the ISO image. I said as much from the very start. But that's really no different from popping a physical CD-ROM disc out of a physical drive and tossing it in the dumpster. Honestly, the idea of dedicating an actual, physical CD-ROM drive to one file is a little silly. Thanks for playing. Thumbs down for stomping your feet, though. – Craig Tullis Dec 14 '14 at 21:25