How to lock a file against deletion but still make it writeable?

34


I want to lock a file against deletion while keeping it writeable. How do I do this?

The file in question is a Truecrypt volume as a file on a NAS SMB Network share, so I don't want to accidentally delete it.

therobyouknow

Posted 2011-08-23T12:06:51.733

Reputation: 3 596

It's not possible. Writing zeros amounts to deleting it. – soandos – 2011-08-23T12:08:44.180

@soandos – I'm afraid I don't agree. Writing zeros to it still means that the file exists, just full of zeros. Indeed, there are Linux commands to create a 'sparse' file full of zeros. – therobyouknow – 2018-04-05T08:00:34.730

Answers

35

For Windows:

  1. Deny "Delete" permission on the file.
  2. Remove or deny "Delete child items" permission on the parent directory.

For Unix (including OS X):

  1. Remove "Write" permission on the parent directory.
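
The Unix step can be sketched as follows. The directory and filename are made up for illustration, and note that root can still delete the file regardless:

```shell
# Hedged sketch: the paths here are illustrative only.
mkdir -p /tmp/vaultdir
echo "ciphertext" > /tmp/vaultdir/volume.tc

# Remove write permission on the parent directory: entries inside it can
# no longer be created, renamed, or deleted (except by root).
chmod a-w /tmp/vaultdir

# The file itself stays writable, so appending still succeeds:
echo "more ciphertext" >> /tmp/vaultdir/volume.tc

# 'rm /tmp/vaultdir/volume.tc' would now fail with "Permission denied"
# for a non-root user.
```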

Note that this will only prevent the file from being removed (deleted), but won't do anything against accidental truncation or overwriting with garbage. If a file is writable, you can write anything to it, period.

Also, file permissions are next to impossible to translate between operating systems. If the NAS runs Linux, and you try to set permissions from within Windows, the end result may be different from what you expect.

user1686

Posted 2011-08-23T12:06:51.733

Reputation: 283 655

Best answer I will get I think. +1 and thanks. Yes, the NAS is a Lacie 2big running a version of linux on ARM. – therobyouknow – 2011-08-23T12:22:59.397

You don't actually need to Deny delete, just remove Delete from existing access control entries (ACEs). (Deny ACEs make things more complex, usually not a good course in the longer term.) – Richard – 2011-08-23T18:42:49.653

@Richard: AFAIK, removing requires completely disabling ACL inheritance for that file, which makes it even more complex in the end. – user1686 – 2011-08-23T21:57:41.527

I agree with Grawity. Deny Delete permission is the way to go. If the share permission is set to modify, then the deny will still block the deletion, while leaving it blank will allow deletions. – surfasb – 2011-08-24T10:30:25.887

26

In Linux you could create a hard link to it. You can then still write to the file and even "delete" it, but you'll only be removing the reference in your directory. The other hard link will still point to the file's contents, so the data won't actually be gone.

In the Unix world, you don't really "delete" files; you just decrease the number of hard links to them. When nothing else is pointing to a file, its space is considered free and can be reused…
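
A minimal sketch of the idea, with made-up paths (both links must live on the same filesystem, and the NAS's share must support creating hard links):

```shell
# Hedged sketch: filenames are illustrative.
mkdir -p /tmp/linkdemo
echo "ciphertext" > /tmp/linkdemo/volume.tc

# Create a second directory entry pointing at the same inode.
ln /tmp/linkdemo/volume.tc /tmp/linkdemo/volume.tc.keep

# "Deleting" one name only drops the link count from 2 to 1 ...
rm /tmp/linkdemo/volume.tc

# ... so the data is still reachable through the other name.
cat /tmp/linkdemo/volume.tc.keep
```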

woliveirajr

Posted 2011-08-23T12:06:51.733

Reputation: 3 820

Good idea. Don't know if @Rob can create hardlinks on his NAS, but if he can that's a very clever solution. – CarlF – 2011-08-23T19:54:09.920

1+1 I'll could that in combo with the accepted answer if possible. or seperately if not. +1 for thinking outside the box as said. However the linux share in question is a NAS drive, not sure what console facilities are available, its an embedded or semi-embedded form of linux. +1 still though as it might help other folks who have a regular linux desktop or computer acting as a share. – therobyouknow – 2011-08-23T20:34:43.423

The same technique should work with NTFS. – Rotsor – 2011-08-23T20:40:46.597

+1 RBerteig and +1 CarlF for supporting woliveirajr solution. – therobyouknow – 2011-08-23T20:40:47.267

12

Backups. You can't really protect a writeable file from damage even if you can from deletion. Back it up daily.

CarlF

Posted 2011-08-23T12:06:51.733

Reputation: 8 576

+1. Do this no matter what other belts and suspenders are applied. – RBerteig – 2011-08-23T18:19:16.720

+1 CarlF and +1 RBerteig. Totally agree. The files are all backed up onto optical media as well (DVD-R,+R,+R DL and blu-ray 25gb and dl 50gb). I may also consider a second hard drive. – therobyouknow – 2011-08-23T20:38:11.673

I should add that I have backups of the files within the TrueCrypt volume (the container file), but not of the TrueCrypt volume itself. – therobyouknow – 2011-08-27T09:31:20.200

0

On a copy-on-write (COW) file system like btrfs you can achieve this with subvolumes plus snapshots, or with cp --reflink=always. This effectively gives you as many copies of the file as you want while consuming only the space of one, plus some overhead (which, barring an insane number of copies or snapshots combined with tiny file sizes, should not be noticeable). When a copy is modified, only the parts that have changed are stored separately; the rest stays shared.

Then set the permissions on each copy separately: regularly make a snapshot or copy with read-only permissions, and optionally mount it read-only (or not at all, if it's a snapshot). If it's a file and you are paranoid, use chattr +i on one copy, so users can't write to or modify it even if they have write permission.
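
A rough sketch of the cp variant, with made-up paths. A COW filesystem such as btrfs is assumed for actual extent sharing; --reflink=always would fail elsewhere, so --reflink=auto is used here to fall back to an ordinary copy, and chattr +i generally requires root:

```shell
# Hedged sketch with illustrative paths; real extent sharing needs a
# COW filesystem such as btrfs.
mkdir -p /tmp/cowdemo
echo "ciphertext" > /tmp/cowdemo/volume.tc

# Make a space-efficient safety copy and strip its write bits.
cp --reflink=auto /tmp/cowdemo/volume.tc /tmp/cowdemo/volume.tc.snap
chmod a-w /tmp/cowdemo/volume.tc.snap

# For the truly paranoid (root only): make the copy immutable so even
# its owner cannot write to or delete it until the flag is cleared.
# chattr +i /tmp/cowdemo/volume.tc.snap
```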

orange_juice6000

Posted 2011-08-23T12:06:51.733

Reputation: 115

0

In "standard" UNIX, it seems to be impossible to protect a single file from deletion if the directory is writeable. Intuitively, one might expect that clearing the w protection from the mode bits with 'chmod' should protect against deletion, but THIS IS NOT THE CASE. Similarly, in AFS, you cannot protect single files from being deleted, because ACL entries (lacking or denying the relevant 'd' permission) only apply to the directory as a whole.

Klaus Engelhardt

Posted 2011-08-23T12:06:51.733

Reputation: 1

0

In addition to the previous answers, I would consider having a look at SELinux. It lets you define pretty fine-grained restrictions.

Niels Basjes

Posted 2011-08-23T12:06:51.733

Reputation: 536