8

I have a requirement to write files to a Linux file system that cannot be subsequently overwritten, appended to, updated in any way, or deleted. Not by a sudo-er, root, or anybody. I am attempting to meet the requirements of the financial services regulations for recordkeeping, FINRA 17A-4, which basically requires that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices.

Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after they are written? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.?

I understand that I can set readonly permissions, and the immutable attribute. But of course I expect that a root user would be able to unset those.

I considered storing data to small volumes that are unmounted and then remounted read-only, but then I think that root could still unmount and remount as writable again.
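
For illustration, a minimal sketch of that approach (the volume and file names are hypothetical) and of why root defeats it:

# write the records, then flip the volume to read-only
mount /dev/vg0/records /srv/records
cp /tmp/trade-blotter.csv /srv/records/
mount -o remount,ro /srv/records

# ...but anyone with root can simply flip it back and remove anything
mount -o remount,rw /srv/records
rm /srv/records/trade-blotter.csv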

I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that makes a good starting point, and then put in place a carefully configured Linux server that acts as this type of network storage device and does nothing else.

After all of that, encryption on the files would be useful too!

phil_ayres
  • What you are asking can't be done. If you have root access to the machine, you can do block-level operations directly on the disk. So it doesn't matter what filesystem is on top, you can't protect anything from root, you can only slow it down or make it so obscure it seems secure. – Regan Oct 25 '13 at 22:50
  • After reading the SEC interpretation http://www.sec.gov/rules/interp/34-47806.htm I'm going to agree with @Regan. However, this whole thing is slightly absurd. E.g., how does one erase a CD? With fire, of course. – Mark Wagner Oct 25 '13 at 23:10
  • I absolutely agree that the requirements are 'slightly absurd'. They are trying to make it so obvious that there has been an attempt to hide the truth that no IT guy would agree to doing what a no-good exec is asking. Hitting delete on a large directory as root was apparently too easy for somebody. Physical destruction becomes the only way to cover things up in the SEC's rules. – phil_ayres Oct 27 '13 at 18:42
  • chattr +i filename; you need to give this command every time you create a file – c4f4t0r Jan 12 '14 at 23:15
  • @c4f4t0r doesn't stop: `chattr -i filename` then rm – phil_ayres Jan 12 '14 at 23:40
  • @phil_ayres there is a typo: chattr +i isn't the same as chattr -i. touch file ; chattr +i file ; rm file gives "rm: cannot remove `file': Permission denied", and echo "hello world" > file gives "bash: file: Permission denied" – c4f4t0r Jan 12 '14 at 23:56

4 Answers

2

You can sort of do this with OpenAFS and read-only volumes. It's a lot of infrastructure to install to make it work, however, and it might not meet the requirements.

http://www.openafs.org/

Basically, there is a writeable volume and one or more read-only copies of the volume. Until you release the writeable volume, the read-only copies are unchangeable to clients. Releasing the volume requires admin privileges.
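
As a rough sketch of that workflow (the server, partition and volume names are hypothetical, and authentication is omitted):

# create a read/write volume and add a read-only replication site for it
vos create afs1.example.com /vicepa records
vos addsite afs1.example.com /vicepa records

# clients see the read-only copy; it only changes when an AFS admin runs:
vos release records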

It seems like any solution would require either specialized hardware or a network file system that duplicates the semantics of specialized hardware.

1

It seems that there is no way to do this without writing custom file system / kernel code.

A viable solution appears to be to use Amazon Glacier with its Vault Lock WORM archive storage option. According to the AWS official blog at: https://aws.amazon.com/blogs/aws/glacier-vault-lock/

[...] a new Glacier feature that allows you to lock your vault with a variety of compliance controls that are designed to support this important records retention use case. You can now create a Vault Lock policy on a vault and lock it down. Once locked, the policy cannot be overwritten or deleted. Glacier will enforce the policy and will protect your records according to the controls (including a predefined retention period) specified therein.

You cannot change the Vault Lock policy after you lock it. However, you can still alter and configure the access controls that are not related to compliance by using a separate vault access policy. For example, you can grant read access to business partners or designated third parties (as sometimes required by regulation).
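
As a rough illustration of what locking a vault looks like with the AWS CLI (the vault name is hypothetical, and the exact policy JSON format should be taken from the AWS documentation):

# vault-lock-policy.json: a deny policy on glacier:DeleteArchive for archives
# younger than the retention period, using the glacier:ArchiveAgeInDays
# condition key (see the AWS docs for the exact JSON wrapping)

# attach the policy; this returns a lock ID and starts a 24-hour test window
aws glacier initiate-vault-lock --account-id - --vault-name records \
    --policy file://vault-lock-policy.json

# make the lock permanent within those 24 hours; after this the policy
# cannot be changed or removed
aws glacier complete-vault-lock --account-id - --vault-name records \
    --lock-id EXAMPLE-LOCK-ID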

For me, this provides exactly what is needed without the expense of NetApp or EMC hardware, while appearing to meet the record retention requirements.

phil_ayres
  • There is no logical difference from my solution. The server administrator, in this case Amazon, can still erase or tamper with some or all of the files. The only difference here is the file storage provider...? – nrc Oct 21 '16 at 14:02
  • You have it exactly right in your assumption that the storage provider is the real difference. With an in-house server administrator, the regulator believes they can be manipulated by a more senior person in the same organisation to delete or alter records. Of course, you could ask somebody at Amazon to destroy everything, but the assumption is that there will be a paper trail and there is a better chance an unexpected request would be rejected. Not quite as good as formal escrow, but separating responsibilities provides much of the protection that is needed. – phil_ayres Oct 31 '16 at 05:10
  • You can still delete the files by ceasing to pay for the storage. – TZubiri Feb 29 '20 at 23:28
0

If you simply need to access files from a system on which users cannot overwrite them, you can mount a remote volume on which you have no write permission. The easiest way to do this is to mount a read-only samba/cifs share.
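
For example, a minimal sketch (the share name and credentials are hypothetical):

# mount the share read-only; the server denies writes, so even local root
# cannot modify the files, although the file server's administrator still can
mount -t cifs -o ro,username=records,password=secret \
    //fileserver.example.com/archive /mnt/archive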

Otherwise, if you need to allow users to write new files (that cannot then be overwritten or modified), a solution is to mount an FTP path with FUSE and curlftpfs.

You can set your proftpd directory with these directives:

AllowOverwrite off
<Limit WRITE>
  DenyAll
</Limit>
<Limit STOR>
  AllowAll
</Limit>

In this way new files can be stored in the mounted directory, but they can no longer be modified or removed.
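
A rough sketch of the client side (the host, credentials and paths are hypothetical):

# mount the FTP store via FUSE
curlftpfs ftp://writer:secret@ftp.example.com/records /mnt/worm

cp trade-blotter.csv /mnt/worm/    # allowed: STOR of a new file
rm /mnt/worm/trade-blotter.csv     # refused: DELE falls under the WRITE limit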

links: CurlFtpFS, ProFTPD

nrc
  • I get what you are saying, and it certainly seems to be an option. But if I'm the administrator of the file server I can delete anything. The aim is to prevent even administrators (at least those without access to the physical drives) from deleting files. – phil_ayres Jan 11 '14 at 19:35
  • The FTP server acts as a cheap WORM device. But yes, the administrator of the remote FTP server can access the files and alter them. A solution is to sign each file on its creation with an asymmetric key system, so that no system administrator can mess with the files undetected (a minimal signing sketch follows these comments). An administrator can still erase the files, but can no longer modify a file without it being noticed. – nrc Jan 11 '14 at 19:59
  • Unfortunately just signing the file to demonstrate (lack of) tampering is insufficient according to the SEC regs. Hence the question about making the files completely immutable. – phil_ayres Jan 12 '14 at 21:05
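
For reference, a minimal sketch of the signing approach described in the comments (the key ID and file name are hypothetical); as noted, it only makes tampering detectable, it does not prevent deletion:

# create a detached signature for each record as it is written
gpg --detach-sign --local-user records-signing-key trade-blotter.csv

# later, anyone holding the public key can verify the record is unmodified
gpg --verify trade-blotter.csv.sig trade-blotter.csv
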
0

This is a variation on the "Infallible backup" problem, and the only way to implement it is with multiple remote WORM file systems that use and share checksums and do not have shared physical or administrative access. This ensures everything is write-once, duplicated, integrity-provable and, in the case of a single block being erased, changed or corrupted, recoverable.
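
A rough sketch of the checksum-and-replicate part (hostnames and paths are hypothetical; this is not a complete solution by itself):

# build a checksum manifest for the records and push records plus manifest
# to an independently administered remote store
sha256sum /srv/records/* > /srv/records-manifest.sha256
rsync -a /srv/records /srv/records-manifest.sha256 auditor@remote.example.com:/worm/

# any later alteration of a record on this host shows up as a mismatch
sha256sum -c /srv/records-manifest.sha256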

Plan9 or its derivatives may implement all of the required features. See Plan9 and Venti.