21

I am thinking about putting my whole Linux server under version control using git. My reasoning is that this might be the easiest way to detect malicious modifications/rootkits. All I naively think is necessary to check the integrity of the system: mount the Linux partition every week or so using a rescue system, check that the git repository is still untampered with, and then issue a git status to detect any changes made to the system.

Apart from the obvious waste in disk space, are there any other negative side-effects?

Is it a totally crazy idea?

Is it even a secure way to check against rootkits, since I would most likely have to exclude at least /dev and /proc?
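
A rough sketch of the weekly check I have in mind, assuming the server's root filesystem is on /dev/sda1 (the device name is just an example) and the rescue system mounts it read-only:

# mount -o ro /dev/sda1 /mnt   # mount the server's root filesystem read-only
# cd /mnt
# git fsck --full              # verify that the repository's object store is intact
# git status                   # list files that differ from the last commit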

Tobias Hertkorn
  • 359
  • 5
  • 12
  • 5
    I would vote for "a totally crazy idea", too many implications. Changes to files will occur all the time and will make upgrade procedures a nightmare. – forcefsck Mar 17 '11 at 15:46
  • @forcefsck - why would file changes occur all the time? Shouldn't they just occur during a system upgrade? – Tobias Hertkorn Mar 17 '11 at 16:13
  • 1
    Just a thought, why not use something like [dirvish](http://www.dirvish.org/) or rsync with --link-dest? If you use dirvish to make your backups it will give you a nice report for each backup showing what changed. You can rsync in --dry-run mode to compare the current state against your backup. If you use dirvish you will be using a tool that is well-tested as a backup system. – Zoredache Mar 17 '11 at 16:37
  • I think this is a great idea, and I'm gonna try it out. I'm sure there will have to be some `.gitignore` surgery, but in the end it could be super useful considering how mature the git toolset is. – John DeBord Sep 21 '20 at 06:10

6 Answers

17

That's a "Bad Idea" (tm). Aside from anything else, your repository will run slow as all heck and get worse as every revision is kept.

Try centralised management, like Puppet / CFEngine / Chef. That'll keep things as you expect and revert unexpected changes.

Combine that with something like iwatch to get emails of unauthorised file alterations.

Combine that further with rpm/deb files if needed to roll out custom applications.

Throw in something like rkhunter or chkrootkit now and then for kicks and you should be good to go.
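
For example, something along these lines run from cron now and then would cover the scanner part (the exact flags are only a suggestion):

# rkhunter --update        # refresh rkhunter's test data
# rkhunter --check --sk    # run all checks without waiting for keypresses
# chkrootkit -q            # quiet mode, only report suspicious findings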

Job done.

voretaq7
  • 79,345
  • 17
  • 128
  • 213
Sirex
  • 5,447
  • 2
  • 32
  • 54
  • +1 for Bad Idea -- Centralized management (puppet/cfengine/chef/radmind/etc.) will give you the ability to ensure that your system is configured according to your defined requirements, and most can also be used as a "tripwire" type system to tell you when stuff changes that shouldn't have. – voretaq7 Mar 17 '11 at 15:56
  • I think it's really a bad idea. OK, you can see which files are new and changed. But if you run git it needs a lot of CPU to unpack and calculate your files. If you do this on the whole machine it takes a lot of time. – René Höhle Mar 17 '11 at 15:57
  • 1
    I'll throw another one on that list; [etckeeper](http://kitenet.net/~joey/code/etckeeper/) will do the git repo, except only for /etc. – Shane Madden Mar 17 '11 at 15:59
  • Repositories like git are incredibly fast. And I am not worried about CPU or IO since the server I have in mind has scheduled downtimes (hence the possibility to mount it using a rescue system). – Tobias Hertkorn Mar 17 '11 at 16:08
  • What I don't understand: how do the centralised management systems guarantee that there are no false positives - the system is compromised + the tools used to check the system are compromised = false positive – Tobias Hertkorn Mar 17 '11 at 16:12
  • 1
    Git is fast, but not on really large sets of files; I have seen it get very slow. I was trying to use it on the entire root web folder for a web site with a lot of content. It was taking 2-3 minutes just to do a status or commit. – Zoredache Mar 17 '11 at 16:41
5

Another alternative is to set up tripwire, which is GPL'ed software that spiders through all the important files on your system and determines which have changed in ways you have defined as unacceptable. Change can be defined as simply as mtime, through inode number, all the way to cryptographically-strong checksums.

It takes some setting up and tuning if you don't want to get a whole lot of reports every night about changed files in /var/run, changes in DHCP client files in /etc, and the like, but if you do go to that trouble, it can be very helpful indeed.

The database of file properties is signed with a key not known to the machine, which helps you have confidence that no tool has maliciously changed the database or the tripwire binaries. For complete certainty you can burn a copy of the tripwire tools and databases to a read-only medium, which can be mounted on the server and used to verify all changes since the disc was burned, if a complete forensic analysis is needed.

If you're going to do this, it's quite important to get tripwire set up and running before the machine is deployed into production, or you can never be completely sure that some malicious user didn't have a chance to infect the machine before it was tripwired.
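
If it helps, the basic cycle with Open Source Tripwire looks roughly like this (assuming the distribution's default policy, key and database locations):

# tripwire --init     # build the baseline database from the current policy
# tripwire --check    # compare the filesystem against that baseline and write a report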

MadHatter
  • 78,442
  • 20
  • 178
  • 229
3

I don't think this is likely to work, but as an experiment I'd like to see what happens if you do this with just the /etc folder. That's where most of the configuration information is kept.
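
A minimal sketch of that experiment, run as root (the only thing it adds to the system is the /etc/.git directory):

# cd /etc
# git init                           # the repository lives entirely in /etc/.git
# git add -A
# git commit -m "baseline of /etc"
# git status                         # run later: lists anything that changed since the baseline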

Joel Coel
  • 12,910
  • 13
  • 61
  • 99
2

@Sirex provided a very good answer already, but if you want to go a step further with security, the best approach is prevention first, then detection.

Try setting up a system with the / filesystem mounted read-only. Make /tmp a separate ramfs mounted with the noexec,nodev options. For the system to work, you really only need /var to be mounted read-write. So under /var mount an fs with rw,noexec,nodev and remove write permissions from /var/tmp (afaik, it is rarely needed by daemons, and that should be configurable). Also use a security patch for your kernel to further limit access to resources by users; try grsec, for example. Use a firewall with the most restrictive rules possible.
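
As a rough illustration of that mount layout (the device name, sizes and the tmpfs stand-in for ramfs are assumptions, not a tested recipe):

# mount -o remount,ro /                                  # root filesystem read-only
# mount -t tmpfs -o noexec,nodev,size=256m tmpfs /tmp    # separate, non-executable /tmp
# mount -o rw,noexec,nodev /dev/sda3 /var                # only /var stays writable
# chmod a-w /var/tmp                                     # drop write permissions as suggested above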

Some distributions provide extensive documentation on system hardening; your distribution's security handbook is a good place to start.

forcefsck
  • 351
  • 1
  • 9
2

I think it's a good idea to analyse the changes a tool makes in your system:

  1. install a bare Linux in a VM
  2. initialise the root git
  3. install the tool you want to analyse
  4. see all changes the tool made in your system

... Delete the VM

You would have to add a lot of folders to the .gitignore file though, like /proc and so on.
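
A condensed sketch of steps 2 to 4 inside the VM (the .gitignore entries below are only the obvious candidates, and the package name is made up):

# cd /
# printf '/proc/\n/sys/\n/dev/\n/run/\n/tmp/\n' > .gitignore   # ignore pseudo and volatile filesystems
# git init
# git add -A
# git commit -m "clean baseline"
# apt-get install some-tool   # the tool you want to analyse (hypothetical package name)
# git status                  # every file the installation created or modified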

rubo77
  • 2,282
  • 3
  • 32
  • 63
1

For situations where you are interested in keeping only certain folders across the entire filesystem under version control, the following approach might work:

First, create a Git repository at the / level:

$ cd /
# git init

Then create a /.gitignore that whitelists only certain folders, in this example only /path/to/versioned/config/folder/ (based on https://stackoverflow.com/a/11018557/320594):

/*
!/path/
/path/*
!/path/to/
/path/to/*
!/path/to/versioned/
/path/to/versioned/*
!/path/to/versioned/config/
/path/to/versioned/config/*
!/path/to/versioned/config/folder/
!/.gitignore

Then create a first commit:

# git add -A
# git commit -m "Initial commit"

And then add the additional folders that you want under version control on demand.

PS:

In addition to the previous method, if you need to keep /etc/ under version control, you might prefer to use etckeeper (https://etckeeper.branchable.com/) for versioning that specific folder as it is more specialized for that purpose (e.g. it commits automatically after installing packages).
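
Getting started with etckeeper is usually as simple as this, assuming it was installed from your distribution's packages (some packages already run the init step for you):

# etckeeper init                       # creates the git repository in /etc/.git
# etckeeper commit "initial commit"    # records the current state of /etc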

Jaime Hablutzel
  • 416
  • 4
  • 10