
I have a home server that was compromised recently; it has been used to mine cryptocurrency.

I have not stopped anything yet apart from locking SSH down to my user only. The processes are still running, and I want to 1/ understand how they got in and 2/ make sure that everything they changed is reverted.

I am using Debian 9 and everything is up to date.

I know they used some kind of weakness around (or masquerading as) PostgreSQL:

  • CPU usage jumped through the roof on feb 3rd 18:00
  • htop reveals the culprit runs as the postgres user
  • last shows that some weird IPs made it through as the postgres user
  • I then searched for all these IPs in my logs and found them in auth.log and fail2ban.log
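
For the record, here is roughly how I cross-checked them (a sketch; 203.0.113.5 is a placeholder for one of the addresses reported by last):

    # one of the suspicious addresses from `last` (placeholder)
    BADIP=203.0.113.5
    last -i | grep -F "$BADIP"                                # sessions opened from that address
    grep -F "$BADIP" /var/log/auth.log /var/log/fail2ban.log  # current logs
    zgrep -F "$BADIP" /var/log/auth.log.*.gz /var/log/fail2ban.log.*.gz 2>/dev/null  # rotated logs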

Here is what auth.log says about postgres or these IPs:

  • jan 31st 02:18 authentication failure
  • jan 31st 10:11 successful su for postgres by root
  • jan 31st 10:29 password changed for postgres
  • feb 1st 14:26 password accepted for postgres over ssh
  • feb 1st 17:11 password changed for postgres

Then there are a lot of SSH connections and things that I have yet to discover. They installed the miner in /var/tmp/ / (the directory name is a space character, to pretend not to be there), which contains two binaries and a config file with the remote crypto wallet to mine for (I guess). The rest has yet to be found, and this is the second part of my question.
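
For anyone hunting for the same trick: a directory whose name is a single space is easy to miss with a plain ls, but something like this (a sketch) makes it stand out:

    # GNU ls: quote names, so a directory literally called " " becomes visible
    ls -laQ /var/tmp
    # or list entries whose names contain whitespace
    find /var/tmp -maxdepth 2 -name '*[[:space:]]*' -ls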

Now for the first part: I want to understand how they got in so easily over SSH with the postgres user.

I remember I made some changes around postgres recently: I did some cleanup of the server and removed mongodb and postgresql because I didn't use them anymore.

Here's my apt log for january 31st:

  • 10:04 ran apt upgrade, the following postgres packages were upgraded:
    • postgresql-common:amd64 (181, 181+deb9u1)
    • postgresql-client-9.6:amd64 (9.6.4-0+deb9u1, 9.6.6-0+deb9u1)
    • postgresql-9.6:amd64 (9.6.4-0+deb9u1, 9.6.6-0+deb9u1)
    • postgresql:amd64 (9.6+181, 9.6+181+deb9u1)
    • postgresql-contrib-9.6:amd64 (9.6.4-0+deb9u1, 9.6.6-0+deb9u1)
    • postgresql-client-common:amd64 (181, 181+deb9u1)
  • 10:26 upgrade ended
  • 10:27 purge mongodb
  • 10:34 purge postgresql
  • 10:35 autoremove (nothing related to postgres)

The next apt action after that is not until February 7th at 15:19, when I ran a purge on postgresql-*.

I did that because I had received email alerts for root that I hadn't paid attention to:

* SECURITY information for myserver.xxx.net *

postgres to root Feb 1

myserver.xxx.net : Feb 1 20:50:43 : postgres : user NOT in sudoers ; TTY=pts/1 ; PWD=/ ; USER=root ; COMMAND=/bin/bash

This happened twice more, on feb 3rd 17:37 and 4th 09:43.

On Feb 7th I noticed the CPU was high (just with htop) and saw postgres processes. I thought I had forgotten to stop the service before removing it and believed these were some kind of old rogue processes. I kill -9'd them (without checking whether that actually killed them) and purged all postgresql packages (as seen in the apt log).

The emails were actually triggered when they were logged in as postgres on the server and tried to sudo to root. The auth log shows a lot of su attempts, but I don't think they succeeded. I have set a crazy password for root that I don't even know; I know I can edit the kernel command line in case I really need root should I be locked out. My main user is in the sudoers and I log into the server using SSH keys (my user password is also crazy).
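
For reference, "locking SSH to my user only" amounts to something like this in /etc/ssh/sshd_config (a sketch; myuser is a placeholder for my actual account):

    # /etc/ssh/sshd_config -- then: systemctl reload ssh
    AllowUsers myuser            # every other account, including postgres, is refused
    PasswordAuthentication no    # key-based logins only
    PermitRootLogin no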

So tonight I was having a look at the CPU/RAM/disk graphs in my monitoring application, saw that the CPU went straight to 100%, and again found postgres with a weird process:

postgres 64174  0.0  0.0  65000  6224 ?        Ss   22:19   0:00 /lib/systemd/systemd --user
postgres  1219  193  0.1 266620 13276 ?        Sl   22:23  28:38 (sd-pam)                                                                                                                                                                                                                                                        -c yamr.cfg

(notice that between (sd-pam) and the yamr.cfg argument there are something like 100 spaces)

I quickly found the weird files in /var/tmp with a simple `ls -l /proc/1219/exe`. That's how I found out where their files were.
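
For reference, the padded (sd-pam) name only fools the COMMAND column of ps/htop; /proc still tells the truth (a sketch against the PID above):

    PID=1219
    ls -l /proc/$PID/exe /proc/$PID/cwd        # real binary and working directory
    tr '\0' ' ' < /proc/$PID/cmdline; echo     # full command line, NUL separators shown as spaces
    ls -l /proc/$PID/fd                        # open files and network sockets
    head /proc/$PID/status                     # name, state, UIDs/GIDs, parent PID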


So let's recap:

  • jan 31 10:04: maintenance apt upgrade
  • jan 31 10:11: root logs in as postgres (I believe this is normal)
  • jan 31 10:26: maintenance ends
  • jan 31 10:29: password change for postgres: what?????
  • jan 31 10:34: purge postgresql package
  • feb 01 14:26: they log in over SSH on first try: WTF???
  • feb 01 17:11: password changed for postgres: of course they did.
  • feb 03 17:29: log in over ssh
  • feb 03 17:37: sudo attempt (I got an email alert)
  • feb 03 17:39: loads of su attempts which all seem to have failed (for 2 minutes, they seem to be manual: several seconds between attempts)
  • feb 03 18:00: configuration file for the miner is created and CPU usage jumps to 100%

I believe I have done everything I know of to understand what happened and what kind of access they had.
I believe that an interactive shell as postgres is not that bad; I don't think they could do anything harmful to the server. The postgres user belongs to the postgres and ssl-cert groups. I couldn't find any interesting information about the ssl-cert group.
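
In case it helps, this is how I would check what membership in ssl-cert actually grants (a sketch; on Debian it usually only means read access to keys under /etc/ssl/private):

    getent group ssl-cert                          # who is in the group
    ls -ld /etc/ssl/private                        # typically root:ssl-cert, mode 710
    find / -xdev -group ssl-cert -ls 2>/dev/null   # everything that group can reach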

So my questions:

  1. How could they get in?
    1. Is there some kind of default stupid password for postgres? Why the interactive shell and not /bin/false or /usr/sbin/nologin?
    2. Why was the postgres password changed after the end of my apt upgrade?
    3. Why wasn't the postgres user removed when I purged the postgresql package?
  2. How can I tell what they did apart from going into /var/tmp/? Here are the files that are owned by postgres (see also the sketch after the list): is simple removal enough?

    /tmp/rootshell
    /tmp/...
    /tmp/.../yam
    /tmp/.../h64
    /tmp/.../yam.cfg
    /tmp/ntfs_sploit.osyvgu
    /tmp/ntfs_sploit.osyvgu/volume
    /tmp/ntfs_sploit.osyvgu/lib
    /tmp/ntfs_sploit.osyvgu/lib/modules
    /tmp/ntfs_sploit.osyvgu/lib/modules/4.9.0-4-amd64
    /tmp/ntfs_sploit.osyvgu/mountpoint
    /tmp/ntfs_sploit.osyvgu/modprobe.d
    /tmp/ntfs_sploit.osyvgu/modprobe.d/sploit.conf
    /tmp/libhax.so
    /tmp/.ssh_bak
    /run/user/111
    /run/user/111/bus
    /run/user/111/gnupg
    /run/user/111/gnupg/S.gpg-agent
    /run/user/111/gnupg/S.gpg-agent.ssh
    /run/user/111/gnupg/S.gpg-agent.browser
    /run/user/111/gnupg/S.gpg-agent.extra
    /run/user/111/gnupg/S.dirmngr
    /run/user/111/systemd
    /run/user/111/systemd/private
    /run/user/111/systemd/notify
    /run/user/111/systemd/transient
    /run/screen/S-postgres
    /var/tmp/ 
    /var/tmp/ /yam
    /var/tmp/ /h64
    /var/tmp/ /yamr.cfg
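
This is roughly how I built that list, plus the other places I intend to check before trusting anything on this box again (a sketch; the date matches the Jan 31st upgrade):

    # everything owned by the postgres user or group (stay on the root filesystem)
    find / -xdev \( -user postgres -o -group postgres \) -ls 2>/dev/null

    # anything modified since Jan 31st, to catch files they chowned or dropped elsewhere
    find / -xdev -newermt '2018-01-31' -ls 2>/dev/null

    # common persistence spots for an unprivileged user
    crontab -u postgres -l                                # cron jobs
    cat ~postgres/.ssh/authorized_keys 2>/dev/null        # injected SSH keys
    ls -la ~postgres/.config/systemd/user/ 2>/dev/null    # user-level systemd units
    loginctl user-status postgres                         # lingering user session?

    # a new setuid-root binary would mean the escalation worked after all
    find / -xdev -perm -4000 -type f -ls 2>/dev/null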
    

It seems that the ntfs exploit could be the one described in this post, in which case it's really bad, because they could have injected code into the kernel and my system could no longer be trusted at all.

I can see that the strange 4.9.0-4-amd64 module directory they created also shows up in my `last` output (but it's also the release of my kernel, so I can't tell anything from that):

# last | grep 4.9.0-4-amd64
reboot   system boot  4.9.0-4-amd64    Fri Feb  2 20:56   still running
reboot   system boot  4.9.0-4-amd64    Fri Feb  2 20:50 - 20:51  (00:00)
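
One check I can still do from the running system (a sketch; it only helps if the exploit actually had to load an out-of-tree module, and a module that hides itself from lsmod would not show up):

    # kernel taint flags: 0 means no out-of-tree/unsigned/forced module was ever loaded
    # (non-zero is not proof of compromise by itself and needs decoding)
    cat /proc/sys/kernel/tainted
    # loaded modules that the running kernel package did not ship
    # (legitimate dkms builds from other packages will also show up here)
    comm -23 \
        <(lsmod | awk 'NR>1 {print $1}' | sort) \
        <(dpkg -L linux-image-4.9.0-4-amd64 | grep '\.ko$' \
            | xargs -n1 basename | sed 's/\.ko$//' | tr '-' '_' | sort)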

How bad is this?

edits:

  • according to the project zero submission, Debian stretch is impacted. I have a 2016 version of ntfs-3g installed :(
  • I have tried to run the CVE-2017-0358 exploit as an unprivileged user and it didn't work, so I guess they couldn't get root access.
  • There is a shell at /tmp/rootshell but it does not escalate privileges (the setuid flag is not set, which is what the exploit would have achieved). This shell seems to be rootshell.c (according to strings), a bash wrapper intended to run as root. It does not escape the current user (postgres in their case).
  • I remember now having reset the postgres password:
    • I wanted to know what data was inside the databases, but I didn't know any password to access them
    • So I went online, found a guide to reset the password, and did it
    • I saw that no data inside the databases was worth saving, so I removed postgres
    • From that moment on I assumed that there was no postgres software or user left on the machine; little did I know that the postgres user still existed and, worse, had an interactive shell and was allowed to connect through SSH.
  • I guess that this was the breach they used to get in (see the sketch below).
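
To make that concrete, here is roughly what must have happened, and how I'm closing the door (a sketch; the commands are the generic ones, and the /etc/passwd line is reconstructed from memory):

    # what the "reset the password" guide most likely had me run:
    sudo passwd postgres        # sets a *system* login password for the postgres account
    getent passwd postgres      # ...an account that still has a real shell, e.g.:
    # postgres:x:111:<gid>:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash

    # the database password lives inside PostgreSQL and is changed with SQL instead:
    # sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'new-db-password';"

    # closing the door on the system account:
    sudo usermod -s /usr/sbin/nologin postgres
    sudo passwd -l postgres

With password authentication enabled in sshd, that system password plus the /bin/bash shell amounted to a full remote login.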

1 Answer

I hate to say the obvious, but leaving the box running and online is bad practice. You're opening yourself up to criminal abuse of your network connection. The privilege escalation attack surface is very big; I would be very concerned if they had an unprivileged shell on the box.

At the very least, I would remove the network connections and isolate the machine in a lab environment. There could be a logic bomb running which hoses the machine under particular conditions, but you can't protect yourself against everything.

What web apps are you running? What mining software are you using? Was your postgres listening on the Internet? Do you share passwords between accounts? Some of the things you're wondering about, like what happens to the postgres account when the package is purged, can be reproduced on a VM.
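
For example, on a throwaway Debian 9 VM (a sketch):

    apt-get install postgresql
    getent passwd postgres       # note the shell field: /bin/bash
    apt-get purge postgresql postgresql-9.6 postgresql-common
    getent passwd postgres       # the system account is normally left behind,
                                 # since it may still own data under /var/lib/postgresql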

A well-patched Debian machine is difficult enough to exploit that unless there's a critical 0-day released, nobody would waste the exploit on somebody's personal box.

Look for unique traits and try Googling them:

  • You say you found a wallet? It might give you a clue as to whether the attacker is part of a worm and what methods they might be using.

  • Try to crack the password they're using for Postgres. It might be random, but it might have clues too.

  • Their source IP might also help identify if others are experiencing this.

I would guess that you might have had a vulnerable web application which they used to gain control of postgres. If this is the case, your webserver logs should contain the attacker IPs you saw in your fail2ban logs.
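
Something as simple as this would confirm or rule that out (a sketch; adjust the log paths to whatever web server you run and substitute the real IPs):

    # did the SSH attacker IPs ever touch the web server?
    for ip in 198.51.100.7 203.0.113.5; do   # placeholders for the IPs from auth.log
        zgrep -F "$ip" /var/log/apache2/access.log* /var/log/nginx/access.log* 2>/dev/null
    done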

Is your password guessable or reused anywhere? A reasonably strong, unique password should be impractical to guess or brute-force. The timeouts and fail2ban would make the effort hopeless.

mgjk
  • Thanks a lot for your input. The machine is actually NATed and the only ports that are forwarded are http, https, ssh, ftp, ldap, smtp (TCP) and ipsec (UDP). I'm now 100% sure they didn't use any web app or anything, because their IPs only appear in auth.log and fail2ban.log (also, see my last edit). Then, I'm 90% sure they didn't get root access, because they only attempted one exploit, which failed. It is still very bad, though, because a lot of private files were actually readable by postgres (seriously, WTF is wrong with this user account) – Benoit Duffez Feb 10 '18 at 12:53
  • Can you help me get information from the wallet part? I've never used or read up on cryptocurrencies. The wallet configuration file contains: `threads = 0 mining-params = xmr:av=0&donation-interval=50 mine = stratum+tcp://etnXXXX:yam@188.***.***.***:3333/xmr compact-stats = 1 print-timestamps = 0 ` (private data redacted) – Benoit Duffez Feb 10 '18 at 12:58