18

What is the best way to kill zombie processes and processes in the D state with a single command?

vnix27
    Related: https://unix.stackexchange.com/questions/5642/what-if-kill-9-does-not-work/70372#70372 – lepe Dec 04 '18 at 03:37

5 Answers

25

Double tap.

Actually, reboot. There's no real way to easily get rid of a zombie, but there's also no pressing reason to: a zombie isn't consuming resources, it's just a leftover entry in the process table. Its parent is supposed to reap it (and if the parent has died, init inherits and reaps it), but something went wrong in that process. http://en.wikipedia.org/wiki/Zombie_process

Perhaps you're asking because there's a worse problem... are you getting a boatload of zombies roaming your process table? That usually means a bug in the program, or a problem with its configuration. You shouldn't have a huge number of zombies on the system. One or two I wouldn't worry about. If you have fifty of them from Apache or some other daemon, you probably have a problem. But that's not directly related to your question...
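If you do suspect a misbehaving daemon, a quick census helps. This is a sketch using standard ps options: it counts zombies grouped by parent PID, so you can see which parent is leaking them.

ps -eo ppid=,stat= | awk '$2 ~ /^Z/ { n[$1]++ } END { for (p in n) print n[p], "zombie(s) under parent PID", p }'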

Bart Silverstrim
19
/sbin/reboot

You can't kill a zombie - it's already dead.

If the parent (the PPID) still exists, then terminating it can often clean up the spawned zombies.
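For a single zombie, you can look up that parent and nudge it before terminating anything. A sketch: the PID 12345 is a placeholder, and SIGCHLD only helps if the parent actually handles it.

ps -o ppid= -p 12345                        # print the zombie's parent PID
kill -s SIGCHLD "$(ps -o ppid= -p 12345)"   # ask the parent to reap its child; kill the parent if that fails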

You shouldn't be killing processes in uninterruptible sleep - usually this means they're blocked on I/O, but IIRC it can also occur during a blocking read from, e.g., a network socket.
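To see where such a process is actually stuck, you can check its wait channel, or on Linux (as root) its kernel stack; 12345 is again a placeholder PID:

ps -o pid,stat,wchan:32 -p 12345   # the kernel function the process is sleeping in
cat /proc/12345/stack              # full kernel stack trace (Linux, root)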

quanta
symcbean
14

Errors in the underlying filesystem or disk can leave processes blocked on I/O. In this case, try "umount -f" on the filesystem they depend upon - this will abort whatever outstanding I/O requests are open.
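For a mount backed by an unreachable NFS server, for instance, the forced unmount (and the lazy fallback, if force fails) might look like this; /mnt/nfs is a hypothetical mount point:

umount -f /mnt/nfs   # force: abort outstanding requests against the dead server
umount -l /mnt/nfs   # lazy: detach the mount now, clean up once nothing uses it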

Arie Skliarouk
1

The following line targets all zombies. Note that in "ps -xal" output the fourth field is the PPID, so this actually sends SIGKILL to the parents of the defunct processes; once a parent is gone, init adopts the zombies and reaps them. The bracketed grep pattern stops grep from matching its own command line:

ps -xal | grep '[d]efunct' | awk '{ system("kill -9 " $4) }'
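A gentler variant (my own sketch, not part of the answer above) first asks each parent to reap its children by sending SIGCHLD, rather than reaching straight for kill -9:

ps -eo ppid=,stat= | awk '$2 ~ /^Z/ { print $1 }' | sort -u | xargs -r kill -s SIGCHLD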
Swisstone
0

Most answers focus on zombies; this one targets the D state.

Processes that are stuck in "uninterruptible sleep" show up in a process list as being in state D.

In my experience, these are frequently blocked on I/O in some way. If you've been reading files from a mounted network disk and the remote host has restarted or network connectivity has failed in some way, then the copy process can be stuck waiting indefinitely.

The only fixes are to kill the process, or to run umount -f /mnt/source to unmount the disk, at which point the process will see that it has gone and fail cleanly. Obviously the I/O action will be incomplete too, likely leaving a partial file copy.

Relatedly, tar and rsync processes can be in this state while running over SSH - they are literally blocked waiting for the network to catch up. The point is that state D is not necessarily bad, so killing every such process may not be an ideal solution.
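To see which processes are in state D and roughly what they are blocked on (the wchan column shows the kernel function they are sleeping in), a sketch using standard procps options:

ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /D/'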

Criggie