Is there a way in Linux to send a signal or otherwise install a handler to be called when a process surpasses a given rate of hard page faults per second?

A simple SIGSTOP would have avoided many of the accidental crashes (swap death) I have had, but I imagine there may be false positives if, say, the process uses memory-mapped files.

foober

1 Answer

To check for processes with a high rate of page faults per second:

pidstat -r

The interesting column is majflt/s (the total number of major faults the task has made per second, i.e. those which required loading a memory page from disk). From there it is up to you to decide what to do with the processes, or to filter out the ones that can be safely stopped.
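
If you want to automate that, a rough, untested sketch of the polling approach could look like the program below: it reads the cumulative majflt counter from /proc/<pid>/stat (field 12, see proc(5)) once a second and sends SIGSTOP when the per-second increase crosses a limit. The read_majflt helper, the <pid>/<limit> arguments and the one-second interval are illustrative choices, not anything standard:

/* Sketch: stop a process whose major-fault rate exceeds a limit.
 * Usage: majflt-watch <pid> <max-majflt-per-second>  (names are made up). */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* majflt is field 12 of /proc/<pid>/stat; the comm field may contain
 * spaces, so count fields starting after the last ')'. */
static long read_majflt(pid_t pid)
{
    char path[64], buf[4096];
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';

    char *p = strrchr(buf, ')');
    if (!p)
        return -1;
    char *tok = strtok(p + 1, " ");      /* field 3: state */
    for (int i = 3; tok && i < 12; i++)  /* advance to field 12: majflt */
        tok = strtok(NULL, " ");
    return tok ? atol(tok) : -1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <max-majflt-per-second>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    long limit = atol(argv[2]);

    long prev = read_majflt(pid);
    while (prev >= 0) {
        sleep(1);
        long cur = read_majflt(pid);
        if (cur < 0)
            break;                       /* process went away */
        if (cur - prev > limit) {
            fprintf(stderr, "pid %d: %ld major faults in the last second, sending SIGSTOP\n",
                    (int)pid, cur - prev);
            kill(pid, SIGSTOP);
            break;
        }
        prev = cur;
    }
    return 0;
}

Whether SIGSTOP is the right reaction is, again, up to you; logging or alerting may be safer for processes that cannot simply be paused.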

rsl
  • Yeah, I know how to get that information: it's all in /proc/pid/stat; my goal was more to avoid polling every pid every second and to register a callback at the kernel level. Something like /proc/sys/kernel/core_pattern that lets you specify a program to handle core dumps. Or maybe something easily hacked up with kprobes. (But I didn't know that particular tool, so thank you.) – foober May 06 '11 at 10:23
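
Regarding the "callback at the kernel level" part of the comment above: one mechanism that avoids polling is perf_event_open(2) with the PERF_COUNT_SW_PAGE_FAULTS_MAJ software event and signal-on-overflow via fcntl(F_SETSIG) plus PERF_EVENT_IOC_REFRESH. Below is a minimal, untested sketch of that idea; it assumes a kernel with perf events enabled and /proc/sys/kernel/perf_event_paranoid permissive enough to monitor the target pid, it fires a signal every N major faults rather than enforcing a true faults-per-second rate, and the <pid>/<N> arguments are illustrative:

/* Sketch: get a SIGIO in this process every N major faults of <pid>,
 * then SIGSTOP the target from the handler. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/perf_event.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_fd = -1;
static pid_t target;

/* glibc provides no perf_event_open() wrapper, so call the syscall directly. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
                            int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* Runs once per sample_period major faults in the target. */
static void on_overflow(int sig)
{
    (void)sig;
    kill(target, SIGSTOP);                      /* or just log/alert */
    ioctl(perf_fd, PERF_EVENT_IOC_REFRESH, 1);  /* re-arm for the next overflow */
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <majflts-per-signal>\n", argv[0]);
        return 1;
    }
    target = (pid_t)atoi(argv[1]);

    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = PERF_TYPE_SOFTWARE;
    attr.config = PERF_COUNT_SW_PAGE_FAULTS_MAJ;
    attr.sample_period = strtoull(argv[2], NULL, 10);
    attr.disabled = 1;                 /* armed below by PERF_EVENT_IOC_REFRESH */

    perf_fd = (int)perf_event_open(&attr, target, -1, -1, 0);
    if (perf_fd < 0) {
        perror("perf_event_open");     /* check perf_event_paranoid / permissions */
        return 1;
    }

    /* Deliver SIGIO to this process on counter overflow. */
    signal(SIGIO, on_overflow);
    fcntl(perf_fd, F_SETFL, fcntl(perf_fd, F_GETFL) | O_ASYNC);
    fcntl(perf_fd, F_SETSIG, SIGIO);
    fcntl(perf_fd, F_SETOWN, getpid());
    ioctl(perf_fd, PERF_EVENT_IOC_REFRESH, 1);

    for (;;)
        pause();                       /* everything happens in the handler */
}

The handler re-arms the event with PERF_EVENT_IOC_REFRESH, so each further batch of N major faults produces another signal until the watcher exits.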