
So my system is getting DoS'd, or I've just opened a kind-of-huge file (an OpenStreetMap planet.osm). The system goes totally unresponsive... so unresponsive that it takes about 10 minutes to get into my shell, run top, and kill the offending process. I'm ready to hit the reset button.

Question: is it possible to set aside a certain minimum amount of system resources so that, if my system gets pegged, I still have 2% of the CPU (on a 2 GHz system that's about 40 MHz, which should be enough for a shell, right? I mean, it worked in the early '90s) reserved somewhere? Or is there some way to throttle offending processes?

I keep running into situations where I wish the OS would throttle back runaway processes so that I could still use the system, even if it meant a 10% overall performance drop. The ability to act in situations like this, instead of being completely helpless, would be... nice.

bundini

2 Answers


You could write a script that looks for processes running on tty0 or ttyS0 (or wherever you want a priority root login) and sets those processes to a real-time scheduling priority. The script itself should also be started with a real-time priority.
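For instance, a minimal watchdog sketch using chrt from util-linux might look like this (the tty name, poll interval, and priority values are assumptions to tune for your setup):

#!/bin/bash
# Start this script itself under a real-time class, e.g.:
#   sudo chrt -f 60 ./rt-console.sh
while true; do
    # Find every process attached to the rescue console...
    for pid in $(ps -t tty1 -o pid=); do
        # ...and move it to SCHED_FIFO at priority 50.
        chrt -f -p 50 "$pid" 2>/dev/null
    done
    sleep 5
done

Anything you type at that console then outranks ordinary processes in the scheduler, even when the CPU is pegged.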

Getting access to memory during a swap storm is a harder task. You can use Linux cgroups. Alternatively, you can write a program in C instead of a script and use mlockall() to lock its memory into RAM; that program can then use ptrace() to poke into other processes and force them to call mlockall() as well. You could use that to get a bash shell that won't be affected by swap.

Unless you are a programmer, or can find someone who has already written this (I didn't), cgroups are probably the easier way to reserve some high-priority memory.
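As a rough illustration with the cgroup v1 memory controller (the mount point, group name, and 3G cap are assumptions), you can cap the bulk of the workload so there is memory left over for a shell running outside the group:

#!/bin/bash
# Create a memory-capped group and put this shell (and its children) in it.
mount -t cgroup -o memory none /sys/fs/cgroup/memory 2>/dev/null
mkdir -p /sys/fs/cgroup/memory/capped
echo 3G > /sys/fs/cgroup/memory/capped/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/capped/tasks
# Launch the heavy job from this shell; a root login elsewhere
# stays outside the cap and keeps its working set in RAM.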

Zan Lynx

Funny that you say "nice". One solution is to "renice" the offending process so that it won't hog CPU (essentially lowering the priority of the app).

To launch a process with lower priority:

nice <program> &

To change the priority of a running process (note that renice takes a PID, not a program name):

renice 4 <pid>

The niceness scale runs from -20 to 19: 0 is the default, 19 is the lowest priority, and -20 is the highest.
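For example, to start a heavy import at the lowest priority and later demote an already-running process (the command name and PID here are just placeholders):

# start the import at the lowest priority
nice -n 19 osm2pgsql planet.osm &
# later, demote a runaway process by its PID
renice 19 -p 12345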

duffbeer703
  • Unfortunately, if you can't get into a shell, you can't run the command. Is there a way to set a default nice value that processes start with, but then override it for a particular process (like bash)? Or, alternatively, to automatically lower the nice setting on a process that has been using x% CPU for n minutes? – Catherine MacInnes Jan 28 '10 at 04:53
  • Yes, that's the problem. You might not necessarily know what the process is beforehand. Just brainstorming, but there seem to be a couple of obvious heuristics one could develop, e.g. if a process's CPU usage spikes over some level x for a given time t, start applying some kind of negative-feedback renice mechanism. – bundini Jan 28 '10 at 05:00
  • Generally, processes don't run out of control. Find the offenders and launch them with nice. Also, you can set up your login shell to launch bash with a high priority. – duffbeer703 Jan 28 '10 at 23:00
  • @bundini Consider making all processes run by unprivileged users default to being nice (you can use /etc/security/limits.conf for this; see the sketch below). Then, if a process hangs the machine, just drop to a terminal, log in as root, and you will be able to run higher-priority processes. – Vality Apr 09 '14 at 20:57
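A sketch of that limits.conf approach, using the pam_limits priority item (the niceness value is an assumption):

# /etc/security/limits.conf
# Start every process owned by ordinary users at niceness 10, so a
# root login (which defaults to niceness 0) can still outrun them:
*       hard    priority    10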